Posts by savellm

@savellm : how can I do that?
What do I need to put?

@macom : you don't see an error? (Perhaps there isn't one, if you don't see it.)

    I mean just unplug all your other hard drives other than the one you are going to install OMV onto.
So all you will have connected to your server is your boot drive (HDD, SSD, or USB (not recommended)) and the installer.

Try the installation again without any other drives attached.

    Hey guys,

So I just built a new server:
2x E5-2620 v4 (16 cores / 32 threads)
128 GB DDR4 ECC RAM

Main pool: ZFS.
Docker is on an mdadm RAID mirror of 2x 1 TB SSDs, formatted ext4.
That mirror also holds the Docker /config folders and the transcode directory for Plex.

Plex has the same PUID and PGID as the directories, and when trying to transcode it creates the \transcode\Transcode\Sessions directories all on its own with the same permissions.

But transcoding never works: it just stalls, does nothing, and then fails.
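For what it's worth, a quick way to sanity-check the ownership side of this (the path, IDs, and container name below are stand-ins, not my real ones):

```shell
# Compare the IDs the container runs as with the owner of the transcode dir.
# /tmp/demo stands in for the real transcode path.
mkdir -p /tmp/demo/transcode
chown "$(id -u)":"$(id -g)" /tmp/demo/transcode   # stand-in for: chown PUID:PGID
stat -c '%u:%g' /tmp/demo/transcode
# On the real server (container name 'plex' is an example):
#   docker exec plex id                          # uid/gid Plex actually runs as
#   stat -c '%u:%g' /path/to/transcode           # must match the above
```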

The second issue: I installed the LibreELEC -> Plex addon on my Intel NUC8i3BEK.
Plex constantly stutters and buffers every second, pausing for about 10 seconds at a time.
It just doesn't play anything: 4K, 1080p, you name it, nada.
BUT if I add the directories to Kodi itself, it plays flawlessly, without a hiccup.

Does anyone have any clue about the above?
I read around and someone mentioned the exec option in /etc/fstab?
This is my current fstab, and I'm not sure what to change:

    # /etc/fstab: static file system information.
    # Use 'blkid' to print the universally unique identifier for a
    # device; this may be used with UUID= as a more robust way to name devices
    # that works even if disks are added and removed. See fstab(5).
    # <file system> <mount point> <type> <options> <dump> <pass>
    # / was on /dev/sda2 during installation
    UUID=214f864f-06b1-4a54-b21c-77372d3e460c / ext4 errors=remount-ro 0 1
    # /boot/efi was on /dev/sda1 during installation
    UUID=5405-32BE /boot/efi vfat umask=0077 0 1
    # swap was on /dev/sda3 during installation
    UUID=8665f0fb-0ce3-40f6-aa45-933ea3eae975 none swap sw 0 0
    /dev/sr0 /media/cdrom0 udf,iso9660 user,noauto 0 0
    tmpfs /tmp tmpfs defaults 0 0
    # >>> [openmediavault]
/dev/disk/by-label/jailhouse /srv/dev-disk-by-label-jailhouse ext4 defaults,nofail,user_xattr,noexec,usrjquota=aquota.user,jqfmt=vfsv0,discard,acl 0 2
    # <<< [openmediavault]

EDIT: If I play on my mobile phone using the Plex client, 4K plays perfectly with no issues (Direct Play, not transcoding; transcoding still doesn't work on mobile).
Also, I followed TechnoDad's videos for Plex on Docker.

EDIT2: I changed:
/dev/disk/by-label/jailhouse /srv/dev-disk-by-label-jailhouse ext4 defaults,nofail,user_xattr,noexec,usrjquota=aquota.user,jqfmt=vfsv0,discard,acl 0 2
to:
/dev/disk/by-label/jailhouse /srv/dev-disk-by-label-jailhouse ext4 defaults,nofail,user_xattr,exec,usrjquota=aquota.user,jqfmt=vfsv0,discard,acl 0 2

Rebooted the server and it actually looks like it's working OK now :)
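For anyone finding this later: my understanding is that noexec on that mount was the problem, because the Docker folders live on it and noexec stops any binary under the mount from being executed, which is what made the transcoder stall. The change boils down to this (demo on a scratch copy first, since getting /etc/fstab wrong can stop the box from booting):

```shell
# Flip noexec -> exec on a scratch copy of the fstab entry.
fstab=/tmp/fstab.demo
printf '%s\n' \
  '/dev/disk/by-label/jailhouse /srv/dev-disk-by-label-jailhouse ext4 defaults,nofail,user_xattr,noexec,usrjquota=aquota.user,jqfmt=vfsv0,discard,acl 0 2' \
  > "$fstab"
sed -i 's/,noexec,/,exec,/' "$fstab"
grep -c ',exec,' "$fstab"        # prints 1 once the option is flipped
# On the real system (needs root), then remount without rebooting:
#   sed -i 's/,noexec,/,exec,/' /etc/fstab
#   mount -o remount,exec /srv/dev-disk-by-label-jailhouse
```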

What I did in my case was remove the ZFS pool for dockers and recreate it as plain ext4; no issues going forward. The main pool is still ZFS.

    Ok so back on the filesystem train.

Going to test out BTRFS. Question: do I have to set it up in the CLI, or can I use the UI for BTRFS now?

I see I can create an mdadm RAID 6 and then BTRFS as the filesystem. Is this the correct way? Or do I need to create a BTRFS RAID via the CLI first?
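From what I've been reading, both routes exist; a sketch of the difference (device names are examples, everything needs root and real disks, so treat this as a sketch, not gospel):

```shell
# Route 1: mdadm raid6 underneath, plain btrfs on top (what the OMV UI builds):
mdadm --create /dev/md0 --level=6 --raid-devices=4 /dev/sd[b-e]
mkfs.btrfs -L data /dev/md0

# Route 2: btrfs native multi-device profiles via the CLI. Note that btrfs's
# own raid5/raid6 profiles are still flagged unstable upstream, which is why
# raid1/raid10 (or mdadm underneath, as above) are the usual choices:
mkfs.btrfs -L data -d raid10 -m raid10 /dev/sd[b-e]
```

So the UI's mdadm-then-btrfs path is a legitimate way to get raid6-style redundancy without relying on btrfs's unstable native raid6.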

It seems there is no properly qualified way to set up ZFS in OMV.
Everyone has a different opinion and there is no clear course.

And as BTRFS is becoming the norm in OMV 6.x, wouldn't it be wise to just go that way now?
Be prepared for the future?

Yeah, I was going to do just daily snapshots too.
Can you see them in the GUI, and restore/delete snapshots from the UI, after setting up the cron job?
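For the cron job itself, I was thinking of something like this (pool name is mine, the 14-day retention is arbitrary; the zfs lines are commented here because they need a live pool and root):

```shell
# Build a dated snapshot name, e.g. jailhouse@daily-2021-03-05
snap="jailhouse@daily-$(date +%F)"
echo "$snap"
# In /etc/cron.daily/zfs-snap, run as root:
#   zfs snapshot "jailhouse@daily-$(date +%F)"
#   # keep the newest 14 dailies, destroy the rest:
#   zfs list -H -t snapshot -o name -s creation \
#     | grep '^jailhouse@daily-' | head -n -14 | xargs -r -n 1 zfs destroy
```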

Lastly, I have 2x 250 GB SSDs (one is used as the boot drive); the other is a spare that I want to clone my boot drive to, in case it dies, so I can just swap the drive over.
Is Clonezilla the way to do this? Can I set up Clonezilla to automate it, or do I have to boot into it each time?
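As far as I can tell, Clonezilla has to be booted from its own media each run, so it doesn't automate well for a live system. A scripted alternative is a plain dd of the boot SSD to the spare, ideally done from a live USB so the source isn't mounted read-write. Tiny demo of the idea on scratch files:

```shell
# On the real drives it would be something like (this overwrites /dev/sdb!):
#   dd if=/dev/sda of=/dev/sdb bs=4M conv=fsync status=progress
# Demo of the same idea on scratch files:
dd if=/dev/urandom of=/tmp/src.img bs=1M count=4 2>/dev/null
dd if=/tmp/src.img of=/tmp/dst.img bs=1M 2>/dev/null
cmp -s /tmp/src.img /tmp/dst.img && echo "clone verified"
```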

EDIT: So I redid my entire server, from the OS on up.
Followed your guide, but trying the Proxmox kernel again.

I created one Docker container with netdata, just let it run overnight, and went to bed. Woke up this morning and see:
I didn't create any manual snapshots or anything; this was all automatic. Is this right?

If I try to delete one of those 'Clones'
I get this:

    No such Mntent exists
Error #0: OMVModuleZFSException: No such Mntent exists in /usr/share/omvzfs/Utils.php:86
Stack trace:
#0 /usr/share/openmediavault/engined/rpc/ OMVModuleZFSUtil::deleteOMVMntEnt(Array, Object(OMVModuleZFSFilesystem))
#1 [internal function]: OMVRpcServiceZFS->deleteObject(Array, Array)
#2 /usr/share/php/openmediavault/rpc/ call_user_func_array(Array, Array)
#3 /usr/share/php/openmediavault/rpc/ OMV\Rpc\ServiceAbstract->callMethod('deleteObject', Array, Array)
#4 /usr/sbin/omv-engined(537): OMV\Rpc\Rpc::call('ZFS', 'deleteObject', Array, Array, 1)
#5 {main}
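For anyone else hitting this "No such Mntent exists" error: the fallback I'm planning is the CLI, since the GUI errors out. These are stock zfs commands; the dataset and snapshot names below are placeholders, the real ones come from `zfs list`:

```shell
zfs list -t all -o name,origin           # find the stray clone and its origin snapshot
zfs destroy jailhouse/example_clone      # placeholder name: destroy the clone first
zfs destroy jailhouse@example_snapshot   # then the snapshot it came from
```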

    Are you an Australian ? :)

    Close I'm South African but living in London :D

So just wondering: if I did a clean install, which route would be the best/preferred method for longevity?
Everywhere I read, they mention the Proxmox kernel as it has built-in ZFS support.

    If I'm starting again I'd like to try and do it the preferred most supported method.


The current OMV kernel (4.19) is a Debian backports kernel. Proxmox is based on Ubuntu's 4.15 (I think). Backports kernels are the standard in OMV. They run the latest hardware with fewer issues. As an example, I think the "standard" kernel for Debian 9 (Stretch) is 4.9.

Sure, that's what I'm using. It certainly won't hurt to try it. But if you go this route and everything works as it should, put a hold on the kernel so there are no automatic upgrades. (OMV-Extras has a provision for this.)


If I do a clean install and do it properly, which method would be the better, tried-and-true way?

    Mate amazing! @crashtest

    I will start wiping and reinstalling tonight.
    When you say:
    **On the other hand, since you're having difficulty going the Proxmox Kernel route, maybe going with the standard back ports kernel might be the way to go. I'm using kernel 4.19 with no problems.**
What do you mean by a backports kernel?

As in, just leave the kernel that comes with OMV, install ZFS, and let it compile against the headers?

Lastly, what does the testing repo do?

    Thanks for the reply.

So, 2 pools: a 2x 1 TB SSD mirror for downloads, dockers, and such,
and a 12x 8 TB RAID-Z2 for main storage.

The vault (main storage) was created second, yes, but this one isn't even creating snapshots.
The first pool (jailhouse), for downloads and such, was created first; this one has snapshots and I cannot delete them.

I have set things up, but nothing I can't redo, and all my data is still on my old server.
I can blow away the whole OMV install and restart, but I didn't really have a good guide on how to do it, so I followed the video for ZFS, installed the Proxmox kernel, and just went from there.

So if there is a beginner's guide on correct setup, I'm all for it :)

    Unfortunately, the above pool attributes need to be applied to the pool before you move data into it. (They only apply to data added to the pool, after they are applied.)
    Otherwise, your files will have mixed attributes which, from a permissions perspective, can cause odd issues.
-- This makes me think I'm going to be starting again regardless :D

    Hey guys,

Are there any ZFS settings I should be using or changing?
I just set up a new OMV install, added ZFS, and haven't really done anything else.

I see one of my ZFS pools is doing snapshots and the other isn't.
The only extra thing I set was compression: lz4.

    Is there anything else I should do or use?
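For reference, the extra properties people commonly suggest on top of compression (applied to my pool name here; these are standard OpenZFS properties, and they only affect data written after they're set, so ideally set them before loading the pool):

```shell
zfs set compression=lz4 jailhouse     # cheap and usually a win
zfs set atime=off jailhouse           # skip access-time updates on every read
zfs set xattr=sa jailhouse            # store xattrs in the inode (Linux)
zfs set acltype=posixacl jailhouse    # POSIX ACLs for shared folders
zfs get compression,atime,xattr,acltype jailhouse
```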

And here you can see that one pool is doing snapshots and the other isn't.

Lastly, I cannot delete these snapshots either.