Switch from FreeNAS over to OMV

  • Hi there! After lots of issues with FreeNAS, after being annoyed at not having the remote shell I am used to from Linux systems, and much more...

    ...I am thinking of switching my NAS at home to OMV.

    Now one big thing up front: I want my data to be encrypted. There seems to be a plugin for that in OMV? I did not find a guide for creating encrypted volumes, though.


    Any link would be nice here.


    About RAID: I have raidz2 right now, so raid6 (double parity) would be the corresponding level. Since FreeNAS uses the RAID card in IT mode, I suppose it should be the same for OMV?


    Lots of questions; maybe some have been answered already? Sorry for not finding them at this point...


    Best regards

    David

  • IT mode is good, yes.


    No problem to import your pool, but keep in mind that the ZFS web GUI is crappy, so most of your operations will have to be done from the shell (not a real problem, but you should know).



    Please do not UPGRADE the ZFS pool until you are totally sure you will not need to revert to FreeNAS or XigmaNAS (ZFS, BSD-based), to avoid compatibility problems.

  • What is the default technology used by OMV? How does it handle (software) RAID?

    mdadm is the default for OMV and the most popular Linux option. btrfs also has software RAID, but I probably wouldn't use it for raid5.


    Another popular option with OMV (to avoid the complexities of RAID) is to pool your drives with mergerfs (unionfilesystems plugin) and then run manual parity passes with snapraid (it has many features comparable to ZFS, but not realtime RAID).
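    For reference, a minimal snapraid.conf along those lines might look like the sketch below. The mount paths, labels, and drive count are made up here, so adapt them to your own layout:

    ```
    # Hypothetical layout: two data drives pooled by mergerfs, one parity drive
    parity /srv/dev-disk-by-label-parity1/snapraid.parity

    # Content files (snapraid's metadata); keep a copy on more than one drive
    content /srv/dev-disk-by-label-data1/snapraid.content
    content /srv/dev-disk-by-label-data2/snapraid.content

    # Data drives
    data d1 /srv/dev-disk-by-label-data1
    data d2 /srv/dev-disk-by-label-data2

    exclude *.tmp
    exclude /lost+found/
    ```

    Parity is then updated manually (or via a scheduled job) with snapraid sync and verified with snapraid scrub.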

    omv 5.5.11 usul | 64 bit | 5.4 proxmox kernel | omvextrasorg 5.3.6
    omv-extras.org plugins source code and issue tracker - github


    Please read this before posting a question.
    Please don't PM for support... Too many PMs!

  • mdadm sounds like a good way to go. Is encryption also easy to implement here?

    Then I'll get some big HDDs, copy all the data off the current FN setup, and start fresh with OMV :)

  • Is encryption also easy to implement here?

    mdadm does not have an encryption option. LUKS (plugin available) is what people typically use on Linux. Since LUKS just creates a block device, you can put your filesystem on mdadm on LUKS or filesystem on LUKS on mdadm.
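    As a rough sketch of the "filesystem on LUKS on mdadm" order (the device names are placeholders and these commands are destructive, so double-check them against your own disks):

    ```shell
    # Build a raid5 array from four hypothetical drives
    mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sd[b-e]

    # Put a LUKS container on top of the array and open it
    cryptsetup luksFormat /dev/md0
    cryptsetup open /dev/md0 cryptdata

    # The filesystem goes on the mapped device
    mkfs.ext4 /dev/mapper/cryptdata
    ```

    The reverse order (LUKS on each disk, mdadm on the mapped devices) also works, at the cost of having to open every disk separately.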


  • So that one should work then? Sounds good! So a setup like the one typically seen on Ubuntu, with LVM -> LUKS -> filesystem?

    Just read about the plugin here on the forum.

    So it's time to swap system these days! Thanks for all the input!

  • It may be missing a few features and need improvements, but I wouldn't say it is crappy.

    Sorry, English is not my native language and I do not know how to say "regulero, no bien diseñado, o no tan completo como lo tiene FreeNAS" (roughly: "middling, not well designed, or not as complete as what FreeNAS has"). Sorry again!

  • do not know how to say " regulero, no bien diseñado, o no tan completo como lo tiene FreeNAS",

    I would say that it is not feature-complete or as polished as FreeNAS. But since OMV and the ZFS plugin are not a commercially supported product with hundreds of people backing them, I would not expect that either.


  • Hi there, I just set up a temporary OMV box with some old HDDs to hold everything during the reinstallation of my main NAS.

    The important stuff is backed up already, but lots of big files (~9 TB) need a temporary home...

    Now I wanted to create a RAID on the temp NAS but get this error:

    Code
    Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C.UTF-8; omv-mkraid /dev/md0 -l stripe -n 5 -N asd /dev/sdj /dev/sdb /dev/sdd /dev/sdi /dev/sda 2>&1' with exit code '1':
    mdadm: chunk size defaults to 512K
    mdadm: super1.x cannot open /dev/sdb: Device or resource busy
    mdadm: /dev/sdb is not suitable for this array.
    mdadm: super1.x cannot open /dev/sdd: Device or resource busy
    mdadm: /dev/sdd is not suitable for this array.
    mdadm: super1.x cannot open /dev/sdi: Device or resource busy
    mdadm: /dev/sdi is not suitable for this array.
    mdadm: super1.x cannot open /dev/sda: Device or resource busy
    mdadm: /dev/sda is not suitable for this array.
    mdadm: create aborted

    Any hint here?

    Thanks!

  • Any hint here?

    Did you wipe each drive first? You can do that from the Physical Disks tab or wipefs -a /dev/sdX from the command line. If they had zfs on them previously, you may need to wipe the zfs signatures multiple times with wipefs.
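    To see whether anything survived, wipefs run without -a only reports what it finds; a truly clean drive prints nothing (sdX is a placeholder here):

    ```shell
    # List remaining signatures without erasing anything
    wipefs /dev/sdX

    # Erase, then list again; repeat until the listing is empty
    wipefs -a /dev/sdX
    wipefs /dev/sdX
    ```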


    Hi, I tried what you mentioned from the console (ran wipefs a couple of times). Here is also a more detailed error from the web UI:

    Does this help?


    EDIT: I tried to create RAID then wipe again, getting a weird message on one drive:

    Code
    root@omv-temp:~# wipefs -a /dev/sda
    /dev/sda: 4 bytes were erased at offset 0x00001000 (linux_raid_member): fc 4e 2b a9
    root@omv-temp:~# wipefs -a /dev/sdb
    /dev/sdb: 4 bytes were erased at offset 0x00001000 (linux_raid_member): fc 4e 2b a9
    root@omv-temp:~# wipefs -a /dev/sdi
    /dev/sdi: 4 bytes were erased at offset 0x00001000 (linux_raid_member): fc 4e 2b a9
    root@omv-temp:~# wipefs -a /dev/sdj
    wipefs: error: /dev/sdj: probing initialization failed: Device or resource busy
    root@omv-temp:~# wipefs -a /dev/sdd
    /dev/sdd: 4 bytes were erased at offset 0x00001000 (linux_raid_member): fc 4e 2b a9

    EDIT again, as nobody has replied yet:

    I am formatting the drives individually right now so that I can move my data off the FreeNAS box.

    Will then hopefully have better luck on the FN box ;)


    EDIT once more:

    I managed to create the RAID and encrypted volumes on it with:

    Code
    mdadm --zero-superblock /dev/sdX
    echo 1 > /sys/module/raid0/parameters/default_layout


    Then all seems to work fine!

  • It is hard to say what caused the issue. Looks like sdj was part of an array and failed to wipe. On your real system, I would wipe the disks with wipefs and make sure repeated runs of wipefs return nothing. Then I would reboot to make sure the drives are not part of any arrays. Then create the new array.
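    For the real system, the whole clean-and-rebuild sequence might look like the sketch below. The drive letters and RAID level are placeholders, and the last command destroys whatever is on those disks:

    ```shell
    # Stop any leftover array the disks may still belong to
    mdadm --stop /dev/md0

    # Zero the md superblock, then wipe remaining signatures, on every member
    for d in /dev/sd[a-e]; do
        mdadm --zero-superblock "$d"
        wipefs -a "$d"
    done

    # Reboot, check that /proc/mdstat shows no stray arrays, then create the new one
    mdadm --create /dev/md0 --level=5 --raid-devices=5 /dev/sd[a-e]
    ```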

