Switch from FreeNAS over to OMV

  • Hi there, after lots of issues with FreeNAS, after being annoyed by not having the remote shell I am used to from Linux systems, and much more...

    ...I am thinking of switching my NAS at home to OMV.

    Now one big thing up front: I want my data to be encrypted. There seems to be a plugin for that in OMV, but I did not find a guide on creating the volumes with encryption.


    Any link would be appreciated here.


    About RAID: I have raidz2 right now, so RAID6 (double parity) would be the corresponding level. As FN uses the RAID card in IT mode, it should be the same for OMV, I suppose?


    Lots of questions; maybe some of them have been answered already? Sorry if I missed them...


    Best regards

    David

  • IT mode is good, yes.


    No problem to import your pool, but keep in mind that the ZFS webGUI is crappy, so most of your operations will have to be done from the shell (not a real problem, but you should know).



    Please do not UPGRADE the ZFS pool until you are totally sure you will not need to revert to FreeNAS or XigmaNAS (ZFS, BSD-based), to avoid compatibility issues.
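
    If you want to check first, ZFS can report the pool's upgrade state without changing anything; a minimal sketch (the pool name tank is a placeholder):

    Code
    # with no arguments this only lists pools that could be upgraded;
    # 'zpool upgrade <pool>' is the irreversible step
    zpool upgrade

    # show which feature flags are already enabled on the pool
    zpool get all tank | grep feature@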

    • Official post

    ZFS webGUI is crappy

    It may be missing a few features and needs improvements but I wouldn't say it is crappy.

    omv 7.0.5-1 sandworm | 64 bit | 6.8 proxmox kernel

    plugins :: omvextrasorg 7.0 | kvm 7.0.13 | compose 7.1.4 | k8s 7.1.0-3 | cputemp 7.0.1 | mergerfs 7.0.4


    omv-extras.org plugins source code and issue tracker - github - changelogs


    Please try ctrl-shift-R and read this before posting a question.

    Please put your OMV system details in your signature.
    Please don't PM for support... Too many PMs!

  • Thanks for the quick reply!

    The thing about ZFS: I don't necessarily need to be on ZFS.

    What is the default technology used by OMV? How does it handle (software) RAID?

    • Official post

    What is the default technology used by OMV? How does it handle (software) RAID?

    mdadm is the default for OMV and the most popular Linux option. btrfs also has software RAID, but I probably wouldn't use it for raid5.


    Another popular option with OMV (to avoid the complexities of RAID) is to pool your drives with mergerfs (unionfilesystems plugin) and then have manual parity runs with snapraid (it has many features comparable to ZFS, but no realtime RAID).
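
    For reference, a minimal sketch of creating a RAID5 array with mdadm from the shell (device names and disk count are placeholders; OMV's webUI runs its omv-mkraid wrapper for you, so this only shows what happens underneath):

    Code
    # create a 4-disk RAID5 array (example device names -- wipe them first!)
    mdadm --create /dev/md0 --level=5 --raid-devices=4 \
        /dev/sdb /dev/sdc /dev/sdd /dev/sde

    # watch the initial sync progress
    cat /proc/mdstat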


  • mdadm sounds like a good way to go. Is encryption also easy to implement here?

    Then I'll get some big HDDs, copy everything off the current FN setup, and start fresh with OMV :)

    • Official post

    Is encryption also easy to implement here?

    mdadm does not have an encryption option. LUKS (plugin available) is what people typically use on Linux. Since LUKS just creates a block device, you can put your filesystem on mdadm on LUKS or filesystem on LUKS on mdadm.
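
    For the second stack (filesystem on LUKS on mdadm), a minimal shell sketch; /dev/md0 and the mapper name cryptdata are placeholders, and the LUKS plugin does the equivalent from the webUI:

    Code
    # encrypt the existing mdadm array with LUKS (destroys its contents)
    cryptsetup luksFormat /dev/md0

    # unlock it; the decrypted device appears as /dev/mapper/cryptdata
    cryptsetup open /dev/md0 cryptdata

    # put the filesystem on the decrypted device
    mkfs.ext4 /dev/mapper/cryptdata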


  • So that should work then? Sounds good! So a setup like the one typically seen on Ubuntu, with LVM -> LUKS -> filesystem?

    Just read about the plugin here on the forum.

    So it's time to swap systems in the next few days! Thanks for all the input!

    • Official post

    So a setup like the one typically seen on Ubuntu, with LVM -> LUKS -> filesystem?

    I would not use LVM unless you are running hardware raid.


  • It may be missing a few features and needs improvements but I wouldn't say it is crappy.

    Sorry, English is not my native language and I did not know how to say "regulero, no bien diseñado, o no tan completo como lo tiene FreeNAS" (roughly: "middling, not well designed, or not as complete as FreeNAS has it"). Sorry again.

    • Official post

    did not know how to say "regulero, no bien diseñado, o no tan completo como lo tiene FreeNAS"

    I would say that it is not feature-complete or as polished as FreeNAS. But since OMV and the ZFS plugin are not a commercially supported product with hundreds of people backing it, I would not expect that either.

  • Hi there, I just created a temporary OMV box with some old HDDs to hold everything during the reinstall of my main NAS.

    The important stuff is backed up already, but lots of big files (~9 TB) need a temporary NAS...

    Now I wanted to create a RAID on the temp NAS but got this error:

    Code
    Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin;
    export LANG=C.UTF-8; omv-mkraid /dev/md0 -l stripe -n 5 -N asd
    /dev/sdj /dev/sdb /dev/sdd /dev/sdi /dev/sda 2>&1' with exit code '1':
    mdadm: chunk size defaults to 512K
    mdadm: super1.x cannot open /dev/sdb: Device or resource busy
    mdadm: /dev/sdb is not suitable for this array.
    mdadm: super1.x cannot open /dev/sdd: Device or resource busy
    mdadm: /dev/sdd is not suitable for this array.
    mdadm: super1.x cannot open /dev/sdi: Device or resource busy
    mdadm: /dev/sdi is not suitable for this array.
    mdadm: super1.x cannot open /dev/sda: Device or resource busy
    mdadm: /dev/sda is not suitable for this array.
    mdadm: create aborted

    Any hint here?

    Thanks!

    • Official post

    Any hint here?

    Did you wipe each drive first? You can do that from the Physical Disks tab or wipefs -a /dev/sdX from the command line. If they had zfs on them previously, you may need to wipe the zfs signatures multiple times with wipefs.
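
    A minimal sketch of that check from the shell (sdX stands for each disk that will join the array):

    Code
    # erase all known filesystem/raid signatures on the disk
    wipefs -a /dev/sdX

    # with no options wipefs only reports; empty output means clean.
    # old zfs labels can survive one pass, so repeat -a until it stays empty
    wipefs /dev/sdX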


  • Hi, I tried what you mentioned from the console (ran wipefs a couple of times). Here is also a more detailed error from the webUI:

    Does this help?


    EDIT: I tried to create the RAID and then wipe again, and got a weird message on one drive:

    Code
    root@omv-temp:~# wipefs -a /dev/sda
    /dev/sda: 4 bytes were erased at offset 0x00001000 (linux_raid_member): fc 4e 2b a9
    root@omv-temp:~# wipefs -a /dev/sdb
    /dev/sdb: 4 bytes were erased at offset 0x00001000 (linux_raid_member): fc 4e 2b a9
    root@omv-temp:~# wipefs -a /dev/sdi
    /dev/sdi: 4 bytes were erased at offset 0x00001000 (linux_raid_member): fc 4e 2b a9
    root@omv-temp:~# wipefs -a /dev/sdj
    wipefs: error: /dev/sdj: probing initialization failed: Device or resource busy
    root@omv-temp:~# wipefs -a /dev/sdd
    /dev/sdd: 4 bytes were erased at offset 0x00001000 (linux_raid_member): fc 4e 2b a9

    EDIT again, as nobody has replied yet:

    I am formatting the drives individually right now so I am able to move my data off the FreeNAS.

    It will then hopefully work better on the FN box ;)


    EDIT once more:

    I managed to create the RAID and encrypted volumes(?) on it with:

    Code
    # clear leftover mdadm metadata that a plain wipefs can miss
    mdadm --zero-superblock /dev/sdX

    # newer kernels want an explicit raid0 layout; this selects the
    # original layout so the array can be created
    echo 1 > /sys/module/raid0/parameters/default_layout

    Then all seems to work fine!

    • Official post

    It is hard to say what caused the issue. Looks like sdj was part of an array and failed to wipe. On your real system, I would wipe the disks with wipefs and make sure repeated runs of wipefs return nothing. Then I would reboot to make sure the drives are not part of any arrays. Then create the new array.
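
    For the real system, that sequence could look like this sketch (sdX stands for each member disk):

    Code
    # wipe until a plain 'wipefs /dev/sdX' prints nothing
    wipefs -a /dev/sdX

    # clear any old mdadm superblock as well, then reboot
    mdadm --zero-superblock /dev/sdX
    reboot

    # after the reboot, confirm no stale array claims the disks,
    # then create the new array
    cat /proc/mdstat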

