Preferred disk setup with 2 new disks

  • Hi,


    So I am forced to set up new storage with 2 IronWolf 4TB disks for media files etc. My previous setup was an mdadm raid 1 with ext4 on 2 WD Red 3TB disks. One of those disks is dying as we speak. What would be the preferred way to set it up with the options currently available tech-wise in omv? I would like to have bitrot protection and an easy set-it-and-forget-it system. I am leaning towards btrfs built-in raid1. The only problem is, as far as I can tell, there is no plugin available in omv that handles all the advanced features, so I am stuck with the cli (not preferred).

    So my questions are:


    - What is the preferred disk setup for storing media files with bitrot protection?

    - Will there be a plugin available that manages all the advanced features of btrfs built-in raid1?

    - Any ideas or tips in general?


    Thank you.

  • Setting up btrfs and maintenance utilities using the command line takes less than 30 minutes. After that you won't have to use the command line again unless you decide to add/remove/change a drive, or decide to change the raid type or compression. Everything else (shares, etc.) can be done through the OMV interface.
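    To give a rough idea of what that half hour involves, a minimal sketch (device names /dev/sdX and /dev/sdY and the mount point are placeholders, not a recipe for your exact hardware):

    ```shell
    # Create a btrfs filesystem that mirrors both data (-d) and
    # metadata (-m) across two whole disks. Device names are examples.
    mkfs.btrfs -L data -d raid1 -m raid1 /dev/sdX /dev/sdY

    # Mounting either member device brings up the whole array.
    mount /dev/sdX /srv/data

    # Show how space is allocated across the two devices.
    btrfs filesystem usage /srv/data
    ```

    After that, shares and everything else can be managed from the OMV interface.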

  • Setting up btrfs and maintenance utilities using the command line takes less than 30 minutes. After that you won't have to use the command line again unless you decide to add/remove/change a drive, or decide to change the raid type or compression. Everything else (shares, etc.) can be done through the OMV interface.

    I have very little experience doing that. What if I run into problems later on... It would be easier to troubleshoot with the help of a gui.

  • For problems this forum is an excellent source of help.


    The other file system that meets your requirements is zfs. It’s not as flexible as btrfs but those that use it swear by it.


    In my opinion, whatever system you choose, you will be best served by asking for help on the forum, and you will most likely have to use the command line for troubleshooting and repair.

  • One option could be to use one drive as main drive with ext4 (or btrfs, if you like) and then make a backup of the main drive to the second drive using borg backup (plugin available). As I understood from some posts from ryecoaaron borg backup supports compression, de-duplication and bitrot detection.
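    For reference, the borg workflow on the command line boils down to something like this (repository path and source folder are made-up examples; the plugin wraps similar calls):

    ```shell
    # One-time step: initialise a repository on the backup disk.
    borg init --encryption=repokey /srv/backupdisk/borg-repo

    # Create a compressed, de-duplicated archive named after the current date.
    borg create --stats --compression zstd \
        /srv/backupdisk/borg-repo::'media-{now}' /srv/data

    # Verify repository and archive consistency (this is where bitrot shows up).
    borg check /srv/backupdisk/borg-repo
    ```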

  • For problems this forum is an excellent source of help.


    The other file system that meets your requirements is zfs. It’s not as flexible as btrfs but those that use it swear by it.


    In my opinion, whatever system you choose, you will be best served by asking for help on the forum, and you will most likely have to use the command line for troubleshooting and repair.


    Ah yes. omv does have a zfs plugin. Regarding flexibility that is not really a problem. If I need more space in the future I can repeat the steps of creating another zfs mirror. Am I correct in assuming that a zfs mirror has bitrot protection, balancing, silent data corruption detection and recovery/checksums (file self-healing) and the ability to make snapshots? Do you have a link where this is all explained?


    One option could be to use one drive as main drive with ext4 (or btrfs, if you like) and then make a backup of the main drive to the second drive using borg backup (plugin available). As I understood from some posts from ryecoaaron borg backup supports compression, de-duplication and bitrot detection.

    Thank you for the advice. I prefer not to add another ''task'', in this case backing up/syncing.

  • Ah yes. omv does have a zfs plugin. Regarding flexibility that is not really a problem. If I need more space in the future I can repeat the steps of creating another zfs mirror. Am I correct in assuming that a zfs mirror has bitrot protection, balancing, silent data corruption detection and recovery/checksums (file self-healing) and the ability to make snapshots? Do you have a link where this is all explained?

    Yes, your assumptions are correct.


    more info about ZFS: [HOWTO] Instal ZFS-Plugin & use ZFS on OMV
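    If you ever want to do it by hand instead of through the plugin, a mirror plus the features you listed come down to a few commands (pool and device names are placeholders):

    ```shell
    # Create a two-disk mirror; ashift=12 assumes 4K-sector drives.
    zpool create -o ashift=12 zfsmirror mirror /dev/sdX /dev/sdY

    # Checksumming is on by default; a scrub reads everything back,
    # verifies checksums and repairs bad blocks from the good mirror side.
    zpool scrub zfsmirror
    zpool status zfsmirror

    # Snapshots are instant and cheap.
    zfs snapshot zfsmirror@before-changes
    ```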

  • borg backup supports compression, de-duplication and bitrot detection.

    correct.


    I prefer not to add another ''task'', in this case backing up/syncing.

    A zfs mirror is not backup though. Using borg would get you everything.

    omv 5.6.13 usul | 64 bit | 5.11 proxmox kernel | omvextrasorg 5.6.2 | kvm plugin 5.1.6
    omv-extras.org plugins source code and issue tracker - github


    Please read this before posting a question.
    Please don't PM for support... Too many PMs!

  • A zfs mirror is not backup though. Use borg would get you everything.

    I was aware of that. Borg is unfamiliar to me. If it has silent data corruption detection and recovery/checksums (file self-healing) then that would be cool. Will look into it. :thumbup:


    Perhaps use that wd red 3TB drive that is still good as a backup drive for my most important files? And later on replace it with a new one when I have money again haha.

  • Ok I have done the following:


    - Created a zfs mirror.

    - Added a user with rw permissions on all the shared folders.

    - Enabled nfs and created shares with rw privileges.

    - On my Manjaro machine I use autofs and mounted all the shares. They are visible and I can create a folder on them.

    - On the zfs mirror I cannot remove files that I copied from the mdadm array to the new zfs mirror using Cockpit with the cp -r command.

    - When I check the permissions in my file manager on my Manjaro machine, they show as root.

    - In omv under Shared Folders > ACL I cannot change the permissions to magician -rw- on the (zfs mirror) shared folder. It gives me the following error:

    Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C.UTF-8; setfacl --remove-all -M '/tmp/setfaclrzKxbx' -- '/zfsmirror/PresciousData2/' 2>&1' with exit code '1': setfacl: /zfsmirror/PresciousData2/: Operation not supported

    - I close that window and after a few seconds the permission does change (to magician-100) and I can delete files from within the Dolphin file manager.


    Does anyone have some insight regarding this behavior? Do I need to run a command to change permissions permanently on the zfs mirror (shared folder)?
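    Edit: from what I've read, the ''Operation not supported'' from setfacl means POSIX ACLs are not enabled on the dataset. On ZFS-on-Linux something like this is supposed to enable them (untested sketch; the pool name zfsmirror is taken from the error message above, verify before running):

    ```shell
    # Enable POSIX ACLs so setfacl/getfacl work on the dataset.
    zfs set acltype=posixacl zfsmirror

    # Store extended attributes (which hold the ACLs) efficiently.
    zfs set xattr=sa zfsmirror

    # Confirm the settings took effect.
    zfs get acltype,xattr zfsmirror
    ```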

    Additional question. Is the proxmox pve-kernel still preferred, or is the default kernel that comes with omv ''up to date'' regarding zfs? (Using the default at the moment.) Or is it that you only install the pve-kernel when you are running omv on top of proxmox? And will there be any problems installing packages and plugins compared to the default kernel? The topic [HOWTO] Instal ZFS-Plugin & use ZFS on OMV is a very long read. In the omv gui under extras I see:


    Proxmox kernel:

    • This will enable the Proxmox 6.x repo.
    • This will install the latest 5.4 kernel.

    Proxmox test kernel:

    • This will enable the Proxmox 6.x repo.
    • This will install the latest 5.11 kernel.

    So which one to choose?


    Answer found.

    Messing around a bit more I've found these options in ''edit'' under zfs:


    -zfsmirror aclmode discard default

    -zfsmirror aclinherit restricted default


    Do I need to modify this?


    After some messing around and searching this forum I will be switching to btrfs raid1 as suggested by:


    Setting up btrfs and maintenance utilities using the command line takes less than 30 minutes. After that you won't have to use the command line again unless you decide to add/remove/change a drive, or decide to change the raid type or compression. Everything else (shares, etc.) can be done through the OMV interface.

    ZFS without ECC memory is not recommended according to the internet, although many users have run it without problems (yet) for long periods. I am going to play it safe. Btw, I have zero experience with ZFS and already ran into some problems playing around with it. Currently I have a VirtualBox setup running as a test to configure omv as I see fit. Will update my findings asap.

    Will give that a read. Indeed the goal is to create a btrfs raid1 setup, not an mdadm raid 1.

    This afternoon I've been doing the following:


    From putty or cockpit:


    Code
    root@omvvm:~# lsblk


    Code
    root@omvvm:~# mkfs.btrfs -L data -d raid1 -m raid1 -f /dev/sdb /dev/sdc


    Refreshed the omv gui; the array is displayed. Mounted it through the gui. At first the total capacity of the 2 disks combined is displayed, but after a few seconds it automatically changes to half.




    Some btrfs commands testing:



    Code
    root@omvvm:~# btrfs balance start -v --full-balance /srv/dev-disk-by-id-ata-VBOX_HARDDISK_VBba0d6228-f189346f
    Dumping filters: flags 0x7, state 0x0, force is off
    DATA (flags 0x0): balancing
    METADATA (flags 0x0): balancing
    SYSTEM (flags 0x0): balancing
    Done, had to relocate 3 out of 3 chunks


    Code
    root@omvvm:~# umount /dev/sdb



    Code
    root@omvvm:~# mount /dev/sdb


    Code
    root@omvvm:~# umount /dev/sdb
    Code
    root@omvvm:~# btrfs scrub start -B /dev/sdb
    scrub done for 25d172ff-e8b8-4c0a-be89-887a19e80c26
    scrub started at Wed Sep 1 17:03:04 2021 and finished after 00:00:00
    total bytes scrubbed: 448.00KiB with 0 errors


    Code
    root@omvvm:~# btrfs scrub status -d /dev/sdb
    scrub status for 25d172ff-e8b8-4c0a-be89-887a19e80c26
    scrub device /dev/sdb (id 1) history
    scrub started at Wed Sep 1 17:03:04 2021 and finished after 00:00:00
    total bytes scrubbed: 448.00KiB with 0 errors


    Under gui:


    - create shared folder. Set privileges and acl (tick user magician rw rights, change owner from root to magician)

    - create nfs share with rw rights

    - setup autofs accordingly

    - test filetransfer and ownership of share

    - create cronjobs under scheduled jobs:

    1.

    Code
    systemctl stop docker

    or

    Code
    docker stop -t 30 container_name_or_id

    (gracefully stops container/stack in portainer??)


    2.

    Code
    btrfs check --force --readonly -p /dev/sdb

    (at midnight once a month-email result for review)


    3.

    Code
    btrfs scrub start -B /dev/sdb ; btrfs scrub status -d /dev/sdb ; btrfs device stats /srv/dev-disk-by-id-ata-VBOX_HARDDISK_VBba0d6228-f189346f

    (following day at midnight once a month-email result)


    4.

    Code
    systemctl start docker

    or

    Code
    docker start container_name_or_id

    (the next morning starts portainer including qbittorrent container/stack??)
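    A rough single-script version of steps 1-4, in case that is easier to schedule than four separate jobs (untested sketch; the mount point is from my setup above, the container name is a placeholder):

    ```shell
    #!/bin/sh
    # Monthly btrfs maintenance: stop the writer, scrub, report, restart.
    # Mount point and container name are examples, adjust to taste.
    MNT=/srv/dev-disk-by-id-ata-VBOX_HARDDISK_VBba0d6228-f189346f

    docker stop -t 30 qbittorrent   # gracefully stop the workload first

    btrfs scrub start -B "$MNT"     # -B: wait here until the scrub finishes
    btrfs device stats "$MNT"       # per-device error counters for the email

    docker start qbittorrent        # bring the workload back up
    ```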



    Do I need to stop portainer before a check and scrub of the data storage and is the above command correct?

    Are the above steps the way to go? Any advice?


    Thank you.


    Update:

    Just finished the final edit of this post. doscott

  • All of my systems other than OMV use openSUSE Tumbleweed on btrfs. I used to use VirtualBox and now use KVM. However, I have never used btrfs on a virtual drive. I have read of there being issues with virtual btrfs “disks” on improper shutdown of the machines.


    That said, this is for testing so it looks good, except for your scrub command. Scrub normally runs in the background. Scrub status gives a point in time status. Since -B prevents running in the background there is not much point in getting the status after it has stopped. I would suggest using the scripts mentioned in the link I provided.


    I haven’t used portainer so maybe I am missing something and the two disks with btrfs are not virtual? In any case, scrubbing and balancing, as well as modifying most btrfs options, are done live; you do not have to unmount or stop using them.
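    In other words, for a manually started or scheduled scrub the pattern is closer to this (mount point is an example):

    ```shell
    # Without -B the command returns immediately and the scrub
    # runs in the background while the filesystem stays in use.
    btrfs scrub start /srv/data

    # Check progress at any later point.
    btrfs scrub status /srv/data
    ```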

  • That will put a btrfs system on an mdadm managed system.

    No it won't. You just can't create a btrfs array from OMV's web interface. There may be a guide out there that tells you to use mdadm but it won't do that unless you explicitly tell it to.

    I have read of there being issues with virtual btrfs “disks” on improper shutdown of the machines.

    If you shutdown a VM properly, there should be no differences between btrfs on a VM vs physical machine.


  • Quote

    No it won't. You just can't create a btrfs array from OMV's web interface. There may be a guide out there that tells you to use mdadm but it won't do that unless you explicitly tell it to.

    You may be correct. I never read it in a guide, but the first time I set raid up (OMV5), btrfs was an option presented for the file system, which I selected, and it installed without a problem. However what I got was an mdadm raid array with a btrfs file system.


    I was sober when I did it.

  • I never read it in a guide, but the first time I set raid up (OMV5), btrfs was an option presented for the file system, which I selected, and it installed without a problem. However what I got was an mdadm raid array with a btrfs file system.

    You must have created an array in the Raid tab and then created a btrfs filesystem on the array in the Filesystems tab. Since the Raid tab can only create mdadm arrays, that is to be expected.


  • That said, this is for testing so it looks good, except for your scrub command. Scrub normally runs in the background. Scrub status gives a point in time status. Since -B prevents running in the background there is not much point in getting the status after it has stopped. I would suggest using the scripts mentioned in the link I provided.

    So background scrub is enabled by default? But when initiated manually (or via a cron job) the -B option should be removed?

    That github page lists a lot of options. Some are not applicable to a NAS, right? Will play around with it later today and share my findings.


    Quote

    I haven’t used portainer so maybe I am missing something and the two disks with btrfs are not virtual? In any case, scrubbing and balancing, as well as modifying most btrfs options, are done live; you do not have to unmount or stop using them.



    In my test setup they are virtual now, but soon in my real NAS they will be real disks. So it won't be a problem if the bittorrent container writes data to the raid1 while maintenance (scrub/balance) is running (either an automatic background run or a manual start)?
