Best configuration for media files?

  • Hi everyone.
    Since I received a new HDD I can change my configuration. Right now I have two RAID 1 arrays (one ext4, one ZFS).
    I was thinking of deleting the RAID 1 arrays and having a sync job that runs at night instead. I keep almost nothing but media files on my NAS and I don't read/write much.
    So this is how it would look:
    - ext4 RAID 1 -> HDD1 (main) will sync with HDD2 (backup) at night
    - ZFS mirror -> HDD3 (main) will sync with HDD4 (backup) at night (rough sketch of such a job below)
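
    Something like this is what I have in mind for the nightly job - just a rough sketch, assuming a plain cron + rsync setup (the paths are placeholders; in OMV it would be a scheduled rsync job instead):

    ```
    # run every night at 03:00: one-way sync from the main disk to its backup
    0 3 * * * rsync -a /srv/dev-disk-by-label-HDD1/ /srv/dev-disk-by-label-HDD2/
    ```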


    Which do you think is better: the ZFS mirror or the ext4 RAID 1?
    Also, if I do change the configuration, I was wondering whether I should switch everything to ZFS or BTRFS. What do you suggest, considering that 99% of the files are media?


    Thanks in advance!

    Intel G4400 - Asrock H170M Pro4S - 8GB ram - Be Quiet Pure Power 11 400 CM - Nanoxia Deep Silence 4 - 6TB Seagate Ironwolf - RAIDZ1 3x10TB WD - OMV 5 - Proxmox Kernel

    • Official Post

    Well, my wish list for the new year is to do something similar: scrap the RAID option completely and, for starters, have two drives, one to hold all media and the second set up to rsync from the first, with the second drive spun down when not in use.
    Once that's done, install two more drives for data and image backups for the rest of the machines in the house. In both setups I'll have a single 'root' share with all relevant folders inside; hopefully it will be easier to manage.

    • Official Post

    I nixed RAID 1 several years ago and couldn't imagine going back to it. I find rsync far superior.


    I usually set mine up like this: "Disk_1" is the disk that basically houses everything for services... "Disk_2" is the mirror... I set an rsync job to run once a day automatically. I don't use the delete trigger on that job; usually I log in once or twice a month and run the job manually with the delete trigger enabled. This way, if I ever accidentally delete something from "Disk_1", it's a simple matter of going into "Disk_2" via the command line (since I don't set it up with any services other than rsync) and copying it back to "Disk_1". This has saved my bacon a couple of times.
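
    For anyone unfamiliar with the delete trigger, the two runs look roughly like this on the command line - paths are examples only, and the OMV rsync job ends up running something along these lines:

    ```
    # daily scheduled job: copy new/changed files from Disk_1 to Disk_2, never deleting on Disk_2
    rsync -av /srv/dev-disk-by-label-Disk_1/ /srv/dev-disk-by-label-Disk_2/

    # occasional manual run with the delete trigger, to prune files already removed from Disk_1
    rsync -av --delete /srv/dev-disk-by-label-Disk_1/ /srv/dev-disk-by-label-Disk_2/
    ```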


    Way, way better than RAID 1, IMO (at least in a home server setting... maybe there's a reason to use RAID 1 at the enterprise level).

    • Official Post

    On a fresh install, I would probably opt for btrfs due to some nice features like snapshots, compression and checksumming/scrubs. However, snapshots and compression at least are not that useful for media files, which do not change often and are probably already in a compressed format.
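
    For reference, this is roughly what those features look like from the command line (a sketch only - the mount point and device name are made up, and compress=zstd needs a reasonably recent kernel):

    ```
    mount -o compress=zstd /dev/sdX1 /srv/media                       # transparent compression
    btrfs subvolume snapshot -r /srv/media /srv/media/.snap-nightly   # read-only snapshot
    btrfs scrub start /srv/media                                      # verify data against checksums
    ```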

    • Official Post

    I also got rid of raid in favor of rsync to multiple locations. Very reliable.

    omv 7.0.5-1 sandworm | 64 bit | 6.8 proxmox kernel

    plugins :: omvextrasorg 7.0 | kvm 7.0.13 | compose 7.1.4 | k8s 7.1.0-3 | cputemp 7.0.1 | mergerfs 7.0.4


    omv-extras.org plugins source code and issue tracker - github - changelogs



    • Official Post

    My biggest concern was this ticket on github:

    One of my rsync drives is btrfs and I have never had any problems with it. Running btrfs in a non-raid setup should be safe but personally, I still feel more comfortable with ext4 in a bad crash. So, I will try to make sure that ext4 still works on OMV 5.x even if it requires a plugin to be written.


    That said, if you have an rsync setup, it wouldn't be hard to wipe one drive, format it btrfs, and rsync to the newly formatted drive. Then do it again for the other drive. It might take a long time, but it is possible. Or btrfs-convert is still around.
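
    Roughly, the wipe-and-copy route would look like this (device names and labels are placeholders - double-check them before doing anything destructive, and keep a backup either way):

    ```
    mkfs.btrfs -L Disk_2 /dev/sdb                  # reformat the backup drive as btrfs
    mount /dev/sdb /srv/dev-disk-by-label-Disk_2
    rsync -av /srv/dev-disk-by-label-Disk_1/ /srv/dev-disk-by-label-Disk_2/
    # ...then swap roles and repeat for the other drive.

    # or, the in-place route on an unmounted ext4 partition:
    btrfs-convert /dev/sda1
    ```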

  • First off: thanks for planning an eventual ext4 plugin! I too think that ext4 feels safer when the worst-case scenario comes.
    After reading online, I think I'll go with ext4. As you said, it's still easy to switch to btrfs later if needed, since I'll be using rsync instead of RAID.

    Intel G4400 - Asrock H170M Pro4S - 8GB ram - Be Quiet Pure Power 11 400 CM - Nanoxia Deep Silence 4 - 6TB Seagate Ironwolf - RAIDZ1 3x10TB WD - OMV 5 - Proxmox Kernel

    • Official Post

    I have considered btrfs for bitrot protection, but without some form of redundancy, raid 1 for instance, I don't believe btrfs can protect against bitrot, only detect it. Then you would have to manually restore from backups?


    Otherwise I second ext4 and versioned snapshots to other media using rsync. But I worry about bitrot...

    • Official Post

    I have considered btrfs for bitrot protection, but without some form of redundancy, raid 1 for instance

    Outside of a mirror (RAID 1), BTRFS or ZFS can't recover from checksum errors, only detect them. The only real options for self-healing files are RAID 1 in ZFS or BTRFS, or a ZFS filesystem (the equivalent of a root folder) with copies = 2, which duplicates every file. I've tested a ZFS mirror - it recovers from bitrot. I don't know for sure whether BTRFS works in the same manner, but one would assume that it does.
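
    As a sketch of the copies = 2 option (the pool/filesystem name "tank/media" is made up):

    ```
    zfs set copies=2 tank/media   # keep two copies of every block written to this filesystem
    zpool scrub tank              # read everything, verify checksums, repair from the spare copy
    zpool status -v tank          # -v lists any files with unrecoverable errors
    ```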


    For a rough equivalent of a RAID 5 array that has bitrot protection, MergerFS + SnapRAID is the only thing available. I haven't actually tested SnapRAID bitrot recovery but plan to do so in the near future. If it works as advertised, SnapRAID would require the least disk real estate for bitrot, file and disk recovery, which is why I find it interesting.
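
    If anyone wants to experiment with it, the basic SnapRAID cycle is roughly the following (this assumes /etc/snapraid.conf is already set up; as said, I haven't verified the recovery part myself yet):

    ```
    snapraid sync    # update parity after files are added or changed
    snapraid scrub   # check data blocks against parity and checksums
    snapraid -e fix  # try to repair only the files/blocks with reported errors
    ```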
    __________________________________________


    I don't believe btrfs can protect against bitrot, only detect it. Then you would have to manually restore from backups?


    Even along these lines, BTRFS seems to be a mixed bag:


    I had file errors detected by BTRFS and had the same idea you mentioned. I thought I'd manually restore the corrupt files from backup. (Which, with solid backup, would do away with the RAID1 requirement.) However, try as I might, I couldn't find a method of determining the "names" of the files that were corrupted. Without file names for reference, to delete and recover from backup, knowing that file errors exist is nearly useless. I dumped the drive, examined it (no reallocated sectors or other issues were noted) and rewrote it entirely.


    Then I ran into an issue after a power outage that outlasted the UPSes, which ended with an inaccessible BTRFS filesystem. For a CoW filesystem that shouldn't have issues with data loss in a power outage, this was a serious disappointment. To be fair, I don't know if data was actually lost, but that's irrelevant if the internal transaction logs get out of sync, with the end result being that the filesystem becomes inaccessible. (I didn't say "read only"; I mean inaccessible.) If users decide to go with BTRFS, in my experience a UPS is not optional, AND setting up NUT for an auto-shutdown seems to be a requirement. Also, solid backup goes without saying.


    I'm hoping that ZFS, as an option or plugin, is supported in OMV5. At this point, even for a single disk, I think I'd go with a ZFS basic volume rather than trust BTRFS. BTRFS just doesn't seem ready, and the BTRFS command-line tools need a bit of work. (As an example, let's start with a way to list the names of files that are found to have checksum errors in a scrub.)
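
    Something like the following is all I mean by a basic volume (pool name and device path are placeholders):

    ```
    zpool create -o ashift=12 tank /dev/disk/by-id/ata-EXAMPLE-DISK
    zfs create -o compression=lz4 tank/media
    ```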
