mdadm RAID5 + ext4, or btrfs, OR... ZFS RAIDZ1 on older hardware.

  • Admins,


    Please move to forum / storage / general. I was in the wrong sub to post this...


    As I wait for the remaining items to arrive for my NAS build, my thoughts turn to filesystems... My hardware config now is as follows...


    Rosewill Helium NAS ATX case

    Antec Truepower 450 PSU

    MSI 970A-G43 mainboard

AMD FX-8370 4.5GHz 8-core processor

    32GB DDR3 (max capacity)

    1TB SATA SSD boot drive

MSI GeForce GTX-710 graphics with PCIe x1 to x16 riser

    Inspur 9207-8i SAS / SATA HBA SFF-8087 with SATA breakouts

    10GTek Intel X540-BT1 chip PCIe 10Gb NIC

1x 3TB Seagate HDD

8x 2TB Seagate HDDs. (Total of 14TB as RAID5, plus the 3TB single drive.)


    At some point down the road I want to replace all the spinning disks with 4TB SATA III 6Gb/s drives but not in my budget right now...


    Obviously I can't pair up the 3TB HDD with the 2TB HDDs in a RAID group due to the capacity mismatch.


I am thinking of setting up the 2TB drives either as an mdadm RAID5 group with ext4, OR as a ZFS RAIDZ1 pool.
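For reference, the two candidate layouts look roughly like this on the command line. This is only a sketch: the /dev/sd[b-i] device names and the md/pool names are placeholders for whatever the eight 2TB drives actually enumerate as.

```shell
# Option A: mdadm RAID5 across the eight 2TB drives, ext4 on top
mdadm --create /dev/md0 --level=5 --raid-devices=8 /dev/sd[b-i]
mkfs.ext4 -L nasdata /dev/md0

# Option B: one RAIDZ1 vdev (ZFS's RAID5 analogue), with lz4 compression on
zpool create -o ashift=12 tank raidz1 /dev/sd[b-i]
zfs set compression=lz4 tank
```

Both give one drive of fault tolerance; the real difference is everything ZFS layers on top (checksums, scrubs, snapshots).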


This NAS is to be used for backups, housing videos, and shared storage for a cluster of Raspberry Pi 5s that will be set up as a Docker swarm...


    Not looking for raging fast, but 1990s era floppy drive performance just won't do the deed either...


    My requirements are first and foremost data integrity.

Secondly, maintain data confidentiality. Working on securing it, and keeping my network unavailable to outside influences. Not terribly worried about local compromises.

    Lastly maximize space utilization efficiency via compression and deduplication where possible.


    System is OMV 7 with the following plugins.

    omvextrasorg

    borgbackup

    clamav

    kernel

    lvm2

    md

    nut

    resetperms

    zfs


Still learning HOW, but the intent is to set up an S3 Glacier sync with my backups / back this whole thing up to Glacier storage for offsite potential disaster recovery...


    Given the hardware, what would be the best way of configuring the given storage?

OpenMediaVault 7 (6.14.5), plugins clamav, md, omvextrasorg, running on a new NAS box with a mix of new parts and recycled hardware.

Rosewill Helium NAS ATX case, Antec Truepower 450 Power Supply, MSI 970A-G43 mainboard, AMD FX-8370 8-core processor, 32GB DDR3, 1TB SATA SSD, MSI GeForce GTX-710 graphics with PCIe x1 to x16 riser, Inspur 9207-8i SAS / SATA HBA SFF-8087 with SATA breakouts, 10GTek Intel X540-BT1 chip PCIe 10Gb NIC, 1x 3TB, 8x 2TB HDDs. (Total of 14TB RAID5 and 3TB single drive.)

    Edited once, last by dbhosttexas ().

    • Official Post

    The first question you should ask yourself is why you're looking for a RAID. If the reason is to have a backup, it's a mistake. RAID is not a backup. https://www.raidisnotabackup.com/


    If after that, you still think you need a RAID for all your data, you should keep in mind that a RAID 5 with 8 disks isn't a good idea; for that number of disks, a RAID 6 would be more appropriate.


    If you come to the conclusion that you don't need a RAID (or maybe not for all your data, but only for critical data), you could consider merging disks with mergerfs. You can create two pools and dedicate one of them to real backups, synchronizing regularly with rsync automatically. Or better yet, set up a dedicated backup app to take advantage of all its benefits.
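The mergerfs-plus-rsync idea, sketched as commands; the mount points and disk paths here are invented for illustration.

```shell
# Pool several data disks under one mount point with mergerfs
mergerfs -o defaults,allow_other,category.create=mfs \
    /srv/disk1:/srv/disk2:/srv/disk3 /srv/pool

# Regularly mirror the live pool onto the disks reserved for backup,
# e.g. from a nightly cron job
rsync -aH --delete /srv/pool/ /srv/backuppool/
```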


• No, RAID is not a backup, never has been, never will be. It IS, however, protection against a failed disk, which is what I am aiming for...


    Which is why I am wanting to have OFFSITE backups to an S3 glacier bucket...


RAID5 can handle arrays of up to 16 disks, I believe, with less of an impact on performance; offsite backup of the contents of the array gives me redundancy of protection. One drive fails: replace the disk and rebuild the RAID group. Two drives fail (at the same time?): replace them, re-establish the RAID group, and restore from backup.


With the parity of RAID you effectively lose the capacity of one or more drives, so with a group of 8 @ 2TB drives, that means a total physical capacity of 16TB (rounded down, because 2TB isn't really 2TB), minus 2TB for RAID5 or minus 4TB for RAID6. Which means with RAID5 I end up with 14TB, with RAID6 I end up with 12TB...
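That arithmetic checks out; as a quick sanity check in shell (nothing assumed beyond the drive counts stated above):

```shell
n=8; size_tb=2                      # eight 2TB drives
raid5=$(( (n - 1) * size_tb ))      # RAID5 spends one drive on parity
raid6=$(( (n - 2) * size_tb ))      # RAID6 spends two drives on parity
echo "RAID5: ${raid5}TB  RAID6: ${raid6}TB"   # RAID5: 14TB  RAID6: 12TB
```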


    Unless of course someone wants to donate 2 more 2TB SATA HDDs to my project, then I could have 16TB of storage in RAID6 or ZRAID2...


    This does have a small business application yes, but it is NOT mission critical. My laptop and desktop PCs are business critical. I can tolerate some downtime on the NAS in an emergency. (NONE of my client facing infrastructure is self hosted)...


    As I am more or less crash coursing on OMV, the eventual desired setup is solid in my mind, just need to figure out how to get there...


    From my initial post...

"My requirements are first and foremost data integrity." Meaning build at least some sort of single-fault hardware tolerance into the storage, PLUS offsite backup. This is disaster recovery territory here, but also not RIGHT NOW disaster recovery. With what I do, I can tolerate taking up to a week for disaster recovery. Longer if needed due to total facility loss from flood, fire, or being wiped off the map by a hurricane. This is why I want to run borgbackup from my client hosts to shared storage on the NAS, and then sync the NAS, including the borg backups, to an S3 Glacier bucket.
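That two-stage flow might look something like this. A sketch only: the repo path, hostnames and bucket name are invented, and the Glacier-class upload could equally be done with rclone or an OMV plugin instead of the AWS CLI.

```shell
# Stage 1: each client pushes a deduplicated, versioned archive to the NAS
borg create --compression zstd \
    backupuser@nas:/srv/backups/laptop::'{hostname}-{now}' /home

# Stage 2: the NAS syncs the backup share offsite into a Glacier-class bucket
aws s3 sync /srv/backups s3://example-dr-bucket/backups \
    --storage-class DEEP_ARCHIVE
```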


RAID5 simply protects the integrity of the "disk" itself, meaning a failure of one disk does not constitute a failure of the entire RAID group and thus the volume. It does NOTHING to protect against multi-disk failure, destruction of the facility where it is housed, etc... That is where offsite backups come into play; that is where S3 Glacier comes into play. This is much like how we did it in a prior job, where the Netbackup server at location A would have an offsite mirror at location B in a separate region, say Texas and New Jersey: if the New Jersey facility has a disaster hit it, Texas has the data; if Texas gets wiped by a hurricane, New Jersey has the data. Plus proper backups give you fallback for when a user (or administrator) does something stupid and deletes the wrong data and needs an archival copy to keep working.


    "Secondly maintain data confidentiality. Working on securing it, and keeping my network unavailable to outside influences. Not terribly worried about local compromises." So network monitoring and a good firewall...


"Lastly maximize space utilization efficiency via compression and deduplication where possible." This and my first priority are where I need to compromise and give RAID5 a good long look... And it is why I am looking at ZFS, which, due to hardware limitations, limits how much storage I can actually have...


Not sure where the conclusion comes from that RAID5 on 8 disks isn't a good idea but RAID6 is... What is the logic behind that statement? I know the most common RAID5 configuration is 4 disks, though...


    Mind you, I am aiming for at least 14TB usable in this array. Having said that, I would happily pull the 3TB mismatched drive, in favor of 2 more 2TB drives for a total of 10, and configure RAID6 as it would reach my target capacity... Anyone want to donate? (Just kidding).


I am building this NAS out of repurposed, basically e-waste, hardware because of budget limitations. This is not an ideal situation or project, but it is better to run what ya brung, as it were, than not run at all.


The whole intent here is to centralize things commonly needed across multiple devices: storage for my Raspberry Pi Docker Swarm cluster for example, or less commonly but semi-frequently accessed data, such as older audio and video assets for my video projects, etc... that I don't want gobbling up the limited drive space on the laptops in particular. (The desktop / mini PC has a 2TB NVMe and really is no issue; my laptop however, 1TB, and I keep filling it...)


    • Official Post

    So much explanation wasn't necessary. If you've got everything so clear, you don't need much advice. Good luck.

• Well, I didn't read/understand everything, but I think you should consider installing the kernel plugin and the proxmox kernel, and using its native ZFS.

    BTW: With eight HDDs I'd not use RAID5/ZFS-1 any more.


    Best regards

BigBackup-NAS: Qnap TS-653D with J4125 SoC (4C/4T), 2x 2.5Gb LAN, HDMI and 16GB RAM + up to 6x14TB + OMV 7.7.13 with proxmox kernel 6.14.8 booting from a NVMe in PCIe x2 extension slot


  • So much explanation wasn't necessary. If you've got everything so clear, you don't need much advice. Good luck.

    Trying to wrap my head around the newer technologies... Never used ZFS before... Seems to have advantages... Just not sure the overhead it puts on the system is worth it...


• Well, file system discussion is similar to OS discussion: there are systems that came from the high end and worked their way down, and systems that started on small computers and grew up alongside them. ZFS is one of the former - brought down especially by (Free)BSD - and md with ext4 is one of the latter. Btrfs is a later development in between, trying to bring the best of ZFS to smaller hardware.


During the past 10 years I have used all three filesystems and can say that the FreeBSD implementation of ZFS was my favorite - until FreeNAS (today TrueNAS) moved from FreeBSD to Debian Linux (sic!), and the branch NAS4Free (today XigmaNAS) still does not really support multi-Gb LAN. And btrfs users (Synology and Rockstor) still warn about using it in RAID5 or 6 configurations without a UPS...


So when I got the hardware below and saw that the latest Proxmox relies on ZFS too, I decided to go back to OMV - I had already used versions 2 and 3 - and use the Proxmox 8 kernel with it, which supports ZFS natively rather than as the add-on of former versions.


It looks to have been a good decision, even if ZFS does not yet seem to allow reducing the pool size the way md or btrfs do. But that is just a matter of making sure what I really want - i.e. whether I really want to accept a possible breakdown of my whole 6-HDD RAID5/ZFS-1 if I need to replace one of the disks, or whether I'd rather set up a RAID6/ZFS-2, which still allows a second disk to give up while resyncing the RAID...


BTW: That is what I meant when I wrote that I would not use RAID5/ZFS-1 with eight HDDs any more.


    • Official Post

    Trying to wrap my head around the newer technologies... Never used ZFS before... Seems to have advantages... Just not sure the overhead it puts on the system is worth it..

    Without dedup, encryption, etc., there's very little overhead to speak of. (These options are not on, by default.)

    I'm running a plain RAIDZ1 on an older Atom CPU. It's doing fine.

• I know md RAID is rock solid, at least in my experience. No, not as good as, say, a PERC, but I am not putting this together using PowerEdge hardware... I am scraping together something from, effectively, an e-waste pile... And ext4 utterly lacks deduplication, which I am not certain will be a huge help, but let me give you a rundown.


Right now my CasaOS Raspberry Pi 4 has 1.3TB of its 1.8TB available (2TB rounded down) in use between the OS and my own video and other archived assets. Both mine and my GF's laptops have 1TB drives that are about 75% full, my desktop workstation has a 2TB NVMe with about 1.4TB on it, my RPi 5 16GB Linux host has a 4TB NVMe with about 1.9TB on it... and there is a lot of the same data on all of them. Common video clips, family photos, etc...


    I do a lot of video / audio work, that data gets big. Ish...


    I would like to not have to redo this NAS in the next couple of years at least... So trying to do this as right as I can, with what I have or can get on the cheap...


    • Official Post

Traditional RAID has been around for a while and it works. But the original reasons for it have largely been overtaken by events. They were:

- Aggregating relatively small drives to create larger-scale storage. (Today's drives are huge.)

- Increasing throughput by using parallel reads and writes. (Today, a single SATA consumer drive, which tend to be the slowest, can saturate a 1Gb network connection.)

Traditional RAID still suffers from the "write hole", where data may be irretrievably lost during a write if power is cut (meaning a UPS really should be used with an mdadm server), and it can silently corrupt data (there's no mechanism to protect against this).

    No not as good as say a PERC, but I am not putting this together using PowerEdge hardware...

Hardware RAID has its downsides as well:

- If the server dies, the RAID drive set can't be moved to new hardware without using the same controller (or at least the same family of controller). The same applies if the controller dies - you're looking for a new controller or, if using older e-waste, a replacement on eBay.

    If using mdadm software RAID, a RAID set can be transferred to new hardware.
    ____________________________________________________________________________________________

    Right now my CasaOS Raspberry Pi 4 has 1.3TB of 1.8TB available (2TB rounded down) in use between OS, and my own video and other archived assets. Both mine and my GFs laptops have 1TB drives that are about 75% full, my desktop workstation has a 2TB NVME about 1.4TB on that, my Rpi-5 16B Linux host has a 4TB NVME with about 1.9TB on it... and there is a lot of the same data on all of them. Common video clips, family photos, etc...

    This sounds, to me, like you have a LOT of irreplaceable data. If I were you, I'd be looking for two things:


    - Data Integrity (ZFS, BTRFS or SnapRAID&mergerFS).

- Solid backup (that is easily restorable).

    If you have a lot of e-waste to work with, hardware shouldn't be a problem. Storage space is another matter.

I'm running and testing all of the above (ZFS, BTRFS and SnapRAID) anecdotally, but the only one I truly trust is ZFS. BTRFS still has issues that the project will admit to, and it has some issues with snapshots on deep directory structures with lots of little files. (At least in times past.)

SnapRAID&mergerFS works well with USB-connected drives and low-powered hardware, but it lacks versioned backup / restoration. If dropping back to an older version of files, folders or an entire hard drive, you have one choice - the state of the last sync.

My personal recommendation, for the best possible file integrity and versioned snapshot flexibility, using a filesystem that is time-tested and mature, would be ZFS using vdevs of zmirrors. The reasons why are stated in this -> doc.
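A pool of mirrored vdevs, for comparison, sketched with placeholder device names: the same eight 2TB disks give 8TB usable instead of 14TB, but each pair resilvers independently and quickly, and capacity can later be grown one pair at a time.

```shell
# Four mirror vdevs in one pool; any one disk per pair can fail
zpool create -o ashift=12 tank \
    mirror /dev/sdb /dev/sdc \
    mirror /dev/sdd /dev/sde \
    mirror /dev/sdf /dev/sdg \
    mirror /dev/sdh /dev/sdi
```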

Since additional hardware is not an issue, backup (and very easy restores) can be done simply by -> Rsync'ing data to local storage media or to another platform.

  • I am going to try to answer this in manageable pieces. Grab some coffee and settle in, this is going to be long. Here goes nothing...

    I am used to what I would consider traditional fairly conservative technologically organizations, not on the bleeding edge, more like on the decaying edge... The kind of shops that use RHEL because it is stable AND industry supported, on hardware configurations that more or less haven't seen considerable architecture changes in 25 years. This is a bit of an exaggeration, but hopefully you get the idea...


    I am familiar with the risks of hardware RAID, had plenty of PowerEdge servers that lost a mainboard with integrated PERC and had to carry over to a new mainboard. The configs are on the disk group, but have to be recovered when the new hardware gets initially set up... Not awful, not fun.


Since I am using e-waste, effectively, new big drives are not in the budget, nor will they be for a good while... HOWEVER, it is the plan long term to swap the many-drive RAID group for even a mirror of BIG drives and call it good. I know going to a new RAID group is going to be tough; not impossible, but tough... My ideal / ultimate goal would be 32TB of RAID6. That meets the max RAM for my host and the requirements of ZFS, plus my potential data needs. 10 4TB disks, or 6 8TB disks, in RAID6 would do the job.


The write hole / power loss issue is a consideration. I DO have a UPS, and a wonky power cable - or had a wonky power cable; I dug a fresh, tight-fitting cable out of my spares boxes... I have fresh batteries in my UPS and it is set up with NUT, and tested, no problem. The ONLY device connected to this UPS is the NAS box. The other UPS holds the Raspberry Pis, the ONT, router, switch and VoIP gateway (combined, roughly the same wattage as the NAS). I need to figure out how to get NUT to work on one of the Pis and use it to share a power-loss signal with NUT on the other nodes. The rest can run to failure.
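For the NUT-sharing question, the usual pattern is netserver mode on the UPS-attached host and netclient mode on the Pis. A sketch only; the hostnames, UPS name and credentials below are placeholders.

```shell
# On the NAS (UPS attached): set /etc/nut/nut.conf -> MODE=netserver,
# and add a monitoring user in /etc/nut/upsd.users

# On each Pi: set /etc/nut/nut.conf -> MODE=netclient, then in
# /etc/nut/upsmon.conf point at the NAS's UPS:
#   MONITOR myups@nas.local 1 monuser secretpass slave

# Verify a Pi can see the UPS over the network:
upsc myups@nas.local ups.status
```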


The 6Gb/s speed of SATA III does not exceed the 10GBase-T network on my NAS or the SFP module in my switch, but it DOES saturate the gig and 2.5-gig clients, so I'm not exactly worried about how fast the disks are, yet...


I do not have a hardware RAID except the one built onto the mainboard, which CAN do RAID 0 or 1. I used to have it set up as a RAID1 mirror and it saved my tail when one disk failed...


Most likely mdadm RAID is what I am looking at doing: no added hardware needed, and it can be moved to a replacement host if needed.


Backup is going to be to S3 Glacier storage. Offsite, and affordable enough. Although I am open to considering other options that are more cost effective...


Something you didn't mention, and I clipped from the quote, was offsite backups. I am on the Texas gulf coast, in evacuation zone A, meaning when an evacuation order is issued due to a hurricane, the first people made to leave are my community... Getting wiped off the map entirely is a distinct possibility, which is why I want offsite, or at the very least VERY mobile, backups. A 20TB USB 3.0 HDD would be more than fine to back up to and snag off into the campervan if I have to evacuate... That does not address more sudden disasters though - tornado, fire, etc... Which puts me back to cloud storage as my best option...


    I know from what I have tried so far, ZRAID is a pain to re-establish when moving to new hardware, not sure if that was an issue with ZRAID, or an issue with the hardware as my attempts were with the eSATA controller and enclosure. Just FWIW, avoid eSATA if you can... what a pain!


So it looks like at this point I am aiming at doing this either smartly or incredibly stupidly, but here goes.


• Single-source PSU, since it is what I have, but at least UPS protected. The UPS has fresh batteries and NUT configured / tested. It DOES power off 30 seconds after a power loss. This is what I want.
• mdadm RAID5 group. Some will say I should do RAID6 with 8 drives; the documentation doesn't support that. It could just be an abundance of caution, which is good, but I am doing a budgetary balancing act. If I could stab 2 more 2TB disks in here, which is not in the budget right now, I would definitely do RAID6...
• ZFS on TOP of mdadm RAID5.
• The 2TB external storage devices (3 of them) previously discussed will hold deep-archive stuff: tarballs of my old audio / video project assets, tax filings, that sort of stuff. Keep that in my laptop bag. Label the external HDDs 1 and 2 to keep them straight, and label the NVMe 3, as I have a 4TB NVMe in the same style of enclosure already in my laptop bag that holds current project asset files and personal entertainment files (nothing spicy, my GF is sufficient for that...)
• borgbackup TO the NAS from my networked clients.
• And here is where I am going to be asking about configs / documentation: how to set up the S3 plugin and back up the NAS to S3 or any other cloud archival storage provider.

On the amount-of-e-waste issue: without going into the how and why - it was an orphaned project due to life getting in the way - here goes.

    Outside of the NAS box as assembled now I have...

• Antec Sonata mid-tower case, original model from 2006 I think; this has housed a couple of builds.
• Antec TruePower 380 PSU.
• 2 @ Diablotek 450W PSUs. (The Diablotek cases are going to the recyclers as I type...)
• MSI 970A-G43 mainboard, may be faulted, may just have a bad CMOS battery; need to test.
• AMD FX-8350 CPU and cooler.
• 2 @ MSI 970A-G46 mainboards.
• 2 @ AMD Sempron 190 dual-core 2.5GHz with coolers.
• 3 @ Corsair Vengeance 32GB DDR3 kits (4x8GB).
• Brand-unknown 16GB DDR3 kit (4x4GB).
• 3 @ MSI Radeon PCIe x16 cards. Particulars unimportant, just that they're there... And these do NOT like the x1 to x16 adapter riser!
• 3 @ brand-unknown and not really relevant SATA internal DVD/RWs...

To match my current NAS box I would need the case, LSI HBA, 10-gig ethernet card, and disks... which may happen down the road, as a second NAS box, when I can afford the bigger disks...


    Things I have discovered lately about this hardware.

• eSATA hardware actually worked reasonably well with the original, and now lost, Silicon Image controller running CentOS 6. Anything more recent has given me issues. Giving up on it.
• Gigabit ethernet isn't really as fast as you would hope when you are shoving large amounts of data around your network.
• Gigabytes and terabytes are like money: the more you have, the more it seems you need.


  • HBA arrived. Hardware configuration now 100% complete.

    mdadm RAID group being built. This is going to take some time to do what it needs to do, but that's fine. No rush.
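While the initial sync runs, progress can be watched, and the array made persistent across reboots, roughly like this (the /dev/md0 array name is assumed; on Debian-based systems like OMV the config lives in /etc/mdadm/mdadm.conf):

```shell
cat /proc/mdstat                    # shows resync progress and ETA
mdadm --detail /dev/md0             # per-array state and member disks

# Record the array in mdadm.conf so it assembles at boot
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u
```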


    • Official Post

I did forget to mention that in some cases (many?) hardware RAID cards filter out SMART data, preventing it from passing through to the OS. For proactive drive monitoring, that's not good. For that reason alone, I flashed a PERC card to IT mode (JBOD) to get clear access to SMART data.

    I know from what I have tried so far, ZRAID is a pain to re-establish when moving to new hardware, not sure if that was an issue with ZRAID, or an issue with the hardware as my attempts were with the eSATA controller and enclosure.

    It must have been something to do with eSATA - I can't speak to that. However, I recently (as in a few days ago) imported a RAIDZ1 pool (the ZFS equivalent of RAID5) into a second OMV box that is distinctly different from the first.

Due to a motherboard going bad, in the first box the drive array was connected via a hardware RAID card (SAS or SATA) that was flashed to JBOD operation. In the second, the SATA ports were standard on the motherboard. In the OMV ZFS plugin, I used zpool import with the -f flag (force). When the pool showed up, I clicked the Discover button and the pool's file systems showed up under Storage, File Systems. That, as they say, "was it".
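For anyone following along, that import workflow from the shell side is roughly this (the pool name 'tank' is a placeholder):

```shell
zpool import              # scans attached disks and lists importable pools
zpool import -f tank      # -f forces it if the pool was never exported
zpool status tank         # confirm every vdev and disk came across intact
```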
    ________________________________________________

The difference between traditional mdadm (software) or hardware RAID and ZFS is substantial. All RAID does is aggregate disks and provide for drive restoration. For the exact same number of disks, in the RAID5 and 6 equivalents, ZFS will give you that along with snapshots that preserve all the previous states of an individual file for up to a year. This means if you have a video project going and you edited your way into a dead end, you could go back to the point in time where you went off the garden path. ((Here's a How-To doc for setting up fully automated, rotating and self-purging snapshots -> zfs-auto-snapshot.))
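As a concrete illustration of that recovery path (dataset and file names are invented): snapshots are read-only and browsable, so pulling back a single old file is a plain copy.

```shell
zfs snapshot tank/projects@before-big-edit    # point-in-time marker
zfs list -t snapshot tank/projects            # see what snapshots exist

# Retrieve one file as it was at snapshot time via the hidden .zfs dir
cp /tank/projects/.zfs/snapshot/before-big-edit/intro.mov /tank/projects/
```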

Again - the same number of disks, the same amount of storage, etc., but with restorable previous file versions and actual data integrity: no write hole, and no silent bit rot as a drive becomes marginal and slowly begins to die. This is not something to take lightly, especially if you're using older disks to house irreplaceable data.

    But, there is something to be said about going with what you're comfortable and familiar with.


    As they say, it's your call.
