Is my RAID 6 hosed? Any hope of data recovery?

  • Hi


    I'd been running a 4-disk RAID 6 setup for over two years without any issues, until suddenly on June 27
    the disks on my OMV NAS became 100% full, including the NFS-mounted volumes in the RAID set. I'd noticed
    major disk activity via gkrellm overnight, but foolishly didn't investigate before bed.


    Rsnapshot normally backs up two desktop machines onto the RAID array. The next morning I found that one of
    the backup directories was no longer on the RAID but had suddenly landed on the root filesystem of the OMV
    machine, and since it was a full backup (several GB) this accounted for the 100% full reading on the OMV/NAS machine.


    The logs indicated that md had found a "dirty degraded array", presumably due to a faulty sdb, so it kicked
    that disk and then couldn't start the RAID set (see the logs below).


    I bought a new disk and installed it on July 4, and the array rebuilt overnight (see the July 5 RebuildFinished entry below).


    Since then I've been unable to mount or access any data. I've followed the instructions in the Linux RAID Wiki's
    "Recovering a failed software RAID" and "RAID Recovery" pages, but still no success. The results of their
    suggestions are in the attached log file "linux_raid_wiki_logs.txt".
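
    For anyone following along, the commands those wiki pages walk you through boil down to roughly the sketch
    below. The device names are from my box, and this is a reconstruction from memory rather than a verbatim
    transcript, so treat it as an outline only:

    # Check what each member disk thinks about the array (event counters, device roles)
    mdadm --examine /dev/sdb /dev/sdc /dev/sdd /dev/sde

    # Stop the half-assembled array before retrying
    mdadm --stop /dev/md0

    # Attempt a forced assembly with the members that still agree with each other
    mdadm --assemble --force /dev/md0 /dev/sdc /dev/sdd /dev/sde

    # Watch the result
    cat /proc/mdstat
    mdadm --detail /dev/md0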


    I seem to have exhausted most possibilities for recovering my RAID set and data, but I'm posting this in the
    hope that someone out there can give me hope. All this research has indicated that I shouldn't have been
    relying on RAID 6 alone anyway, but it's a bit late for that. And needless to say, I was relying too much on
    the 'failsafe' nature of RAID and didn't also keep adequate backups of valuable data. More fool I.


    Any help appreciated - even if it's just to tell me my RAID sets are hosed!!


    Cheers


    P.S. This line looks ominous: <md0: detected capacity change from 4000528203776 to 0> !!!


    ===============
    Jun 27 16:52:21 keruru kernel: [ 2.912440] md: md0 stopped.
    Jun 27 16:52:21 keruru kernel: [ 2.922315] md: bind<sdb>
    Jun 27 16:52:21 keruru kernel: [ 2.922508] md: bind<sdc>
    Jun 27 16:52:21 keruru kernel: [ 2.922643] md: bind<sde>
    Jun 27 16:52:21 keruru kernel: [ 2.922777] md: bind<sdd>
    Jun 27 16:52:21 keruru kernel: [ 2.922808] md: kicking non-fresh sdb from array!
    Jun 27 16:52:21 keruru kernel: [ 2.922820] md: unbind<sdb>
    Jun 27 16:52:21 keruru kernel: [ 2.927107] md: export_rdev(sdb)
    Jun 27 16:52:21 keruru kernel: [ 2.994973] raid6: sse2x1 588 MB/s
    Jun 27 16:52:21 keruru kernel: [ 3.062926] raid6: sse2x2 1395 MB/s
    Jun 27 16:52:21 keruru kernel: [ 3.130841] raid6: sse2x4 2397 MB/s
    Jun 27 16:52:21 keruru kernel: [ 3.130844] raid6: using algorithm sse2x4 (2397 MB/s)
    Jun 27 16:52:21 keruru kernel: [ 3.130846] raid6: using ssse3x2 recovery algorithm
    Jun 27 16:52:21 keruru kernel: [ 3.130866] Switched to clocksource tsc
    Jun 27 16:52:21 keruru kernel: [ 3.131227] xor: automatically using best checksumming function:
    Jun 27 16:52:21 keruru kernel: [ 3.170797] avx : 6164.000 MB/sec
    Jun 27 16:52:21 keruru kernel: [ 3.171121] async_tx: api initialized (async)
    Jun 27 16:52:21 keruru kernel: [ 3.172809] md: raid6 personality registered for level 6
    Jun 27 16:52:21 keruru kernel: [ 3.172812] md: raid5 personality registered for level 5
    Jun 27 16:52:21 keruru kernel: [ 3.172815] md: raid4 personality registered for level 4
    Jun 27 16:52:21 keruru kernel: [ 3.173218] md/raid:md0: not clean -- starting background reconstruction
    Jun 27 16:52:21 keruru kernel: [ 3.173236] md/raid:md0: device sdd operational as raid disk 1
    Jun 27 16:52:21 keruru kernel: [ 3.173239] md/raid:md0: device sde operational as raid disk 3
    Jun 27 16:52:21 keruru kernel: [ 3.173242] md/raid:md0: device sdc operational as raid disk 2
    Jun 27 16:52:21 keruru kernel: [ 3.173706] md/raid:md0: allocated 0kB
    Jun 27 16:52:21 keruru kernel: [ 3.173745] md/raid:md0: cannot start dirty degraded array.
    Jun 27 16:52:21 keruru kernel: [ 3.173811] RAID conf printout:
    Jun 27 16:52:21 keruru kernel: [ 3.173814] --- level:6 rd:4 wd:3
    Jun 27 16:52:21 keruru kernel: [ 3.173816] disk 1, o:1, dev:sdd
    Jun 27 16:52:21 keruru kernel: [ 3.173818] disk 2, o:1, dev:sdc
    Jun 27 16:52:21 keruru kernel: [ 3.173820] disk 3, o:1, dev:sde
    Jun 27 16:52:21 keruru kernel: [ 3.174025] md/raid:md0: failed to run raid set.
    Jun 27 16:52:21 keruru kernel: [ 3.174071] md: pers->run() failed ...
    ===============
    New disk added - sdb
    ===============
    Jul 5 21:06:18 keruru mdadm[2497]: RebuildFinished event detected on md device /dev/md0, component device mismatches found: 1847058224 (on raid level 6)
    Jul 6 09:45:52 keruru kernel: [ 1195.390879] raid6: sse2x1 249 MB/s
    Jul 6 09:45:52 keruru kernel: [ 1195.458735] raid6: sse2x2 476 MB/s
    Jul 6 09:45:52 keruru kernel: [ 1195.526632] raid6: sse2x4 839 MB/s
    Jul 6 09:45:52 keruru kernel: [ 1195.526638] raid6: using algorithm sse2x4 (839 MB/s)
    Jul 6 09:45:52 keruru kernel: [ 1195.526644] raid6: using ssse3x2 recovery algorithm
    Jul 6 09:45:52 keruru kernel: [ 1195.578970] md: raid6 personality registered for level 6
    Jul 6 09:45:52 keruru kernel: [ 1195.578980] md: raid5 personality registered for level 5
    Jul 6 09:45:52 keruru kernel: [ 1195.578985] md: raid4 personality registered for level 4
    Jul 6 09:45:52 keruru kernel: [ 1195.580003] md/raid:md0: device sdb operational as raid disk 0
    Jul 6 09:45:52 keruru kernel: [ 1195.580012] md/raid:md0: device sde operational as raid disk 3
    Jul 6 09:45:52 keruru kernel: [ 1195.580018] md/raid:md0: device sdd operational as raid disk 2
    Jul 6 09:45:52 keruru kernel: [ 1195.580025] md/raid:md0: device sdc operational as raid disk 1
    Jul 6 09:45:52 keruru kernel: [ 1195.581091] md/raid:md0: allocated 0kB
    Jul 6 09:45:52 keruru kernel: [ 1195.581180] md/raid:md0: raid level 6 active with 4 out of 4 devices, algorithm 2
    Jul 6 09:52:30 keruru kernel: [ 4.186106] raid6: sse2x1 602 MB/s
    Jul 6 09:52:30 keruru kernel: [ 4.254006] raid6: sse2x2 906 MB/s
    Jul 6 09:52:30 keruru kernel: [ 4.321957] raid6: sse2x4 1130 MB/s
    Jul 6 09:52:30 keruru kernel: [ 4.321964] raid6: using algorithm sse2x4 (1130 MB/s)
    Jul 6 09:52:30 keruru kernel: [ 4.321967] raid6: using ssse3x2 recovery algorithm
    Jul 6 09:52:30 keruru kernel: [ 4.368478] md: raid6 personality registered for level 6
    Jul 6 09:52:30 keruru kernel: [ 4.368486] md: raid5 personality registered for level 5
    Jul 6 09:52:30 keruru kernel: [ 4.368490] md: raid4 personality registered for level 4
    Jul 6 09:52:30 keruru kernel: [ 4.369179] md/raid:md0: device sdb operational as raid disk 0
    Jul 6 09:52:30 keruru kernel: [ 4.369185] md/raid:md0: device sde operational as raid disk 3
    Jul 6 09:52:30 keruru kernel: [ 4.369189] md/raid:md0: device sdd operational as raid disk 2
    Jul 6 09:52:30 keruru kernel: [ 4.369194] md/raid:md0: device sdc operational as raid disk 1
    Jul 6 09:52:30 keruru kernel: [ 4.369974] md/raid:md0: allocated 0kB
    Jul 6 09:52:30 keruru kernel: [ 4.372062] md/raid:md0: raid level 6 active with 4 out of 4 devices, algorithm 2
    Jul 6 12:56:15 keruru kernel: [ 4.442184] raid6: sse2x1 739 MB/s
    Jul 6 12:56:15 keruru kernel: [ 4.510060] raid6: sse2x2 1480 MB/s
    Jul 6 12:56:15 keruru kernel: [ 4.577985] raid6: sse2x4 1605 MB/s
    Jul 6 12:56:15 keruru kernel: [ 4.577993] raid6: using algorithm sse2x4 (1605 MB/s)
    Jul 6 12:56:15 keruru kernel: [ 4.577997] raid6: using ssse3x2 recovery algorithm
    Jul 6 12:56:15 keruru kernel: [ 4.622570] md: raid6 personality registered for level 6
    Jul 6 12:56:15 keruru kernel: [ 4.622577] md: raid5 personality registered for level 5
    Jul 6 12:56:15 keruru kernel: [ 4.622580] md: raid4 personality registered for level 4
    Jul 6 12:56:15 keruru kernel: [ 4.623261] md/raid:md0: device sdb operational as raid disk 0
    Jul 6 12:56:15 keruru kernel: [ 4.623266] md/raid:md0: device sde operational as raid disk 3
    Jul 6 12:56:15 keruru kernel: [ 4.623269] md/raid:md0: device sdd operational as raid disk 2
    Jul 6 12:56:15 keruru kernel: [ 4.623273] md/raid:md0: device sdc operational as raid disk 1
    Jul 6 12:56:15 keruru kernel: [ 4.624064] md/raid:md0: allocated 0kB
    Jul 6 12:56:15 keruru kernel: [ 4.624131] md/raid:md0: raid level 6 active with 4 out of 4 devices, algorithm 2
    Jul 6 16:54:43 keruru kernel: [14401.858429] md/raid:md0: device sdb operational as raid disk 0
    Jul 6 16:54:43 keruru kernel: [14401.858442] md/raid:md0: device sde operational as raid disk 3
    Jul 6 16:54:43 keruru kernel: [14401.858449] md/raid:md0: device sdd operational as raid disk 2
    Jul 6 16:54:43 keruru kernel: [14401.858455] md/raid:md0: device sdc operational as raid disk 1
    Jul 6 16:54:43 keruru kernel: [14401.859915] md/raid:md0: allocated 0kB
    Jul 6 16:54:43 keruru kernel: [14401.860000] md/raid:md0: raid level 6 active with 4 out of 4 devices, algorithm 2

    • Official Post

    I've been looking at your thread for a while now and, while I had my doubts, I waited for one of the gurus to chime in (with a potential miracle cure). Usually, there would be a response by now - so... I'll say it. I don't think you're going to be able to resurrect the array. While a single disk failure is "usually" no big deal, each level of intervention (beyond adding a clean, wiped disk) goes a bit farther down the RAID rabbit hole.


    (If I have the sequence of events right:)
    While I didn't analyze the very minute details, it appears you got the repair sequence correct.
    However, after attempting mdadm --create --assume-clean --level=6 --etc., etc.,
    and, thereafter, getting mount messages about a bad superblock, bad filesystem, and so on... I think that's pretty much "it".
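
    (For later readers: the recreate-and-check step generally looks something like the sketch below. The chunk
    size, metadata version and device order shown here are placeholders only; getting any of them wrong is
    exactly how a recreate makes things worse, which is why it's a last resort.)

    # DANGEROUS, last-resort: rewrite the array metadata without touching the data area.
    # Device order, chunk size and metadata version MUST match the original array.
    mdadm --create /dev/md0 --assume-clean --level=6 --raid-devices=4 \
          --chunk=512 --metadata=1.2 /dev/sdb /dev/sdc /dev/sdd /dev/sde

    # Then see whether a filesystem is still visible, read-only first
    mount -o ro /dev/md0 /mnt
    # For ext4, list the backup superblocks without writing anything
    dumpe2fs /dev/md0 | grep -i superblock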


    _________________________________________________________


    So my next question would be: do you have a backup? Many on this forum seem to think RAID is backup and that it keeps their data safe. As I think you're going to find, it doesn't. A full data backup is far better than RAID will ever be. It doesn't have to be expensive either. To get an idea of how I do it, look at my signature below.


    If I were to consider using RAID, I think it would be a BTRFS RAID 1 set, but I wouldn't do it for disk redundancy or availability. BTRFS has a useful feature where it checksums files and scrubs for "bitrot". With a RAID 1 set, BTRFS can repair a completely corrupted file from the copy with a healthy checksum on the second disk. This is good protection against silent data corruption. Even with that, and in any case, I would still run backup servers.
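
    (If you're curious what that looks like in practice, the scrub side is only a couple of commands. A sketch,
    assuming the BTRFS RAID 1 filesystem is mounted at /srv/data, which is just an example path:)

    # Start a scrub of the mounted BTRFS filesystem
    btrfs scrub start /srv/data

    # Check progress and whether any checksum errors were found and repaired
    btrfs scrub status /srv/data

    # Per-device error counters
    btrfs device stats /srv/data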


    My Regrets

  • Hi flmaxey


    Thanks for your commiserations. Much appreciated.


    I'd hoped some OMV RAID guru might have offered a glimmer of hope and a suggestion for recovery, but like you I guess I'd already suspected the worst. One last resort is to post an abbreviated account to the linux-raid mailing list today and see what transpires.


    If I hear anything of note I'll post here in case it's of help to others.


    Meanwhile, in answer to your question: no current backup. I'd been religiously doing backups since acquiring my first desktop machine in 1983, latterly using rsnapshot/rsync. A couple of years ago I just transferred these over to the RAID array, foolishly ignoring all the warnings about keeping separate backups, thinking smartctl and RAID 6 would be fail-safe. No such thing, of course ...
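
    (For the record, the rsnapshot side of it was nothing exotic. The relevant parts of the config were along
    these lines, with paths and host names simplified here, so take it as a sketch rather than my exact file:)

    # relevant lines from /etc/rsnapshot.conf (fields separated by TABs, not spaces):
    #   snapshot_root   /srv/raid/rsnapshot/
    #   retain  daily   7
    #   retain  weekly  4
    #   backup  user@desktop1:/home/    desktop1/
    #   backup  user@desktop2:/home/    desktop2/
    # run from cron, e.g. nightly:
    rsnapshot daily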


    Again, many thanks. Back to ddrescue & photorec!
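
    (In case it helps anyone else in the same hole, the ddrescue step I'm starting with is roughly the following,
    imaging each member disk to a spare drive mounted at /mnt/spare, which is just my own choice of path:)

    # First pass: copy everything that reads cleanly, skip the difficult areas (-n),
    # and keep a map file so the run can be resumed later
    ddrescue -n /dev/sdb /mnt/spare/sdb.img /mnt/spare/sdb.map

    # Second pass: retry the bad areas a few times
    ddrescue -r3 /dev/sdb /mnt/spare/sdb.img /mnt/spare/sdb.map

    # Then let photorec loose on the image rather than the failing disk
    photorec /mnt/spare/sdb.img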

    • Official Post

    If I hear anything of note I'll post here in case it's of help to others.

    Yes, please follow up with an anecdotal account, even if nothing works. And if you manage to piece that array together again, I hope you'll post at least an overview of how you did it.


    Good Luck

  • Can someone explain what exactly went wrong in this case and how can others avoid that?

    What went wrong? Unfortunately, playing with RAID while not thinking about backup.


    How to avoid that? Don't trust that adding redundancy improves data 'protection' or whatever illusions people associate with RAID all the time for no reason.


    RAID is about availability and nothing else. You get no other 'protection' than maybe still having your data online if one disk (or even two disks with RAID 6) fails HARD (disks usually don't do this; instead they die slowly, and then you're out of luck with 'traditional RAID' anyway). If you want to play RAID, then test everything that can go wrong (unfortunately that's not really possible, since without decades of experience with failing RAIDs you cannot even imagine what can go wrong).


    IMO the only 'protecting' RAID modes that make some sense at home are a ZFS mirror (zmirror) or btrfs raid1, and only when combined with periodic snapshots sent to another disk (or, even better, another device somewhere else), periodic scrubs, and testing a disk failure every few months. Only then do you get some protection (snapshots, especially when also sent to another disk/location, provide some backup functionality and, when done correctly, also provide disaster recovery). Some more thoughts here.
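
    (As a rough sketch of what 'periodic snapshots sent to another disk' means with btrfs, with mount points
    that are examples only:)

    # Read-only snapshot of the data subvolume
    btrfs subvolume snapshot -r /mnt/data /mnt/data/.snapshots/2017-07-06

    # Send it to another disk, incrementally against the previous snapshot
    btrfs send -p /mnt/data/.snapshots/2017-07-05 /mnt/data/.snapshots/2017-07-06 \
        | btrfs receive /mnt/backup/snapshots/

    # Periodic scrub to detect (and, with raid1, repair) bitrot
    btrfs scrub start /mnt/data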


    TL;DR: Avoid RAID at home, do backups instead!

  • So you actually don't know what happened?

    What happened is that unfortunately just another OMV user made the huge mistake of confusing availability with data protection/security/safety.


    RAID is ONLY about availability and NOT related to any form of data protection. No one needs availability at home, so instead of wasting resources on unneeded availability, spend them on data protection/security/safety.


    Since it's 2017 in the meantime, we can make use of some non-anachronistic filesystem / volume manager combinations (ZFS and btrfs). It's always a good idea to choose those, since they provide data integrity (checksumming) and ease versioned backups (periodic snapshots, sending them to another disk/location using 'zfs|btrfs send/receive', checking data integrity via periodic scrubs).


    When combined with redundancy, these new approaches even allow for self-healing (use a zmirror, raidz/raidz2 or btrfs' raid1 and you get self-healing for free!).
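
    (The ZFS equivalent, again only a sketch with made-up pool and dataset names:)

    # Snapshot, send incrementally to another machine, and scrub the pool
    zfs snapshot tank/data@2017-07-06
    zfs send -i tank/data@2017-07-05 tank/data@2017-07-06 | \
        ssh backuphost zfs receive -F backup/data
    zpool scrub tank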


    TL;DR: RAID at home is not cool or useful but plain stupid IF you waste resources on redundancy (RAID) that you could have used for data protection/security/safety/integrity instead. When you think about the latter first (BACKUP FIRST! ALWAYS!) then RAID is OK... but still not that useful at home, since it only 'protects' against whole-drive failures.

    • Official Post

    So you actually don't know what happened?

    The short answer is "No". If you read this thread, start to finish, you know everything that we know. bogo_mipps did everything, in the proper order, for "generic" RAID recovery. The fact of the matter is, there is no way to know exactly what happened. Even if the cause were known, for our purposes as home or small business users, it wouldn't matter. There are so many possible RAID failure modes that we might never see the exact same failure twice.


    tkaiser is right in all of the details he laid out. RAID is an enterprise solution for availability, and it's expensive at that. (Cost is less of a consideration where there are anywhere from 50 impatient users, and up, who demand 24x7 availability.) Even in the enterprise environment, RAID is thought of as "a disk", and just as with a single disk, a RAID array can fail. Why? Just as with a single disk, who cares? It's the failure that must be dealt with. The forensics are unimportant until "after" the recovery, but there is no recovery without backup. In the enterprise, in the event of failure, there are usually two backups: one on site and another in a remote location.


    That brings us to the following:

    Since it's 2017 in the meantime, we can make use of some non-anachronistic filesystem / volume manager combinations (ZFS and btrfs). It's always a good idea to choose those, since they provide data integrity (checksumming) and ease versioned backups (periodic snapshots, sending them to another disk/location using 'zfs|btrfs send/receive', checking data integrity via periodic scrubs).
    When combined with redundancy, these new approaches even allow for self-healing (use a zmirror, raidz/raidz2 or btrfs' raid1 and you get self-healing for free!).


    I agree wholeheartedly with tkaiser in the above. I run BTRFS, for bitrot scrubbing, on my i3 main server. I have a backup server (an R-PI with a 4TB USB drive) that duplicates my 24x7 main server once a week, and I have an old 32-bit server that I power up once every 2 or 3 months, replicate shares to, and shut off again. If I lose my main 24x7 server, the R-PI can stand in. Other than the host name, its shares are exact duplicates of my 24x7 server, and it's been tested as a file server on my LAN. It would just be a matter of turning Samba on. The 32-bit box is just a storage box, but it's a second backup of all data, so there are 3 complete copies of my data on three independent devices.
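
    (The weekly duplication itself is nothing more than rsync over ssh, roughly as below; the share paths are
    examples, not my actual layout:)

    # Mirror the main server's shares onto the R-PI's USB disk (run weekly from cron)
    rsync -aHv --delete /srv/dev-disk-by-label-data/shares/ \
        root@backup-pi:/srv/dev-disk-by-label-backup/shares/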


    So, I'll ask you: If you really want to keep your data, what makes more sense? A Hot Rod PC with a RAID array, or 3 independent devices with their own disks for the same price?


    What may be of interest to you is that, other than spreading my disks around (versus throwing them into a RAID array), it's not expensive. An R-PI is $29 and, while it's not fast, it will run OMV. If one wants a "cold" storage server (off most of the time), any old box with SATA drive interfaces can be a file server.
    (**Note: Powered down most of the time is a scenario where spinning hard drives will last a very l-o-n-g time. If exercised for a few hours every couple of months, spinning drives can last for decades. I have a few drives that are well over 10 years old.)


    Finally, there are a few legitimate reasons to run RAID at home but as this thread should demonstrate to all, when running RAID, backup is still a must.

    I have a backup server (an R-PI with a 4TB USB drive) that duplicates my 24x7 main server once a week

    Fine if your backup fits on one disk.


    I don't have a NAS yet. I'm planning to build one that appears as a single ~15 GB volume on my computer when connected. Can't do that without RAID or ZFS/BTRFS to merge several HDs into one logical volume.


    Looks like ZFS has at least as many opportunities to fail as RAID, if not more. This is what I read on the FreeNAS forum about ZFS:



    So ZFS seems very prone to failure if you don't spend thousands of $$$ on hardware, tens of GB of ECC RAM etc. And very inflexible.


    Is there a system designed where, for example, if I choose 25% redundancy and have 4 HDs, if any one fails I lose no data; if 2 fail, I can still recover 66% of my data; if 3 fail, I can recover 33% of the data? Neither RAID nor ZFS/BTRFS can provide that to my knowledge; the failures result in catastrophic loss of all data. It shouldn't be like this, should it?


    It appears that there is no solution that would allow me, in case of failure, to plug the still-undamaged disks into my computer one by one and recover what is still recoverable, be it RAID, ZFS or BTRFS.

    I don't have a NAS yet. I'm planning to build one that appears as a single ~15 GB volume on my computer when connected. Can't do that without RAID or ZFS/BTRFS to merge several HDs into one logical volume.
    ...
    It appears that there is no solution that would allow me, in case of failure, to plug the still-undamaged disks into my computer one by one and recover what is still recoverable, be it RAID, ZFS or BTRFS.


    Mergerfs?

  • Thanks, MergerFS + SnapRAID might fit what I'm looking for!


    Can they be used together with OMV? Or maybe I don't even need OMV if I put MergerFS + SnapRAID on stock Debian and turn on some kind of file sharing?

    So ZFS seems very prone to failure if you don't spend thousands of $$$ on hardware, tens of GB of ECC RAM, etc. And very inflexible.

    No, it is not. Spending thousands of $$$ depends on your use case. I am running a FreeNAS server with ZFS on a very reasonable configuration (price-wise), and the configuration is on par with the advised requirements for running FreeNAS. I won't bother you with all the model details, but, leaving the disks out of the equation, I spent less than 800 euro on my system. Compare that with a QNAP NAS with 8 bays, for example.

    • Mobo (with ECC support but also IPMI) 225 euro
    • CPU (i3 with ECC support) 125 euro
    • Memory (16 GB, DDR3 ECC) 110 euro (would be DDR4 now)
    • Case 65 euro
    • 2 Hard drive cages (for four disks each) with backplane (hot swap) 140 euro
    • PSU 90 euro (don't be too cheap here; it's an important part)
    • Cheap SSD (for boot of OS) 30 euro

    And you would be surprised what nice little systems you can find on ebay etc. for a couple of hundred dollars.


    Inflexible? I don't think so. Lots of choices for how to configure your storage, with all kinds of redundancy.

    Is there a system designed where, for example, if I choose 25% redundancy and have 4 HDs, if any one fails I lose no data; if 2 fail, I can still recover 66% of my data; if 3 fail, I can recover 33% of the data? Neither RAID nor ZFS/BTRFS can provide that to my knowledge; the failures result in catastrophic loss of all data. It shouldn't be like this, should it?

    Well, let's take RAID-Z volumes for example. With RAID-Z, any 1 disk (in a pool of 4 disks, that is 25%) can fail without any data loss. With RAID-Z2 you will survive 2 failed disks without any data loss (for home use the most reasonable choice, in my opinion). RAID-Z3, you can guess. Of course, if you lose more disks than your RAID-Zx configuration covers, you lose the whole pool and thus all the data. But really the only answer to that is a decent backup strategy. If you love your data, you have backups. It's as simple as that.
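
    (On the command line the difference between those levels is a single word when the pool is created; a sketch
    with example device names, one or the other, not both:)

    # 4-disk pool, any one disk may fail
    zpool create tank raidz  /dev/sdb /dev/sdc /dev/sdd /dev/sde

    # Same disks, any two disks may fail (at the cost of one more disk of capacity)
    zpool create tank raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde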

    • Official Post

    Mergerfs?

    Since 15GB would easily fit on a single disk several times over, I'm going to assume you actually meant 15TB.


    I'd ask you this question:
    While you may have 15TB of data, why does it all have to be on a single volume?


    Regardless, there are 8TB drives out there, and Nibb31 is right that Mergerfs (UnionFS) will merge two or three 8TB disks to appear as a single 16TB or 24TB volume. This capability exists in OMV.
    (Since you wouldn't want to be close to 100% full, you'd probably need three 8TB disks or something similar. I'd want to house my data and have at least 20% unused disk space, and more, for room to grow.)
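
    (For reference, a pooled mount of that sort boils down to one mergerfs command, or the equivalent fstab
    entry, which the OMV union filesystems plugin writes for you. The mount points and the 20G reserve here are
    examples only:)

    # Pool three data disks into one branch-aware volume
    mergerfs -o allow_other,use_ino,category.create=epmfs,minfreespace=20G \
        /mnt/disk1:/mnt/disk2:/mnt/disk3 /srv/pool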


    Along other lines, for backup purposes, you might consider backing up "chunks" of your 15TB to multiple disks and, preferably, those disks would be in a second server. Your call.
    ________________________________________________________________________


    In other questions:


    - I have under 4TB of data that I consider to be "irreplaceable". This is what I back up to the R-PI, and it does have a single USB-powered 4TB disk. It certainly isn't a speed demon, but it does a decent job of rsync'ing network shares. More storage could be added to the R-PI with a USB hub and additional disks. ((But if I needed more than 4TB for backing up data, I'd use something other than an R-PI. Running multiple disks on an R-PI with a USB hub would be pushing it, from my point of view.))
    - ZFS is flexible and it will do what you want to do, but I see it as overly complicated. Further, recovery from potential problems takes more, in the way of very specific knowledge of ZFS, than I care to learn.
    - Decent hardware doesn't have to be expensive. There's no reason to spend thousands. I bought a new Lenovo TS140 ThinkServer ($250) that came with 8GB ECC (no disks or OS) from Amazon.com. ((I bought an additional 4GB ECC for $20, delivered, from eBay. While used, MemTest86 reported it as fine. 8GB ECC sticks, used, can be had on eBay for reasonable prices as well.)) So I have a Core i3 with 12GB ECC for well under $300 that is very capable of running ZFS. Lastly, the used server market has an abundance of options.

    Thanks, MergerFS + SnapRAID might fit what I'm looking for!


    Can they be used together with OMV?

    Yes they can.
    The plugins for OMV are openmediavault-unionfilesystems 3.1.17 and openmediavault-snapraid 3.7.1
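
    (If it helps to picture it, a minimal snapraid.conf for three data disks and one parity disk looks something
    like the sketch below; the paths are placeholders, and the OMV plugin generates the file for you:)

    # minimal /etc/snapraid.conf (placeholder paths):
    #   parity   /mnt/parity1/snapraid.parity
    #   content  /var/snapraid.content
    #   content  /mnt/disk1/snapraid.content
    #   data d1  /mnt/disk1/
    #   data d2  /mnt/disk2/
    #   data d3  /mnt/disk3/
    # then, periodically:
    snapraid sync
    snapraid scrub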



    Or maybe I don't even need OMV if I put MergerFS + SnapRAID on stock Debian and turn on some kind of file sharing?

    You could do this as well, but, from your point of view, what is "stock Debian"? A command-line server distribution? I can tell you from experience running a command-line web server that turning on "some kind of file sharing" from the command line can be a royal pain in the @$$. If you use a desktop edition, resources are consumed.
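
    (To give an idea of what "turning on some kind of file sharing" by hand involves, at a minimum it's something
    like the following on stock Debian, before you even get to users, permissions and firewalling; the share name
    and path are just examples:)

    apt-get install samba
    # add to /etc/samba/smb.conf:
    #   [data]
    #       path = /srv/data
    #       read only = no
    #       valid users = youruser
    smbpasswd -a youruser
    systemctl restart smbd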


    If you want relative ease of configuration, OMV is going to make things one heck of a lot easier, and if you want some form of GUI to work from, using OMV as a NAS server is about as lean as it gets for resources consumed.
    (Note that OMV will run on an R-PI, which has a whole 1GB of RAM and a "not-so-fast" ARM processor.)


    But, as it is with most things, we all have our preferences. Good luck with your build and whatever you choose for a NAS OS.
