RAID5 failed drive - Best options?

  • Hey guys,


    beginning of this week one of the drives in my encrypted LUKS RAID5 gave up, so right now I can't really use anything on the drives.


    My question right now is how best to proceed. I would like to do the following things:

    • Use the array and access the data until I've rebuilt it.
    • Replace the 4TB drives with larger ones

    So is there a way to bring the array into a usable state without the fifth drive?

    Is my best way to go:

    • staying with RAID5?
    • replacing the broken drive first and rebuilding the array?
    • then replacing the other, smaller drives with larger ones, rebuilding each time, and expanding the array at the end?


    Recommendations and feedback are more than welcome and appreciated!


    Thank you!

    str0hlke

  • Hi geaves,

    Quote

    I normally stay clear of encrypted arrays, but to help you with your first question, post the output of each of these:

    Thank you very much for the support!


    cat /proc/mdstat

    Code
    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
    md127 : active raid5 sdb[1] sdf[4] sde[3] sdc[2]
          15627548672 blocks super 1.2 level 5, 512k chunk, algorithm 2 [5/4] [_UUUU]
    
    unused devices: <none>


    blkid

    Code
    /dev/sda: UUID="745727ef-7303-4570-bb97-f0c68f8de041" UUID_SUB="d5b083b0-3c28-bbbc-956c-868d16fcfe17" LABEL="NAS:storeme" TYPE="linux_raid_member"
    /dev/sdc: UUID="745727ef-7303-4570-bb97-f0c68f8de041" UUID_SUB="d16a068c-6fab-0715-2a37-b11f4fc236c4" LABEL="NAS:storeme" TYPE="linux_raid_member"
    /dev/sdd1: UUID="e7146065-a39c-44ce-9121-640b4926a3b7" TYPE="ext4" PARTUUID="0005bce4-01"
    /dev/sdd5: UUID="a693916f-cda3-4111-86a5-99771a662b18" TYPE="swap" PARTUUID="0005bce4-05"
    /dev/sde: UUID="745727ef-7303-4570-bb97-f0c68f8de041" UUID_SUB="7425d23b-a24b-4e81-589c-e253f9c5e168" LABEL="NAS:storeme" TYPE="linux_raid_member"
    /dev/sdb: UUID="745727ef-7303-4570-bb97-f0c68f8de041" UUID_SUB="c31ded79-090f-dd7b-16ea-256e7be3945f" LABEL="NAS:storeme" TYPE="linux_raid_member"
    /dev/sdf: UUID="745727ef-7303-4570-bb97-f0c68f8de041" UUID_SUB="a2227676-7a7c-c331-3bdd-166e2fae6377" LABEL="NAS:storeme" TYPE="linux_raid_member"
    /dev/md127: UUID="492ce4e0-26a5-412e-a0b6-9195c9523017" TYPE="crypto_LUKS"
    /dev/mapper/candy: UUID="64c8ec70-528a-4817-a69b-096d85b6c0e9" TYPE="ext4"

    fdisk -l | grep "Disk "

    cat /etc/mdadm/mdadm.conf

    mdadm --detail --scan --verbose

    Code
    ARRAY /dev/md127 level=raid5 num-devices=5 metadata=1.2 name=NAS:storeme UUID=745727ef:73034570:bb97f0c6:8f8de041
       devices=/dev/sdb,/dev/sdc,/dev/sde,/dev/sdf

    Post the type and quantity of drives being used as well.

    • WD RED 4TB
    • 5 drives

    Post what happened for the array to stop working. Reboot? Power loss?

    • Since it's a NAS server it is running 24/7; I just realized that the data/RAID was not accessible.


    Thank you!


    Best,

    s0l

    • Official Post

    According to the output the RAID is active, so the data should be accessible; mdstat confirms that. If it's not, I would suspect it has something to do with the encryption.


    The missing drive from the array is /dev/sda, but you still have it attached, which is fine.


    AFAIK you have to remove the encryption or unlock it (this is why I usually steer clear of these posts), then remove the old drive, install a new one, wipe it, and add it to the array. The array will rebuild; once the rebuild has completed, re-enable the encryption.
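    As a rough sketch of those steps, assuming the failed member is /dev/sda and the array is /dev/md127 as in your output (adjust the device names to your system):

    Code
    # if the old drive is still listed as a member, mark it faulty and remove it
    mdadm /dev/md127 --fail /dev/sda
    mdadm /dev/md127 --remove /dev/sda
    # after physically swapping in the new drive (assuming it appears as /dev/sda again):
    wipefs -a /dev/sda
    mdadm /dev/md127 --add /dev/sda
    # watch the rebuild progress
    watch cat /proc/mdstat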


    All the drives in the array are the same size, so replacing the failed drive is straightforward. However, you mention increasing the space/size; to do that you would really need to start again. Replacing one drive at a time and rebuilding after each swap adds additional stress to the older drives, and more than one could fail during that process.
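    For reference, if you did go the one-at-a-time route, the final step after every member had been swapped and resynced would be a grow, roughly like this (array and LUKS names taken from your output; treat it as a sketch rather than a recipe):

    Code
    # let the array use the full size of the new, larger members
    mdadm --grow /dev/md127 --size=max
    # then enlarge the open LUKS mapping and the filesystem on top of it
    cryptsetup resize candy
    resize2fs /dev/mapper/candy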


    The norm is 4 drives for RAID5, which allows for one drive failure. Once you go over 4 drives you should use RAID6, which allows for 2 drive failures but in your case would reduce the space available for storage (with five 4TB drives: roughly 16TB usable under RAID5 vs 12TB under RAID6).


    Another option is to use unionfs/mergerfs and SnapRAID: replace the failed drive with, say, a 6TB for the SnapRAID parity and use the existing 4x4TB for storage. Can you use LUKS on this? I don't know, but I would assume you can. The one caveat is that SnapRAID is recommended for media-type storage, where you are not accessing your data constantly. The one advantage is that you can use mismatched drive sizes, provided the SnapRAID parity drive/s is/are the largest. I think the consensus is one parity drive to 4 data drives.
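    As a minimal sketch, that 1-parity/4-data layout in /etc/snapraid.conf could look something like this (the mount points are hypothetical):

    Code
    # parity file lives on the largest drive
    parity /srv/parity1/snapraid.parity
    # keep content files on at least two different data drives
    content /srv/data1/snapraid.content
    content /srv/data2/snapraid.content
    # the four data drives
    data d1 /srv/data1/
    data d2 /srv/data2/
    data d3 /srv/data3/
    data d4 /srv/data4/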


    I personally moved away from using RAID for home use and switched to the above. Whilst it is easier to maintain, it still requires some CLI intervention should things go wrong, but it's a lot less hassle than a RAID array. But whatever you do, a backup is a must; neither a RAID setup nor the above is a substitute for running a backup.

  • Hi geaves,


    Quote

    According to the output the RAID is active, so the data should be accessible; mdstat confirms that. If it's not, I would suspect it has something to do with the encryption.

    Yes, I'm able to activate the RAID, decrypt it and mount the drive, but access is somehow limited: some things are not accessible, and sometimes listing directories doesn't generate any output.


    The norm is 4 drives for RAID5, which allows for one drive failure. Once you go over 4 drives you should use RAID6, which allows for 2 drive failures but in your case would reduce the space available for storage.


    Another option is to use unionfs/mergerfs and SnapRAID: replace the failed drive with, say, a 6TB for the SnapRAID parity and use the existing 4x4TB for storage. Can you use LUKS on this? I don't know, but I would assume you can. The one caveat is that SnapRAID is recommended for media-type storage, where you are not accessing your data constantly. The one advantage is that you can use mismatched drive sizes, provided the SnapRAID parity drive/s is/are the largest. I think the consensus is one parity drive to 4 data drives.


    I personally moved away from using RAID for home use and switched to the above. Whilst it is easier to maintain, it still requires some CLI intervention should things go wrong, but it's a lot less hassle than a RAID array. But whatever you do, a backup is a must; neither a RAID setup nor the above is a substitute for running a backup.

    I'm considering backing up the most recent data/backups (4TB) so that I can wipe the array and start over again. Rebuilding and resizing the existing array seems very time-consuming and might end in a total loss.

    SnapRAID seems very interesting due to the low management effort and the flexibility concerning drive sizes. I've also found a blog outlining the OMV setup with SnapRAID and LUKS encryption (https://michaelxander.com/diy-nas/).

    I've read here that per 10TB, 1GB of RAM is being used. Can you confirm this? Because if I buy 5x10TB (four data drives plus one parity), it means I'd need 4GB of RAM just for SnapRAID?


    Is the caveat you've outlined due to the access times/drive speeds of this drive pooling vs RAID? Or what is the reason for SnapRAID not being recommended for constant data access?


    Thanks a lot!


    Best,

    s0l

    • Official Post

    I've read here that per 10TB, 1GB of RAM is being used

    I believe that is related to when a sync is being performed, as this can be set up as a scheduled job and run overnight. The SnapRAID site will give more information than I can; I also found this on SourceForge.

    Or what is the reason for SnapRAID not being recommended for constant data access

    The SnapRAID site will explain more than I can, but I would suspect it has something to do with the checksums and parity, so running a DB which is constantly accessed would make SnapRAID non-viable.

    Is the caveat you've outlined due to the access times/drive speeds of this drive pooling vs RAID?

    Personally I have not noticed any difference, and I am using mismatched drive sizes and manufacturers. Most of my data is media, but I have a general share, one for my wife's teaching stuff, and my own.


    If they don't mind, I'll tag a couple of users on the whys and wherefores of going down this route: gderf crashtest


    Yes, I'm able to activate the RAID, decrypt it and mount the drive, but access is somehow limited: some things are not accessible, and sometimes listing directories doesn't generate any output.

    This could be of concern: a failed drive within an array should not prevent the data from being accessible, albeit more slowly. Could this be caused by the encryption, or possibly by further hardware-related issues which have yet to come to light, i.e. another drive degrading?
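    To help rule out a second failing drive, the SMART data is worth a look. Assuming smartmontools is installed, something like this gives a quick readout for each array member (sdd is your system disk, so it's skipped here):

    Code
    # print the health verdict and the key failure-predicting attributes per drive
    for d in /dev/sda /dev/sdb /dev/sdc /dev/sde /dev/sdf; do
        echo "=== $d ==="
        smartctl -H -A "$d" | grep -Ei 'result|reallocated|pending|uncorrect'
    done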

    • Official Post

    Is the caveat you've outlined due to the access times/drive speeds of this drive pooling vs RAID?

    One of RAID5's "features" is parallel drive I/O. With striped writes and a good controller, RAID drive performance can exceed the maximum throughput of a single drive. This may be helpful where there are a LOT of users who create a LOT of concurrent read activity, AND when used with a server that has multiple network connections.

    If 1Gb Ethernet is being used, the network is a hard bottleneck: 1Gb/s works out to roughly 125MB/s, and the output of a single drive on today's controllers (3 to 6Gb/s SATA) can saturate that. Accordingly, for a file server used at home, the increased throughput provided by traditional RAID is (for the most part) irrelevant.


    Or what is the reason for SnapRAID not being recommended for constant data access?

    The difference between traditional RAID and SNAPRAID is that traditional RAID calculates parity for write operations on the fly, whereas SNAPRAID does it on demand (the SYNC operation).

    Databases (MySQL and others) and similar apps may open files and hold them open continuously. This is a problem for many backup programs and SNAPSHOT'ing file systems, and even for traditional RAID under some circumstances (like when power is lost without a battery on the controller). Some schemes use scripts to "quiet down" write activity while a SNAPSHOT is taken or a backup is created. In any case, SNAPRAID does not deal well with dynamic data sets, meaning files that are modified continuously. Again, this is not a problem for most home servers and users. Even if a DB is restored to an inconsistent state, most DB apps have utilities to clean them up.


    Another reason why SNAPRAID is more geared toward "static data" is:
    SNAPRAID is also considered a type of backup, but it doesn't do incremental backups; there is only one (1) backup state. Essentially, that means a failed drive, or a file or folder, can be restored to the state it was in as of the last SYNC operation. Without getting mired down in the details, for a variety of reasons it makes sense to set SNAPRAID SYNC intervals to one or two weeks. (SYNC operations should not be done too frequently.) Since there is only one (1) state to restore to, in the event of a failure all files added to the SNAPRAID array after the last SYNC operation may be lost. (Notionally, depending on the SYNC interval, that would be files added to a failed drive in the last week or two.) Again, for home use, this is not a big deal. If a lot of work is done, running a manual SYNC operation is easy enough.
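    For scheduling, a sketch of what a weekly SYNC could look like as a cron job (the timing is just an example, and the scrub step, which re-verifies a slice of existing data and parity, is optional):

    Code
    # /etc/cron.d/snapraid -- run a sync every Sunday at 03:00,
    # then scrub 5% of the array to verify existing data and parity
    0 3 * * 0  root  snapraid sync && snapraid scrub -p 5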
    ________________________________________________________

    BTW: While they may be used to restore a failed drive and/or files and folders to a previous state, RAID, SNAPRAID, and SNAPSHOTS are NOT backup. Backup is best characterized by having a full and completely independent second copy of your data.

  • First of all, a big thank you to geaves & crashtest for your input, explanations and support here!


    This could be of concern: a failed drive within an array should not prevent the data from being accessible, albeit more slowly. Could this be caused by the encryption, or possibly by further hardware-related issues which have yet to come to light, i.e. another drive degrading?

    I think another drive is failing soon.



    One of RAID5's "features" is parallel drive I/O....

    ...provided by traditional RAID is (for the most part) irrelevant.


    I/O speed is for sure irrelevant in my use case.


    To summarize the status quo:


    • The current RAID5 is degraded due to one failed drive, and I'm expecting the next one to fail sooner rather than later.
    • The plan of replacing drives one by one (with larger drives) and resizing the array will most likely end in more work and more failed drives than rebuilding the setup from scratch.


    My requirements on the storage / my server:


    1. Storing media files (videos, pictures)
    2. Storing incremental backups of all systems in the network (Mac, Win, Linux)
    3. Hosting some smaller web applications, services, databases (production & dev), virtual machines (VBox) and containers (Docker)


    Based on my requirements I don't see pure SnapRAID as a perfect fit. Therefore I identified two possible ways forward for my planned 5x10TB setup:


    a) Setting up SnapRAID across 3 drives, with 1 drive holding the parity data, combined with a UnionFS and LUKS encryption, for storing media files and the incremental backups. The two other drives would be set up as a RAID1.

    b) Setting up a RAID6.


    With option b) I will most likely end up in the situation I'm in today. Option a) gives me a lot of flexibility due to SnapRAID on one hand, and on the other hand the uptime, thanks to parity, that I need for web applications and services. Most likely it will also increase the setup complexity, but besides an initial sync script for SnapRAID I don't see much more maintenance effort in a 24/7 runtime setup. (?)


    Thanks again to the both of you!


    Best,

    s0l

    • Official Post

    Setting up a RAID6.

    That would at least allow for two drive failures, but rebuilds with the larger drives would take a long time.


    What about another way: just using individual drives, then rsync or rsnapshot to a second drive. The rsync or rsnapshot job could be run overnight, maybe not every night, and the backup drive could even be spun down when not in use.
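    If rsnapshot appeals, a minimal sketch of the retention setup, assuming the second drive is mounted at /srv/backupdisk and the data lives on /srv/datadisk (both paths hypothetical):

    Code
    # /etc/rsnapshot.conf (excerpt) -- note: fields must be TAB-separated
    snapshot_root   /srv/backupdisk/snapshots/
    retain  daily   7
    retain  weekly  4
    # back up the data drive into the snapshot root
    backup  /srv/datadisk/  localhost/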


    I think another drive is failing soon.

    That's not good. That happened to me: I replaced a drive, and while the RAID was rebuilding a second drive fell over; I had to start again and restore from backup.

    • Official Post

    I think another drive is failing soon.

    If you don't have backup, I would focus on backing up before attempting a rebuild operation. A single copy operation, to get your data onto a large internal or external disk, will cause the least wear and tear on your failing drive.

    An Rsync command line operation would do the trick.

    As an example:

    rsync -av /srv/dev-disk-by-label-NAMEofSOURCEdisk/ /srv/dev-disk-by-label-NAMEofDESTINATIONdisk/

    In your case the source disk would be the mount point of your RAID array.

    My requirements on the storage / my server:

    Storing media files (videos, pictures)
    Storing incremental backups of all systems in the network (Mac, Win, Linux)
    Hosting some smaller web applications, services, databases (production & dev), virtual machines (VBox) and containers (Docker)

    I'd agree that SNAPRAID, by itself, is not ideal for your requirements. I'll comment on the reasons later.

    • Official Post

    Regarding your requirements:


    The unionFS plugin (really mergerfs) and SNAPRAID are good for combining drives and for the storage of data files, but there are some limitations. For example, mergerfs uses a method of distributing files between drives that resembles overlayfs, and overlayfs is also an integral part of Docker's design. So, depending on the storage policy used when setting up the unionFS plugin, if Dockers are stored in the drive union, strange behavior may occur. In a similar manner, VMs and client backup apps create file sets that contain many small and deeply nested files that may not get along well with overlayfs.
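    For illustration, a mergerfs pool with an explicit create policy looks roughly like this in /etc/fstab (paths hypothetical; the unionFS plugin generates something equivalent for you):

    Code
    # pool three data drives; "epmfs" = existing path, most free space
    /srv/data1:/srv/data2:/srv/data3  /srv/pool  fuse.mergerfs  defaults,category.create=epmfs,moveonenospc=true  0 0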

    An ounce of prevention is worth a pound of cure.
    It's easy to avoid strange issues by keeping things simple. For backing up clients, and as a location for Docker and VM storage, I use a separate 4TB drive dedicated to those purposes, formatted to EXT4. It's a utility disk. My reasoning was: client backups can be massive, especially if household members store data locally at the client. (They should be encouraged to store data on the server, BTW.) Moreover, there's no need to further "back up" client backup sets. A separate utility disk segregates client backups from your data, which may be irreplaceable.

    These are just some thoughts to mull over as you setup and configure. Others may have other ideas.
