How to remove a drive from SnapRAID + UnionFS?

  • So, it turns out one of my drives is failing and I need to send it for RMA.


    My problem is that after removing the drive from the system (having first copied all of its data to a different drive in the SnapRAID array): 1. SnapRAID is refusing to behave; 2. even though I deleted the drive in the OMV GUI, SnapRAID says the drive is missing and I can't sync; which leads to 3. my data being inaccessible, despite being on the drives.


    I removed the hard drive both in the SnapRAID UI and under Disks. So what am I supposed to do now to make it all work again? There is no explanation in the SnapRAID manual of how to actually remove a drive; it only tells you how to replace one. Does this mean my NAS is out of commission until my replacement drive arrives? If so, then SnapRAID is mostly pointless, as it has caused more headaches than it's supposed to solve. :thumbdown:


    Code
    root@OMV:/srv# snapraid sync -E
    Self test...
    Loading state from /srv/dev-disk-by-id-ata-TOSHIBA_HDWQ140_Z79ZK0SYFPBE-part1/snapraid.content...
    Decoding error in '/srv/dev-disk-by-id-ata-TOSHIBA_HDWQ140_Z79ZK0SYFPBE-part1/snapraid.content' at offset 90
    The file CRC is correct!
    Disk 'sda' with uuid 'a5751f9a-7dbe-4afe-abc3-11bc406d4c66' not present in the configuration file!
    If you have removed it from the configuration file, please restore it
    root@OMV:/srv#

    OMV 6.x, Gigabyte Z270N-WiFi, i7-6700K@3GHz, 16GB DDR4-3000, 4x 4TB Toshiba N300, 1x 60GB Corsair GT SSD (OS drive), 10Gbps Aquantia Ethernet

    • Official post

    Does this mean my NAS is out of commission until my replacement drive arrives? If so, then SnapRAID is mostly pointless, as it has caused more headaches than it's supposed to solve

    Snapraid doesn't have to be used while the drive is missing. So, you should be able to keep using the drives until you get the new one. You just won't have proper redundancy. A raid5 array would be in the same situation.

    my data is inaccessible, despite being on the drives.

    Snapraid doesn't provide access to data. I don't know how it affected your data unless you mean the data on the failed drive (which I assume was not the parity drive). With a raid5 array, when you lose a drive, you still have access to all data. Snapraid doesn't work that way. It does allow you to recover the data but not until you replace the drive (as you have found).

    omv 7.0-32 sandworm | 64 bit | 6.5 proxmox kernel

    plugins :: omvextrasorg 7.0 | kvm 7.0.9 | compose 7.0.9 | cputemp 7.0 | mergerfs 7.0.3


    omv-extras.org plugins source code and issue tracker - github


    Please try ctrl-shift-R and read this before posting a question.

    Please put your OMV system details in your signature.
    Please don't PM for support... Too many PMs!

  • I haven't lost any data, as I copied it over from the failing primary drive to the secondary (empty) drive in the SnapRAID array, but apparently SnapRAID doesn't want to play along with that, even though that seems to be what the instructions tell you to do in case of a drive failure.
    As such, all the data is there, but because I can't remove the failed drive somehow, I can't sync and make SnapRAID understand that the data is there.
    There are also zero instructions on how to remove a drive from SnapRAID, only how to replace one.
    I have more than enough empty space on the other two drives for the data, but apparently that's a no-go...

    OMV 6.x, Gigabyte Z270N-WiFi, i7-6700K@3GHz, 16GB DDR4-3000, 4x 4TB Toshiba N300, 1x 60GB Corsair GT SSD (OS drive), 10Gbps Aquantia Ethernet

  • The instructions on how to change the config file are also missing, which is kind of a critical part.
    Just deleting the drive details doesn't work...


    The instructions are not missing. Here are the steps from the FAQ.


    • Change in the configuration file the related "disk" option to point to an empty directory
    • Remove from the configuration file any "content" option pointing to such disk
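
    For illustration, if the removed drive had been "d2" on a hypothetical mount /srv/disk2 (names and paths invented here, yours will differ; note the "disk" option is spelled "data" in the config quoted later in this thread), the edit would look something like this:

    Code
    # /etc/snapraid.conf -- before
    data d2 /srv/disk2/
    content /srv/disk2/snapraid.content

    # after: "data" points at an empty directory, the "content" line is removed
    data d2 /srv/empty/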

    --
    Google is your friend and Bob's your uncle!


    OMV AMD64 7.x on headless Chenbro NR12000 1U 1x 8m Quad Core E3-1220 3.1GHz 32GB ECC RAM.

  • I haven't lost any data, as I copied it over from the failing primary drive to the secondary (empty) drive in the SnapRAID array, but apparently SnapRAID doesn't want to play along with that, even though that seems to be what the instructions tell you to do in case of a drive failure.
    As such, all the data is there, but because I can't remove the failed drive somehow, I can't sync and make SnapRAID understand that the data is there.
    There are also zero instructions on how to remove a drive from SnapRAID, only how to replace one.
    I have more than enough empty space on the other two drives for the data, but apparently that's a no-go...

    I think what you did is fundamentally incorrect. You should have copied the data to a new drive and then replaced the failing drive with the new drive in the snapraid array. Then you could run a check and sync and be good to go.


    Since you did what you did, you now have a whole bunch of data out of place in your array. Assuming you haven't synced (it seems snapraid is protecting you from doing so), you can still replace the failed drive, copy the data back, and check/fix the array; or simply replace the drive and fix the array, in which case it will rebuild that drive using the parity, but that will probably be slower.


    Or follow the instructions at the link gderf posted to decrease your array by 1 disk. You will not have parity protection until the resync is complete.

  • I think what you did is fundamentally incorrect. You should have copied the data to a new drive and then replaced the failing drive with the new drive in the snapraid array. Then you could run a check and sync and be good to go.
    Since you did what you did, you now have a whole bunch of data out of place in your array. Assuming you haven't synced (it seems snapraid is protecting you from doing so), you can still replace the failed drive, copy the data back, and check/fix the array; or simply replace the drive and fix the array, in which case it will rebuild that drive using the parity, but that will probably be slower.


    Or follow the instructions at the link gderf posted to decrease your array by 1 disk. You will not have parity protection until the resync is complete.

    And if you don't have a new drive? I need to RMA my drive. I only have these two drives to copy the data to, as the fourth drive is the parity drive.


    I need access to my data while I RMA my drive, otherwise running any kind of silly crap like this is pointless. It's supposed to protect my data, no? Not make it inaccessible.



    The instructions are not missing. Here are the steps from the FAQ.


    • Change in the configuration file the related "disk" option to point to an empty directory
    • Remove from the configuration file any "content" option pointing to such disk

    And how do I do that from the OMV GUI? The config file is all greyed out...

    OMV 6.x, Gigabyte Z270N-WiFi, i7-6700K@3GHz, 16GB DDR4-3000, 4x 4TB Toshiba N300, 1x 60GB Corsair GT SSD (OS drive), 10Gbps Aquantia Ethernet

  • And if you don't have a new drive? I need to RMA my drive. I only have these two drives to copy the data to, as the fourth drive is the parity drive.
    I need access to my data while I RMA my drive, otherwise running any kind of silly crap like this is pointless. It's supposed to protect my data, no? Not make it inaccessible.



    And how do I do that from the OMV GUI? The config file is all greyed out...

    You don't have an adequate understanding of SnapRAID. Please read the FAQ and manual.


    Not all that can be or needs to be accomplished in SnapRAID can be handled in the GUI. You are going to have to use the command line and an editor.
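
    For example, over SSH as root (assuming a stock OMV setup where the file lives at /etc/snapraid.conf, as quoted later in this thread):

    Code
    nano /etc/snapraid.conf    # or any other text editor, e.g. vi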

    --
    Google is your friend and Bob's your uncle!


    OMV AMD64 7.x on headless Chenbro NR12000 1U 1x 8m Quad Core E3-1220 3.1GHz 32GB ECC RAM.

  • TheLostSwede,
    If availability is an issue, perhaps regular RAID would better suit you, although in either case, if you're not prepared with replacement drives, you might be in over your head. Businesses and networks that employ RAID systems don't wait to RMA a drive. In fact, they won't bother with a refurbished drive, period. When a drive fails, a new one goes in and they move on.


    As was stated earlier, you can change your array from 3 drives to 2 and re-sync, but you'll need to read the manual and will have to use the command line.


    My recommendation would be to get a new drive ASAP and get your array repaired. Then RMA the bad drive, and when its replacement comes, keep it as a spare for use in times like these.

  • You don't have an adequate understanding of SnapRAID. Please read the FAQ and manual.
    Not all that can be or needs to be accomplished in SnapRAID can be handled in the GUI. You are going to have to use the command line and an editor.

    Which is why I'm asking for help here. The manual is pure nonsense to me at this point, so reading it isn't helping, as I can't find the answers I need.



    TheLostSwede,
    If availability is an issue, perhaps regular RAID would better suit you, although in either case, if you're not prepared with replacement drives, you might be in over your head. Businesses and networks that employ RAID systems don't wait to RMA a drive. In fact, they won't bother with a refurbished drive, period. When a drive fails, a new one goes in and they move on.


    As was stated earlier, you can change your array from 3 drives to 2 and re-sync, but you'll need to read the manual and will have to use the command line.


    My recommendation would be to get a new drive ASAP and get your array repaired. Then RMA the bad drive, and when its replacement comes, keep it as a spare for use in times like these.

    No, regular RAID is too much of a risk if a drive fails, as I only have a four-bay NAS. Not everyone has the cash to have spare drives sitting on a shelf, unfortunately, and this is why I picked something else that was supposed to give close to the same redundancy, but clearly it doesn't work as promoted.


    Well, how? This is the problem: I can't figure out how, and the so-called manual doesn't provide instructions. I can get around in Linux, but I'm far from an expert at it, and these kinds of things make me just want to throw it all out. It really gets to me that Linux is so convoluted when it comes to things that should be fairly logical to do, and it's not the first time I've run into issues that apparently no-one else is having or that simply aren't documented at all.


    Again, I don't have the cash to get another drive. I've already RMA'd the bad drive, but it's going to take a week or two to get it back.


    So again, to my question: how do I shrink the SnapRAID array to three disks so I can access my data?

    OMV 6.x, Gigabyte Z270N-WiFi, i7-6700K@3GHz, 16GB DDR4-3000, 4x 4TB Toshiba N300, 1x 60GB Corsair GT SSD (OS drive), 10Gbps Aquantia Ethernet

  • What do you mean, so you can access your data? You said you copied your data from the failing drive to one of the others. You should be able to see that data. If not, something else has happened...


    The instructions are actually pretty straightforward. Have you read https://www.snapraid.it/manual in addition to the FAQ?



    Quote

    How can I remove a data disk from an existing array?


    To remove a data disk from the array do:

    • Change in the configuration file the related "disk" option to point to an empty directory
    • Remove from the configuration file any "content" option pointing to such disk
    • Run a "sync" command with the "-E, --force-empty" option:

    snapraid sync -E
    The "-E" option tells SnapRAID to proceed even when detecting an empty disk.

    • When the "sync" command terminates, remove the "disk" option from the configuration file.

    Your array is now without any reference to the removed disk.

    So edit /etc/snapraid.conf to remove the failed drive. The configuration file should look something like this:

    Code
    parity /mnt/diskp/snapraid.parity
    content /var/snapraid/snapraid.content
    content /mnt/disk1/snapraid.content
    content /mnt/disk2/snapraid.content
    data d1 /mnt/disk1/
    data d2 /mnt/disk2/
    data d3 /mnt/disk3/

    Note it says to point it to an empty directory, so you can't just remove the reference entirely.
    Then run snapraid sync -E
    Then you can remove the disk reference from snapraid.conf
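
    Putting those steps together, a minimal sketch of the whole removal from the command line (the empty-directory path is hypothetical; adjust to your own mounts and run as root):

    Code
    mkdir -p /srv/snapraid-empty   # empty directory to stand in for the removed disk
    nano /etc/snapraid.conf        # point the removed drive's "data" line at /srv/snapraid-empty
                                   # and delete its "content" line
    snapraid sync -E               # re-sync, accepting the now-empty disk
    nano /etc/snapraid.conf        # afterwards, delete the removed drive's "data" line entirely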

  • There is nothing in SnapRAID that can prevent access to your data. If you can't access the data on your remaining data drives, then something else went wrong. You should also remove the failed drive from your UnionFS pool. Don't run ANY SnapRAID commands until you are positioned to recover the lost drive or are able to reconfigure it properly per the manual and FAQ.
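
    For context, outside the OMV GUI a UnionFS (mergerfs) pool is essentially a list of branch paths, typically a single fstab line, so removing a drive means removing its path from that list. A hypothetical sketch with invented paths, not taken from this system:

    Code
    # /etc/fstab -- before (three data disks pooled)
    /srv/disk1:/srv/disk2:/srv/disk3  /srv/pool  fuse.mergerfs  defaults,allow_other  0 0

    # after dropping disk3 from the pool
    /srv/disk1:/srv/disk2  /srv/pool  fuse.mergerfs  defaults,allow_other  0 0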

    --
    Google is your friend and Bob's your uncle!


    OMV AMD64 7.x on headless Chenbro NR12000 1U 1x 8m Quad Core E3-1220 3.1GHz 32GB ECC RAM.

  • There is nothing in SnapRAID that can prevent access to your data. If you can't access the data on your remaining data drives, then something else went wrong. You should also remove the failed drive from your UnionFS pool. Don't run ANY SnapRAID commands until you are positioned to recover the lost drive or are able to reconfigure it properly per the manual and FAQ.

    Again, the manual is about as helpful as nothing at all.
    The drive has already been removed from SnapRAID, UnionFS, and the disk management interface on the NAS, and it has been physically removed, packaged up, and sent off for RMA. It's the only way I'm going to get a replacement drive.
    I just don't know how to remove it from the config file, which is what I need help with at this moment.

    OMV 6.x, Gigabyte Z270N-WiFi, i7-6700K@3GHz, 16GB DDR4-3000, 4x 4TB Toshiba N300, 1x 60GB Corsair GT SSD (OS drive), 10Gbps Aquantia Ethernet


  • As I have already said, there is nothing in or about SnapRAID that can prevent access to your data. You are looking in the wrong place for the source of your problem.

    --
    Google is your friend and Bob's your uncle!


    OMV AMD64 7.x on headless Chenbro NR12000 1U 1x 8m Quad Core E3-1220 3.1GHz 32GB ECC RAM.
