How to remove a drive from SnapRAID + UnionFS?


      So, it turns out one of my drives is failing and I need to send it in for RMA.

      My problem is that, after removing the drive from the system (having first copied all its data to a different drive in the SnapRAID array): 1. SnapRAID is refusing to behave; 2. even though I deleted the drive in the OMV GUI, SnapRAID says the drive is missing and I can't sync; which leads to 3. my data being inaccessible, despite still being on the drives.

      I removed the hard drive both in the SnapRAID UI and under Disks. So what am I supposed to do now to make it all work again? There are no explanations in the SnapRAID manual on how to actually remove a drive; it only tells you how to replace one. Does this mean my NAS is out of commission until my replacement drive arrives? If so, then SnapRAID is mostly pointless, as it has caused more headache than it's supposed to solve. :thumbdown:

      Source Code

      root@OMV:/srv# snapraid sync -E
      Self test...
      Loading state from /srv/dev-disk-by-id-ata-TOSHIBA_HDWQ140_Z79ZK0SYFPBE-part1/snapraid.content...
      Decoding error in '/srv/dev-disk-by-id-ata-TOSHIBA_HDWQ140_Z79ZK0SYFPBE-part1/snapraid.content' at offset 90
      The file CRC is correct!
      Disk 'sda' with uuid 'a5751f9a-7dbe-4afe-abc3-11bc406d4c66' not present in the configuration file!
      If you have removed it from the configuration file, please restore it
      root@OMV:/srv#
      OMV 4.x, Gigabyte Z270N-WiFi, i7-6700K@3GHz, 16GB DDR4-3000, 4x 4TB Toshiba N300, 1x 60GB Corsair GT SSD (OS drive), 10Gbps Aquantia Ethernet
    • TheLostSwede wrote:

      Does this mean my NAS is out of commission until my replacement drive arrives? If so, then SnapRAID is mostly pointless, as it caused more headache than it's supposed to solve
      Snapraid doesn't have to be used while the drive is missing. So, you should be able to keep using the drives until you get the new one. You just won't have proper redundancy. A raid5 array would be in the same situation.

      TheLostSwede wrote:

      my data is inaccessible, despite being on the drives.
      Snapraid doesn't provide access to data. I don't know how it affected your data unless you mean the data on the failed drive (which I assume was not the parity drive). With a raid5 array, when you lose a drive, you still have access to all data. Snapraid doesn't work that way. It does allow you to recover the data but not until you replace the drive (as you have found).
      omv 4.1.23 arrakis | 64 bit | 4.15 proxmox kernel | omvextrasorg 4.1.15
      omv-extras.org plugins source code and issue tracker - github

      Please read this before posting a question and this and this for docker questions.
      Please don't PM for support... Too many PMs!
      I haven't lost any data, as I copied it over from the failing primary drive to the secondary (empty) drive in the SnapRAID array, but apparently SnapRAID doesn't want to play along with that, even though that seems to be what the instructions tell you to do in case of a drive failure.
      As such, all the data is there, but because I can't remove the failed drive somehow, I can't sync and make SnapRAID understand that the data is there.
      There are also zero instructions on how to remove a drive from SnapRAID, only how to replace one.
      I have more than enough empty space on the other two drives for the data, but apparently that's a no-go...
      OMV 4.x, Gigabyte Z270N-WiFi, i7-6700K@3GHz, 16GB DDR4-3000, 4x 4TB Toshiba N300, 1x 60GB Corsair GT SSD (OS drive), 10Gbps Aquantia Ethernet
    • gderf wrote:

      TheLostSwede wrote:

      There are also zero instructions on how to remove a drive from SnapRAID, only how to replace one.
      Not correct. See:
      snapraid.it/faq#remdatadisk
      If only that worked. See first post.
      The instructions on how to change the config file are also missing, which is kind of a critical part.
      Just deleting the drive details doesn't work...
      OMV 4.x, Gigabyte Z270N-WiFi, i7-6700K@3GHz, 16GB DDR4-3000, 4x 4TB Toshiba N300, 1x 60GB Corsair GT SSD (OS drive), 10Gbps Aquantia Ethernet
    • TheLostSwede wrote:

      gderf wrote:

      TheLostSwede wrote:

      There are also zero instructions on how to remove a drive from SnapRAID, only how to replace one.
      Not correct. See: snapraid.it/faq#remdatadisk
      The instructions on how to change the config file are also missing, which is kind of a critical part.
      Just deleting the drive details doesn't work...

      Instructions not missing. Here are the steps from the FAQ.

      • Change in the configuration file the related "disk" option to point to an empty directory
      • Remove from the configuration file any "content" option pointing to such disk
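      As a concrete illustration, those two edits on a hypothetical config with a failed data disk d3 (the disk name and mount points are invented for the example, not taken from this system) would look like:

```
# Before: d3 is the failed data disk
content /mnt/disk3/snapraid.content
data d3 /mnt/disk3/

# After: the content line is removed and the data line
# points at an empty directory until the sync completes
data d3 /mnt/empty/
```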
      --
      Google is your friend and Bob's your uncle!

      OMV 4.x - ASRock Rack C2550D4I - 16GB ECC - Silverstone DS380
    • TheLostSwede wrote:

      I haven't lost any data, as I copied it over from the failing primary drive to the secondary (empty) drive in the SnapRAID array, but apparently SnapRAID doesn't want to play along with that, even though that seems to be what the instructions tell you to do in case of a drive failure.
      As such, all the data is there, but because I can't remove the failed drive somehow, I can't sync and make SnapRAID understand that the data is there.
      There are also zero instructions on how to remove a drive from SnapRAID, only how to replace one.
      I have more than enough empty space on the other two drives for the data, but apparently that's a no-go...
      I think what you did is fundamentally incorrect. You should have copied the data to a new drive and then replaced the failing drive with the new drive in the snapraid array. Then you could run a check and sync and be good to go.

      Since you did what you did, you now have a whole bunch of data out of place in your array. Assuming you haven't synced (it seems SnapRAID is protecting you from doing so), you can still replace the failed drive, copy the data back, and check/fix the array; or simply replace the drive and fix the array, in which case it will rebuild that drive from parity, but that will probably be slower.

      Or follow the instructions at the link gderf posted to decrease your array by 1 disk. You will not have parity protection until the resync is complete.
    • jollyrogr wrote:

      TheLostSwede wrote:

      I haven't lost any data, as I copied it over from the failing primary drive to the secondary (empty) drive in the SnapRAID array, but apparently SnapRAID doesn't want to play along with that, even though that seems to be what the instructions tell you to do in case of a drive failure.
      As such, all the data is there, but because I can't remove the failed drive somehow, I can't sync and make SnapRAID understand that the data is there.
      There are also zero instructions on how to remove a drive from SnapRAID, only how to replace one.
      I have more than enough empty space on the other two drives for the data, but apparently that's a no-go...
      I think what you did is fundamentally incorrect. You should have copied the data to a new drive and then replaced the failing drive with the new drive in the snapraid array. Then you could run a check and sync and be good to go.
      Since you did what you did, you now have a whole bunch of data out of place in your array. Assuming you haven't synced (it seems SnapRAID is protecting you from doing so), you can still replace the failed drive, copy the data back, and check/fix the array; or simply replace the drive and fix the array, in which case it will rebuild that drive from parity, but that will probably be slower.

      Or follow the instructions at the link gderf posted to decrease your array by 1 disk. You will not have parity protection until the resync is complete.
      And if you don't have a new drive? I need to RMA my drive. I only have these two drives to copy the data to, as the fourth drive is the parity drive.

      I need access to my data while I RMA my drive, otherwise running any kind of silly crap like this is pointless. It's supposed to protect my data, no? Not make it inaccessible.


      gderf wrote:

      TheLostSwede wrote:

      gderf wrote:

      TheLostSwede wrote:

      There are also zero instructions on how to remove a drive from SnapRAID, only how to replace one.
      Not correct. See: snapraid.it/faq#remdatadisk
      The instructions on how to change the config file are also missing, which is kind of a critical part. Just deleting the drive details doesn't work...
      Instructions not missing. Here are the steps from the FAQ.

      • Change in the configuration file the related "disk" option to point to an empty directory
      • Remove from the configuration file any "content" option pointing to such disk

      And how do I do that from the OMV GUI? The config file is all greyed out...
      OMV 4.x, Gigabyte Z270N-WiFi, i7-6700K@3GHz, 16GB DDR4-3000, 4x 4TB Toshiba N300, 1x 60GB Corsair GT SSD (OS drive), 10Gbps Aquantia Ethernet
    • TheLostSwede wrote:

      And if you don't have a new drive? I need to RMA my drive. I only have these two drives to copy the data to, as the fourth drive is the parity drive.
      I need access to my data while I RMA my drive, otherwise running any kind of silly crap like this is pointless. It's supposed to protect my data, no? Not make it inaccessible.


      And how do I do that from the OMV GUI? The config file is all greyed out...
      You don't have an adequate understanding of SnapRAID. Please read the FAQ and manual.

      Not all that can be or needs to be accomplished in SnapRAID can be handled in the GUI. You are going to have to use the command line and editor.
      --
      Google is your friend and Bob's your uncle!

      OMV 4.x - ASRock Rack C2550D4I - 16GB ECC - Silverstone DS380
    • TheLostSwede,
      If availability is an issue, perhaps regular RAID would better suit you, although in either case, if you're not prepared with replacement drives, you might be in over your head. Businesses and networks that employ RAID systems don't wait to RMA a drive. In fact, they won't bother with a refurbished drive, period. When a drive fails, a new one goes in and they move on.

      As was stated earlier, you can change your array from 3 drives to 2 and re-sync, but you'll need to read the manual and will have to use the command line.

      My recommendation would be to get a new drive ASAP and get your array repaired. Then RMA the bad drive and when its replacement comes, keep it for a spare for use in times like these.
    • gderf wrote:

      TheLostSwede wrote:

      And if you don't have a new drive? I need to RMA my drive. I only have these two drives to copy the data to, as the fourth drive is the parity drive.
      I need access to my data while I RMA my drive, otherwise running any kind of silly crap like this is pointless. It's supposed to protect my data, no? Not make it inaccessible.


      And how do I do that from the OMV GUI? The config file is all greyed out...
      You don't have an adequate understanding of SnapRAID. Please read the FAQ and manual.
      Not all that can be or needs to be accomplished in SnapRAID can be handled in the GUI. You are going to have to use the command line and editor.
      Which is why I'm asking for help here. The manual is pure nonsense to me at this point, so reading it isn't helping, as I can't find the answers I need.


      jollyrogr wrote:

      TheLostSwede,
      If availability is an issue, perhaps regular RAID would better suit you, although in either case, if you're not prepared with replacement drives, you might be in over your head. Businesses and networks that employ RAID systems don't wait to RMA a drive. In fact, they won't bother with a refurbished drive, period. When a drive fails, a new one goes in and they move on.

      As was stated earlier, you can change your array from 3 drives to 2 and re-sync, but you'll need to read the manual and will have to use the command line.

      My recommendation would be to get a new drive ASAP and get your array repaired. Then RMA the bad drive and when its replacement comes, keep it for a spare for use in times like these.
      No, regular RAID is too much of a risk if a drive fails, as I only have a four-bay NAS. Not everyone has the cash to have spare drives sitting on a shelf, unfortunately, and this is why I picked something else that was supposed to give close to the same redundancy, but clearly it doesn't work as promoted.

      Well, how? This is the problem: I can't figure out how, and the so-called manual doesn't provide instructions. I can get around in Linux, but I'm far from an expert at it, and these kinds of things make me just want to throw it all out. It really gets to me that Linux is so convoluted when it comes to things that should be fairly logical to do, and it's not the first time I've run into issues that apparently no one else is having or that simply aren't documented at all.

      Again, I don't have the cash to get another drive. I've already RMA'd the bad drive, but it's going to take a week or two to get the replacement back.

      So again, to my question: how do I shrink the SnapRAID array to three disks so I can access my data?
      OMV 4.x, Gigabyte Z270N-WiFi, i7-6700K@3GHz, 16GB DDR4-3000, 4x 4TB Toshiba N300, 1x 60GB Corsair GT SSD (OS drive), 10Gbps Aquantia Ethernet
    • What do you mean, so you can access your data? You said you copied your data from the failing drive to one of the others. You should be able to see that data. If not, something else has happened...

      The instructions are actually pretty straightforward. Have you read snapraid.it/manual in addition to the FAQ?


      How can I remove a data disk from an existing array?

      To remove a data disk from the array do:
      • Change in the configuration file the related "disk" option to point to an empty directory
      • Remove from the configuration file any "content" option pointing to such disk
      • Run a "sync" command with the "-E, --force-empty" option:
      snapraid sync -E
      The "-E" option tells SnapRAID to proceed even when it detects an empty disk.
      • When the "sync" command terminates, remove the "disk" option from the configuration file.
      Your array is now without any reference to the removed disk.
      So edit /etc/snapraid.conf to remove the failed drive. The configuration file should look something like this:

      parity /mnt/diskp/snapraid.parity
      content /var/snapraid/snapraid.content
      content /mnt/disk1/snapraid.content
      content /mnt/disk2/snapraid.content
      data d1 /mnt/disk1/
      data d2 /mnt/disk2/
      data d3 /mnt/disk3/

      Note it says to point it to an empty directory, so you can't just remove the reference entirely.
      Then run snapraid sync -E
      Then you can remove the disk reference from snapraid.conf
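      Since the config file has to be edited by hand, here is a sketch of the first two FAQ steps done with sed on a throwaway copy of the file. The /tmp paths and the disk name d3 are assumptions made up for illustration, not this poster's real layout; on OMV the real file is /etc/snapraid.conf, and the sync has to run against the real array, so it appears only as a comment.

```shell
#!/bin/sh
# Sketch: retire data disk "d3" from a hypothetical snapraid.conf.
# All paths below are made up for the demo; adapt them to your system.
set -e

mkdir -p /tmp/snapraid-demo/empty

# A minimal config resembling the example above
cat > /tmp/snapraid-demo/snapraid.conf <<'EOF'
parity /mnt/diskp/snapraid.parity
content /var/snapraid/snapraid.content
content /mnt/disk1/snapraid.content
content /mnt/disk2/snapraid.content
content /mnt/disk3/snapraid.content
data d1 /mnt/disk1/
data d2 /mnt/disk2/
data d3 /mnt/disk3/
EOF

# FAQ step 1: point the removed disk's "data" line at an empty directory
sed -i 's|^data d3 .*|data d3 /tmp/snapraid-demo/empty/|' /tmp/snapraid-demo/snapraid.conf

# FAQ step 2: drop any "content" line stored on the removed disk
sed -i '\|^content /mnt/disk3/|d' /tmp/snapraid-demo/snapraid.conf

cat /tmp/snapraid-demo/snapraid.conf
# Next you would run:   snapraid sync -E
# and, once it finishes, delete the "data d3" line from the file entirely.
```

      You can of course make the same two changes in any text editor instead of sed; the point is only which lines change.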

    • There is nothing in SnapRAID that can prevent access to your data. If you can't access the data on your remaining data drives, then something else went wrong. You should also remove the failed drive from your UnionFS pool. Don't run ANY SnapRaid commands until you are positioned to recover the lost drive or are able to reconfigure it properly per the manual and FAQ.
      --
      Google is your friend and Bob's your uncle!

      OMV 4.x - ASRock Rack C2550D4I - 16GB ECC - Silverstone DS380
    • gderf wrote:

      There is nothing in SnapRAID that can prevent access to your data. If you can't access the data on your remaining data drives, then something else went wrong. You should also remove the failed drive from your UnionFS pool. Don't run ANY SnapRaid commands until you are positioned to recover the lost drive or are able to reconfigure it properly per the manual and FAQ.
      Again, the manual is as helpful as no manual at all.
      The drive has already been removed from SnapRAID, UnionFS, and the disk management interface on the NAS, and it has been physically removed, packaged up, and sent off for RMA. It's the only way I'm going to get a replacement drive.
      I just don't know how to remove it from the config file, which is what I need help with at this moment.
      OMV 4.x, Gigabyte Z270N-WiFi, i7-6700K@3GHz, 16GB DDR4-3000, 4x 4TB Toshiba N300, 1x 60GB Corsair GT SSD (OS drive), 10Gbps Aquantia Ethernet