Exchanging the Parity Drive in SnapRAID

  • I am in the process of swapping my SnapRAID parity drive for a larger one, but I ran into some trouble. Here is what I did:


    - Copied the content of the old drive to the new drive

    - Added the new drive to the Snapraid Array

    - Removed the old drive from the Array


    I then started a sync from the command line and got this:


    Code
    WARNING! The Parity parity has data only 0 blocks instead of 7269213.
    DANGER! One or more the parity files are smaller than expected!
    It's possible that the parity disks are not mounted.
    If instead you are adding a new parity level, you can 'sync' using
    'snapraid --force-full sync' to force a full rebuild of the parity.


    So it seems SnapRAID is not aware of the changeover.

    I checked the content of the disk and the files are there, including the parity file.
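
    (As a quick sanity check in this situation, the parity path in the config can be compared against what is actually mounted. A minimal sketch; the mount path below is only a placeholder for whatever the config really contains:)

    Code
    # show the parity entry the config currently uses
    grep '^parity' /etc/snapraid.conf
    # check that the directory in that entry is really a mounted filesystem
    findmnt /srv/dev-disk-by-label-ParityDrive4TB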

  • This is the guide I followed.

    When I do a "snapraid fix -d parity", it tries to generate the parity on the old, unmounted parity drive.

    When I try it on the new one, it fails because it does not find the drive:


    Code
    root@debnas:/srv/dev-disk-by-label-ParityWD# snapraid fix -d ParityDrive4TB
    Self test...
    Option -d, --filter-disk ParityDrive4TB doesn't match any data or parity disk.


    ParityWD was the old drive and ParityDrive4TB is the new one.



    What is the "name" of the disk? Does it not correspond to the "name" in the array?
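
    (For reference, the name that -d / --filter-disk expects is the logical name defined in snapraid.conf, not the filesystem label. A minimal sketch with placeholder paths and names:)

    Code
    # snapraid.conf defines the names snapraid knows about, e.g.:
    #   parity /srv/dev-disk-by-label-ParityDrive4TB/snapraid.parity
    #   data d1 /srv/dev-disk-by-label-Data1
    snapraid fix -d d1              # matches the data disk named "d1"
    snapraid fix -d parity          # matches the (first) parity
    snapraid fix -d ParityDrive4TB  # fails: not a name defined in the config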

  • The way to avoid problems like this is to clone the old parity drive bit for bit onto the new drive, shut down the machine, remove the old parity drive, and start the machine.
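
    A minimal sketch of such a block-level clone (the device names are placeholders; verify them with lsblk first, as dd will happily overwrite the wrong disk):

    Code
    # assuming /dev/sdX is the old parity drive and /dev/sdY the new, larger one
    dd if=/dev/sdX of=/dev/sdY bs=64M status=progress

    (Clonezilla does the same job with a friendlier interface.)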

    --
    Google is your friend and Bob's your uncle!


    OMV AMD64 6.x on headless Chenbro NR12000 1U 1x 8m Quad Core E3-1220 3.1GHz 32GB ECC RAM.

  • I used rsync -av to clone the content.


    Alternatively, I could just wipe the disk and recreate the parity, I guess?

    But what is strange to me is that SnapRAID still tries to access the old parity drive even though it is not part of the array anymore. So to me it looks like the issue is not with the files or the filesystem itself, but with the current SnapRAID state/configuration.
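
    (An rsync -av copy only transfers the files; the new filesystem keeps its own UUID and label. A quick way to compare them, with placeholder device names:)

    Code
    # show UUID and LABEL of both parity filesystems
    blkid /dev/sdX1 /dev/sdY1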

  • When you do what you did, the new disk has a filesystem UUID and disk label that are different from those of the old disk. This information appears in several places other than just the snapraid.conf file.


    I have been growing my array the way I described for years, replacing almost a dozen disks along the way, one by one, and never had a problem.
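
    A rough way to hunt for leftover references to the old drive (the label and UUID below are placeholders for the actual values) is to grep the usual suspects:

    Code
    # look for the old label and filesystem UUID in the places OMV and snapraid use
    grep 'ParityWD' /etc/snapraid.conf /etc/fstab /etc/openmediavault/config.xml
    grep '<old-filesystem-uuid>' /etc/fstab /etc/openmediavault/config.xml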

    --
    Google is your friend and Bob's your uncle!


    OMV AMD64 6.x on headless Chenbro NR12000 1U 1x 8m Quad Core E3-1220 3.1GHz 32GB ECC RAM.

  • Having a look at the config file generated by OMV, the old parity drive does not appear in it anymore. So I really wonder why SnapRAID is still trying to access it (I have rebooted, by the way). There has to be a reference elsewhere.


    Here is the config file:



    The new parity drive has no label. Is the label maybe what is referred to as the SnapRAID "name"?

  • When you do what you did, the new disk has a filesystem UUID and disk label that are different from those of the old disk. This information appears in several places other than just the snapraid.conf file.


    I have been growing my array the way I described for years, replacing almost a dozen disks along the way, one by one, and never had a problem.

    Yes, OK, that explains it. I thought it would be fine to just introduce a new disk under a new label. None of the documentation I saw mentioned that the old disk has to be cloned in a way that carries over the UUID as well.

  • Now what's funny is that when I check the file /etc/snapraid.conf, its content is different from what I can see in the OMV GUI.

    Either I am looking at the wrong file, or OMV did not update the SnapRAID config:



    I also tried rebooting, by the way. The config file stays untouched.

  • So I changed the snapraid.conf file to reflect the new drive, and it had an immediate effect: it is now picking the correct drive.
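
    In essence the edit comes down to the parity line pointing at the new mount. A sketch with placeholder paths (the actual OMV mount points will differ):

    Code
    # before
    parity /srv/dev-disk-by-label-ParityWD/snapraid.parity
    # after
    parity /srv/dev-disk-by-label-ParityDrive4TB/snapraid.parity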


    So the question for now is: why did OMV not update the snapraid.conf file to match what it shows in the GUI? I did not see any hint in the GUI that this had failed for whatever reason.

  • I dunno what the problem is; the way I have been growing things here is foolproof and completely problem-free.

    --
    Google is your friend and Bob's your uncle!


    OMV AMD64 6.x on headless Chenbro NR12000 1U 1x 8m Quad Core E3-1220 3.1GHz 32GB ECC RAM.

  • Any suggestion on how I could get to the root cause?

    It's not worth it, as you'd be re-implementing what the GUI already does. OMV isn't stateless, so if you decide to change state without OMV observing it, you have to change everything that still matches the last state OMV observed. I suggest you create a little switcheroo script in $HOME and use that. It's like gderf said: cloning the drive is the best way here.


    Any suggestion on how I could get to the root cause?

    Two things...

    How did you change the web interface to point at the new drive?

    What is the output of: sudo omv-salt deploy run snapraid

    omv 6.9.0-1 Shaitan | 64 bit | 6.2 proxmox kernel

    plugins :: omvextrasorg 6.3.1 | kvm 6.2.16 | compose 6.10.3 | cputemp 6.1.3 | mergerfs 6.3.7


    omv-extras.org plugins source code and issue tracker - github


    Please try ctrl-shift-R and read this before posting a question.

    Please put your OMV system details in your signature.
    Please don't PM for support... Too many PMs!

  • I am now cloning the old disk to the new one via Clonezilla, as suggested by gderf.

    I will then have a look tomorrow to see whether that worked out.


    After the new parity drive is in place, I will investigate why OMV is not updating the snapraid.conf, following @ryecoaaron's suggestion.


    Thanks guys!

  • Two things...

    How did you change the web interface to point at the new drive?

    What is the output of: sudo omv-salt deploy run snapraid

    1. Did "rsync -av" from the old to the new parity disk

    2. Removed the parity drive from the SnapRAID array in the OMV GUI by editing the array

    3. Added the new drive to the array by editing the array in the OMV GUI


    This seemed to generate a valid snapraid.conf in the GUI, but the config shown in the GUI was never brought into live SnapRAID operation by OMV.


    This seemed to generate a valid snapraid.conf in the GUI, but the config shown in the GUI was never brought into live SnapRAID operation by OMV.

    So, you never had to click the apply banner? Did you run the omv-salt deploy run snapraid command?
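
    (In other words: after saving the change in the GUI, the pending changes have to be applied, or the configuration can be deployed manually and then compared against the file SnapRAID actually reads. A minimal sketch using only the command already given above:)

    Code
    # deploy the snapraid configuration that OMV has stored
    sudo omv-salt deploy run snapraid
    # then check the file snapraid reads
    cat /etc/snapraid.conf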

    omv 6.9.0-1 Shaitan | 64 bit | 6.2 proxmox kernel

    plugins :: omvextrasorg 6.3.1 | kvm 6.2.16 | compose 6.10.3 | cputemp 6.1.3 | mergerfs 6.3.7


    omv-extras.org plugins source code and issue tracker - github


    Please try ctrl-shift-R and read this before posting a question.

    Please put your OMV system details in your signature.
    Please don't PM for support... Too many PMs!


    Did you run the omv-salt command?

    omv 6.9.0-1 Shaitan | 64 bit | 6.2 proxmox kernel

    plugins :: omvextrasorg 6.3.1 | kvm 6.2.16 | compose 6.10.3 | cputemp 6.1.3 | mergerfs 6.3.7


    omv-extras.org plugins source code and issue tracker - github


    Please try ctrl-shift-R and read this before posting a question.

    Please put your OMV system details in your signature.
    Please don't PM for support... Too many PMs!
