SnapRAID + AUFS: Replacing disks

  • I currently have two data disks that are so old they need replacement, and the parity drive is almost full. So I bought two disks: one 4TB for parity and one 2TB for data.
    What I want is to replace the parity drive with the new 4TB, then replace the first data drive with the new 2TB, and finally recycle the "old" 2TB parity drive for the second old data drive.
    All of this under an AUFS pool.


    Any suggestions on the correct way of doing that?


    I'm currently replacing the parity drive. I attached the 4TB drive, added it to the array as a new parity+content drive, and then launched a sync. As it complained about a totally empty content file, I executed the suggested command to force the sync from the CLI. Then I suppose I'll remove the old parity disk from the SnapRAID GUI and launch a sync again.
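
    For anyone following along, the CLI equivalent of what the plugin generates is roughly this. A sketch only; the UUID path is hypothetical and I'm assuming the stock /etc/snapraid.conf location:

    Code
    # /etc/snapraid.conf -- point the parity and content entries at the new 4TB drive
    parity /srv/dev-disk-by-uuid-aaaa-bbbb/parity.snapraid
    content /srv/dev-disk-by-uuid-aaaa-bbbb/content.snapraid

    # rebuild parity; if snapraid objects to the brand-new empty content
    # file, it prints the exact --force option to re-run the sync with
    snapraid sync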

  • The above procedure ended with the second parity drive holding a parity file named something like parity-2.snapraid.
    What I noticed is that on the old parity drive there's a second parity file with a different name.


    Is there really no one in the forum who knows how to replace SnapRAID disks on OpenMediaVault?

    • Official post

    Is there really no one in the forum who knows how to replace SnapRAID disks on OpenMediaVault?


    You didn't give me much time to answer...


    If you are going to replace a parity drive, disable the old drive (uncheck parity and content), enable the new one (check parity and content), and run Sync. If you leave the old one enabled, snapraid will of course think you want two-drive parity. Do the same with the data drive and then run Fix, Check, and Sync, in that order. This is in the snapraid manual under the Recovering section.
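
    For reference, those buttons map onto plain snapraid commands. A sketch, assuming the replaced data disk is named d1 in the config (disk names vary per setup):

    Code
    # two enabled parity drives generate two parity lines in the config,
    # which is presumably where the parity-2.snapraid file came from:
    #   parity   /srv/.../parity.snapraid
    #   2-parity /srv/.../parity-2.snapraid

    # after swapping a data disk: rebuild it, verify it, then resync
    snapraid -d d1 fix
    snapraid -d d1 check
    snapraid sync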

    omv 7.0-32 sandworm | 64 bit | 6.5 proxmox kernel

    plugins :: omvextrasorg 7.0 | kvm 7.0.9 | compose 7.0.9 | cputemp 7.0 | mergerfs 7.0.3


    omv-extras.org plugins source code and issue tracker - github


    Please try ctrl-shift-R and read this before posting a question.

    Please put your OMV system details in your signature.
    Please don't PM for support... Too many PMs!

  • Ahahah... time zones rule. Hello Aaron.


    I removed the first parity drive, deleted parity-2.snapraid on the second parity drive, and now I'm syncing.


    About your instructions... I have one doubt. If I go to the old drive I want to replace and uncheck "data", what am I supposed to do with the new drive? Also, that drive contains the shared folder for AUFS.
    Wouldn't it be better to clone all the data to the new drive and then replace the UUID in the snapraid config file? I suppose that for AUFS I should first remove the D1 share entry point and then share the one on the new drive. But I don't know what to do with the "storage" share.
    Can I place this on another drive?
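
    The kind of edit I have in mind is something like this (a sketch only; the disk name d1 and the UUID paths are made up). Note that a file-level copy gives the new filesystem its own UUID, which is why the config would need touching at all; a block clone keeps the old UUID:

    Code
    # /etc/snapraid.conf, before (old 1TB data disk):
    data d1 /srv/dev-disk-by-uuid-1111-2222/

    # after copying the files over, keep the same disk name but point it
    # at the new drive's mount:
    data d1 /srv/dev-disk-by-uuid-3333-4444/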


    As for the snapraid manual... doesn't the plugin allow disk replacement? Am I forced to use CLI commands?

    • Official post

    Personally, if the old drive hasn't failed, I would do the following (this works with any snapraid and/or aufs drive; a rough shell equivalent is sketched after the list):
    - turn the system off
    - install the new drive
    - boot clonezilla
    - clone the old drive to the new drive
    - turn the system off
    - remove the old drive
    - boot gparted-live
    - expand the filesystem on the new drive to use all of the space
    - reboot into OMV. No changes to snapraid are needed, because OMV uses UUIDs (it thinks the same drive is there)
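
    If you'd rather stay in a shell than boot the live CDs, the rough equivalent is below. A sketch, not a tested recipe; sdX/sdY and the mount point are hypothetical, so check lsblk before running anything:

    Code
    # clone the old disk (sdX) onto the new, larger one (sdY); the clone
    # keeps the same filesystem UUID, so remove the old disk afterwards
    dd if=/dev/sdX of=/dev/sdY bs=64K status=progress

    # grow the partition to fill the new disk, then grow the filesystem
    parted /dev/sdY resizepart 1 100%
    xfs_growfs /srv/dev-disk-by-uuid-1111-2222    # the filesystem's mount point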


    You don't have to do any CLI commands. There is a button for each of the commands I listed.



  • Man, you're the guy. I'm currently using this procedure to recover an old W2003 server RAID that crashed, and I never thought of applying it at home. I use partedmagic, so all the tools are already in the same package. The only problem could be missing the target drive.

  • So for recovering a lost data drive, the procedure should be to replace the disk and then directly run Fix. The only requirement should be naming the device the same as the old one. And it should even recover the AUFS stuff, right?
    What I don't understand is how one could remove the old drive (by only unchecking it) and then create an entry with the same name in the plugin GUI.

    • Official post

    aufs "stuff" is just files. So, it should recover them but you really don't need any aufs files.


    I described that wrong. Use the delete button in the snapraid GUI, then add the new drive using the same name. You can always try different methods in a VM instead of relying on someone who doesn't use the plugin :)


  • Ahah... I would never rely on you!


    I'll go the clone way and always keep the old disk, but in case of a disk failure I wouldn't have that option.
    There should be complete procedure instructions for these things here on the OMV forum.


    And I guess disk replacement could be automated in the snapraid plugin. Mmmm...

    • Official post

    There should be complete procedure instructions for these things here on the OMV forum.


    Feel free to write them. I don't use the plugin and don't have the time.


    And I guess disk replacement could be automated in the snapraid plugin. Mmmm...


    If someone writes the code, I will put it in the plugin.


  • - expand the filesystem on the new drive to use all of the space
    - reboot into OMV. No changes to snapraid are needed, because OMV uses UUIDs (it thinks the same drive is there)


    Aaron, I've done it like this, but OMV still doesn't detect the new FS size. In Disks it correctly shows the disk as 1.82TB, but in Filesystems it's still showing the old disk's size, i.e. 931.06GB.
    Is there something that can be done before deleting the FS and going the Fix route?

    • Official post

    Did you expand the filesystem?


    • Official post

    What is the output of: df -h


  • The new disk is sdh.


    • Official post

    Doesn't look like the filesystem resized. What is the output of: xfs_growfs /media/12f7f110-46e2-4b4f-a60f-f60bb8e3161e


  • Code
    xfs_growfs /media/12f7f110-46e2-4b4f-a60f-f60bb8e3161e
    meta-data=/dev/sdh1              isize=256    agcount=4, agsize=61047597 blks
             =                       sectsz=512   attr=2
    data     =                       bsize=4096   blocks=244190385, imaxpct=25
             =                       sunit=0      swidth=0 blks
    naming   =version 2              bsize=4096   ascii-ci=0
    log      =internal               bsize=4096   blocks=119233, version=2
             =                       sectsz=512   sunit=0 blks, lazy-count=1
    realtime =none                   extsz=4096   blocks=0, rtextents=0


    And I can't understand any of this. :D

    • Official post

    What is the output of df -h now?
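
    Also, reading the xfs_growfs output above: the filesystem size is the block count times the block size, and that still works out to the old size, so the grow was a no-op. My guess is the partition itself was never enlarged. A sketch of the arithmetic and the likely fix; sdh1 is taken from the output above, but verify with lsblk first:

    Code
    # 244190385 blocks x 4096 bytes/block ~= 1.0 TB (~931 GiB) -> still the old size
    # xfs_growfs can only expand into space the partition already has,
    # so enlarge the partition first, then grow again:
    parted /dev/sdh resizepart 1 100%
    xfs_growfs /media/12f7f110-46e2-4b4f-a60f-f60bb8e3161e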

