SNAPRAID+AUFS Replacing disks

    • OMV 2.x
    • Resolved


    • SNAPRAID+AUFS Replacing disks

      I currently have two data disks that are so old they need replacement, and the parity drive is almost full. So I bought two disks: one 4TB for parity and one 2TB for data.
      What I want is to replace the parity drive with the new 4TB, then replace the first data drive with the new 2TB, and then recycle the "old" 2TB parity drive for the second old data drive.
      All of this under an AUFS pool.

      Any suggestions on the correct way of doing that?

      I'm currently replacing the parity drive. I attached the 4TB drive, added it to the array as a new parity+content, and then launched a sync. As it complained about a totally empty content file, I executed the suggested command to force the sync from the CLI. Then I suppose I'll remove the old parity disk from the SnapRAID GUI and launch a sync again.
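
      For reference, the forced sync from the CLI was presumably something along these lines (a sketch only; the exact flag SnapRAID suggests in its warning may differ by version):

      Source Code

      # SnapRAID refuses to sync when a content file looks brand new or
      # empty, to protect against a wrongly mounted disk; -e / --force-empty
      # overrides that check. Example only, run as root.
      snapraid sync --force-empty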
    • The above procedure ended with a second parity drive whose parity file is named something like parity-2.snapraid.
      What I noticed is that on the old parity drive there's a second parity file with a different name.

      Is there no one in the forum who knows how to replace SnapRAID disks on OpenMediaVault?


    • Sergio wrote:

      Is there no one in the forum who knows how to replace SnapRAID disks on OpenMediaVault?

      You didn't give me much time to answer...

      If you are going to replace a parity drive, disable the old drive (uncheck parity and content), enable the new one (check parity and content), and run Sync. If you leave the old one enabled, of course SnapRAID is going to think you want two-drive parity. Do the same thing with the data drive and then Fix, Check, Sync, in that order. This is in the SnapRAID manual under the Recovering section.
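
      In config terms, the end state should look roughly like this (a sketch of /etc/snapraid.conf with made-up UUIDs and paths, not the actual file the plugin writes):

      Source Code

      # Hypothetical /etc/snapraid.conf after the swap: only the NEW 4TB
      # drive carries parity + content; the old parity drive is gone.
      parity /media/aaaaaaaa-0000-1111-2222-333333333333/snapraid.parity
      content /media/aaaaaaaa-0000-1111-2222-333333333333/snapraid.content
      content /var/snapraid.content
      data d1 /media/bbbbbbbb-4444-5555-6666-777777777777/
      data d2 /media/cccccccc-8888-9999-aaaa-bbbbbbbbbbbb/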
      omv 4.1.6 arrakis | 64 bit | 4.16 backports kernel | omvextrasorg 4.1.7
      omv-extras.org plugins source code and issue tracker - github.com/OpenMediaVault-Plugin-Developers

      Please read this before posting a question.
      Please don't PM for support... Too many PMs!
    • Ahahah... time zones rule. Hello Aaron.

      I removed the 1st parity drive, deleted the parity-2.snapraid on the 2nd parity drive and now I'm syncing.

      About your instructions... I have one doubt. If I go to the old drive I want to replace and uncheck "data", what am I supposed to do with the new drive? Also, that drive contains the shared folder for AUFS.
      Wouldn't it be better to clone all the data to the new drive and then replace the UUID in the snapraid file? I suppose that for AUFS I should first remove the D1 share entry point and then share the one on the new drive. But I don't know what to do with the "storage" share.
      Can I place this on another drive?
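
      (For context, an AUFS pool is just a union mount over branch directories, so swapping a pooled drive amounts to swapping a branch path. A minimal sketch with hypothetical UUID paths, not the plugin's exact mount line:)

      Source Code

      # AUFS unions the branches listed in br=; /media/storage in this
      # setup is such a union. Placeholder paths for illustration only.
      mount -t aufs -o br=/media/<uuid-disk1>=rw:/media/<uuid-disk2>=rw none /media/storage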

      As for the snapraid manual... doesn't the plugin allow disk replacement? Am I forced to use CLI commands?
    • Personally, if the old drive hasn't failed, I would (this would work with any snapraid and/or aufs drive):
      - turn system off
      - install the new drive
      - boot clonezilla
      - clone old drive to the new drive
      - turn system off
      - remove old drive
      - boot gparted-live
      - expand the filesystem on the new drive to use all of the space
      - reboot into OMV. no changes to snapraid needed because OMV uses uuids (it thinks the same drive is there)

      You don't have to do any cli commands. There is a button for each of the commands I listed. (A quick post-clone check is sketched below.)
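
      Before rebooting, something like this confirms the clone kept the filesystem UUID that snapraid and fstab reference (device name and UUID are placeholders):

      Source Code

      # The cloned partition should report the SAME filesystem UUID as the
      # old disk; that is exactly why snapraid/OMV need no changes after.
      blkid /dev/sdX1
      # The UUID printed above should appear in both files:
      grep "<uuid-from-blkid>" /etc/snapraid.conf /etc/fstab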
    • ryecoaaron wrote:

      Personally, if the old drive hasn't failed, I would (this would work with any snapraid and/or aufs drive):
      - turn system off
      - install the new drive
      - boot clonezilla
      - clone old drive to the new drive
      - turn system off
      - remove old drive
      - boot gparted-live
      - expand the filesystem on the new drive to use all of the space
      - reboot into OMV. no changes to snapraid needed because OMV uses uuids (it thinks the same drive is there)


      Man, you're the guy. I'm currently using this procedure to recover an old W2003 server RAID that crashed, and I never thought of applying it at home. I use partedmagic, so all the tools are already in the same package. The only problem would be mixing up the target drive.
    • So for recovering a lost data drive, the procedure should be replacing the disk and then directly doing a Fix. The only requirement should be naming the device the same as the old one. And it should even recover the AUFS stuff, right?
      What I don't understand is how one could remove the old drive (by only unchecking it) and then create the same name in the plugin GUI.
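
      (For reference, the manual's per-disk recovery is roughly the following; "d1" is an example disk name that must match the entry in snapraid.conf, with the replacement disk already formatted and mounted at that entry's path:)

      Source Code

      # Hedged sketch of recovering a whole data disk, per the SnapRAID
      # manual; -d filters the operation to the named disk.
      snapraid fix -d d1       # rebuild the lost files onto the new disk
      snapraid check -d d1     # verify the rebuilt files
      snapraid sync            # finally refresh parity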
    • aufs "stuff" is just files. So, it should recover them but you really don't need any aufs files.

      I described that wrong. Use the delete button in the snapraid gui. Then add the new drive using the same name. You can always try different methods in a VM instead of relying on someone who doesn't use the plugin :)
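
      In other words, the data line keeps its name and only the path behind it changes (hypothetical UUIDs):

      Source Code

      # Before: data d1 /media/<uuid-of-old-disk>/
      # After:  the NAME "d1" is identical, only the mount path differs.
      data d1 /media/<uuid-of-new-disk>/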
    • :evil: Ahah... I wouldn't ever rely on you!

      I'll go the clone way, always keeping the old disk; but in case of a disk failure I wouldn't have that option.
      There should be complete procedure instructions for these things here in the OMV forum.

      And I guess disk replacement could be automated in the snapraid plugin. Mmmm...
    • Sergio wrote:

      There should be complete procedure instructions for these things here in the OMV forum.

      Feel free to write them. I don't use the plugin and don't have the time.

      Sergio wrote:

      And I guess disk replacement could be automated in the snapraid plugin. Mmmm...

      If someone writes the code, I will put it in the plugin.
    • ryecoaaron wrote:

      - expand the filesystem on the new drive to use all of the space
      - reboot into OMV. no changes to snapraid needed because OMV uses uuids (it thinks the same drive is there)


      Aaron, I've done it like this, but OMV still doesn't detect the new FS size. In Disks it correctly shows the disk as 1.82 TB, but in Filesystems it still shows the old size, 931.06 GB.
      Is there something that can be done before deleting the filesystem and going the Fix route?
    • The new disk is sdh:

      Source Code

      df -h
      S.ficheros                                              Tamaño Usados  Disp Uso% Montado en
      rootfs                                                    104G   9,5G   90G  10% /
      udev                                                       10M      0   10M   0% /dev
      tmpfs                                                     3,2G   572K  3,2G   1% /run
      /dev/disk/by-uuid/4c3d7991-c1c7-4dc8-9dd4-9f86fa92c72b    104G   9,5G   90G  10% /
      tmpfs                                                     5,0M      0  5,0M   0% /run/lock
      tmpfs                                                     6,3G      0  6,3G   0% /run/shm
      Z2                                                        1,2T   724G  423G  64% /Z2
      tmpfs                                                      16G   2,8M   16G   1% /tmp
      /dev/sdh1                                                 932G   688G  244G  74% /media/12f7f110-46e2-4b4f-a60f-f60bb8e3161e
      /dev/sdi1                                                 1,4T   460G  938G  33% /media/b39ec7aa-eb5c-4dbd-9a51-db38600dff9f
      /dev/sdg1                                                 1,9T   923G  941G  50% /media/38488698-c665-4b76-a7fb-65dd305a3ac4
      /dev/sdj1                                                 3,6T   925G  2,7T  26% /media/fc9186ce-fb4f-4e1f-8458-0f7490ea5645
      none                                                      4,1T   2,1T  2,1T  50% /media/storage
      none                                                      4,1T   2,1T  2,1T  50% /media/38488698-c665-4b76-a7fb-65dd305a3ac4/poolshare
      Z2                                                        1,2T   724G  423G  64% /var/lib/docker/openmediavault
      /dev/disk/by-uuid/4c3d7991-c1c7-4dc8-9dd4-9f86fa92c72b    104G   9,5G   90G  10% /var/folder2ram/var/log
      /dev/disk/by-uuid/4c3d7991-c1c7-4dc8-9dd4-9f86fa92c72b    104G   9,5G   90G  10% /var/folder2ram/var/tmp
      /dev/disk/by-uuid/4c3d7991-c1c7-4dc8-9dd4-9f86fa92c72b    104G   9,5G   90G  10% /var/folder2ram/var/spool
      folder2ram                                                 16G   120M   16G   1% /var/log
      /dev/disk/by-uuid/4c3d7991-c1c7-4dc8-9dd4-9f86fa92c72b    104G   9,5G   90G  10% /var/folder2ram/var/lib/php5
      /dev/disk/by-uuid/4c3d7991-c1c7-4dc8-9dd4-9f86fa92c72b    104G   9,5G   90G  10% /var/folder2ram/var/lib/openmediavault/rrd
      /dev/disk/by-uuid/4c3d7991-c1c7-4dc8-9dd4-9f86fa92c72b    104G   9,5G   90G  10% /var/folder2ram/var/lib/rrdcached
      folder2ram                                                 16G   916K   16G   1% /var/lib/openmediavault/rrd
      folder2ram                                                 16G   4,0K   16G   1% /var/lib/php5
      folder2ram                                                 16G    62M   16G   1% /var/lib/rrdcached
      folder2ram                                                 16G   444K   16G   1% /var/spool
      folder2ram                                                 16G      0   16G   0% /var/tmp
      /dev/disk/by-uuid/4c3d7991-c1c7-4dc8-9dd4-9f86fa92c72b    104G   9,5G   90G  10% /var/folder2ram/var/lib/monit
      folder2ram                                                 16G   8,0K   16G   1% /var/lib/monit
      cgroup                                                     16G      0   16G   0% /sys/fs/cgroup
      Z2                                                        1,2T   724G  423G  64% /var/lib/docker/openmediavault/aufs
    • Doesn't look like the filesystem resized. What is the output of: xfs_growfs /media/12f7f110-46e2-4b4f-a60f-f60bb8e3161e
    • Source Code

      xfs_growfs /media/12f7f110-46e2-4b4f-a60f-f60bb8e3161e
      meta-data=/dev/sdh1              isize=256    agcount=4, agsize=61047597 blks
               =                       sectsz=512   attr=2
      data     =                       bsize=4096   blocks=244190385, imaxpct=25
               =                       sunit=0      swidth=0 blks
      naming   =version 2              bsize=4096   ascii-ci=0
      log      =internal               bsize=4096   blocks=119233, version=2
               =                       sectsz=512   sunit=0 blks, lazy-count=1
      realtime =none                   extsz=4096   blocks=0, rtextents=0

      And I can't understand anything about this. :D
    • Source Code

      df -h
      S.ficheros                                              Tamaño Usados  Disp Uso% Montado en
      rootfs                                                    104G   9,5G   90G  10% /
      udev                                                       10M      0   10M   0% /dev
      tmpfs                                                     3,2G   572K  3,2G   1% /run
      /dev/disk/by-uuid/4c3d7991-c1c7-4dc8-9dd4-9f86fa92c72b    104G   9,5G   90G  10% /
      tmpfs                                                     5,0M      0  5,0M   0% /run/lock
      tmpfs                                                     6,3G      0  6,3G   0% /run/shm
      Z2                                                        1,2T   724G  423G  64% /Z2
      tmpfs                                                      16G   2,8M   16G   1% /tmp
      /dev/sdh1                                                 932G   688G  244G  74% /media/12f7f110-46e2-4b4f-a60f-f60bb8e3161e
      /dev/sdi1                                                 1,4T   460G  938G  33% /media/b39ec7aa-eb5c-4dbd-9a51-db38600dff9f
      /dev/sdg1                                                 1,9T   923G  941G  50% /media/38488698-c665-4b76-a7fb-65dd305a3ac4
      /dev/sdj1                                                 3,6T   925G  2,7T  26% /media/fc9186ce-fb4f-4e1f-8458-0f7490ea5645
      none                                                      4,1T   2,1T  2,1T  50% /media/storage
      none                                                      4,1T   2,1T  2,1T  50% /media/38488698-c665-4b76-a7fb-65dd305a3ac4/poolshare
      Z2                                                        1,2T   724G  423G  64% /var/lib/docker/openmediavault
      /dev/disk/by-uuid/4c3d7991-c1c7-4dc8-9dd4-9f86fa92c72b    104G   9,5G   90G  10% /var/folder2ram/var/log
      /dev/disk/by-uuid/4c3d7991-c1c7-4dc8-9dd4-9f86fa92c72b    104G   9,5G   90G  10% /var/folder2ram/var/tmp
      /dev/disk/by-uuid/4c3d7991-c1c7-4dc8-9dd4-9f86fa92c72b    104G   9,5G   90G  10% /var/folder2ram/var/spool
      folder2ram                                                 16G   120M   16G   1% /var/log
      /dev/disk/by-uuid/4c3d7991-c1c7-4dc8-9dd4-9f86fa92c72b    104G   9,5G   90G  10% /var/folder2ram/var/lib/php5
      /dev/disk/by-uuid/4c3d7991-c1c7-4dc8-9dd4-9f86fa92c72b    104G   9,5G   90G  10% /var/folder2ram/var/lib/openmediavault/rrd
      /dev/disk/by-uuid/4c3d7991-c1c7-4dc8-9dd4-9f86fa92c72b    104G   9,5G   90G  10% /var/folder2ram/var/lib/rrdcached
      folder2ram                                                 16G   908K   16G   1% /var/lib/openmediavault/rrd
      folder2ram                                                 16G   4,0K   16G   1% /var/lib/php5
      folder2ram                                                 16G    61M   16G   1% /var/lib/rrdcached
      folder2ram                                                 16G   444K   16G   1% /var/spool
      folder2ram                                                 16G      0   16G   0% /var/tmp
      /dev/disk/by-uuid/4c3d7991-c1c7-4dc8-9dd4-9f86fa92c72b    104G   9,5G   90G  10% /var/folder2ram/var/lib/monit
      folder2ram                                                 16G   8,0K   16G   1% /var/lib/monit
      cgroup                                                     16G      0   16G   0% /sys/fs/cgroup
      Z2                                                        1,2T   724G  423G  64% /var/lib/docker/openmediavault/aufs
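
      Reading the xfs_growfs output above: data blocks=244190385 at bsize=4096 is about 931 GiB, so the filesystem already fills its partition; it is the partition (sdh1) that apparently never grew. A hedged way to verify and fix (assuming a parted version that has resizepart and that sdh1 is the last partition on the disk; otherwise boot gparted-live again and apply the resize there):

      Source Code

      # 244190385 blocks x 4096 bytes = ~931 GiB: the XFS filesystem is
      # already as big as its partition, so the partition never grew.
      lsblk /dev/sdh                       # compare disk size vs sdh1 size
      parted /dev/sdh resizepart 1 100%    # grow partition 1 to disk end
      xfs_growfs /media/12f7f110-46e2-4b4f-a60f-f60bb8e3161e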