How to upgrade my main data drive with minimal downtime

    • OMV 4.x

      Apologies if this is a question that's been addressed before. It's likely a common one, but I didn't find anything that quite covered the case I'm dealing with here.

      My OMV setup is pretty simple:

      1x 240GB SSD with two partitions (one for the OS, one for virtual machines and Docker config data)
      1x 4TB HDD for data (it does have Docker config data for one container: NextCloud)
      1x 4TB HDD for periodic backups of the data drive

      The data drive is running low on space, so I'm upgrading to a bigger one. An ideal approach would be one where I can just drop the new drive in and have OMV see it as if it's just the same old disk (but bigger), without having to recreate all the shares, etc. Would something as simple as the following approach do that for me?

      1) Boot into Clonezilla and image the old data drive onto the new one
      2) Shutdown
      3) Remove the old (4TB) data drive
      4) Boot into OMV

      I imagine one key to all this is confirming that cloning this way would also give the new drive the old UUID. Is that the case? If not, I'm guessing I'll need to fix that before the first boot into OMV with the new drive, in order to keep the OS from ringing alarm bells that the drive it expects to see is no longer there. But I'm not sure how to do that.
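
      In case it's relevant to the answer, here's roughly what I had in mind for checking (and, if needed, fixing) the UUID after the clone. This is just a sketch based on my reading, assuming the data partition is ext4; the device names are examples only:

      Source Code

      # Compare the filesystem UUIDs of the old and new data partitions
      # (/dev/sdb1 = old drive, /dev/sdd1 = new drive; example names only)
      blkid /dev/sdb1 /dev/sdd1

      # If the clone didn't carry the UUID over, copy it onto the new
      # partition (works for ext2/3/4; run against an unmounted filesystem)
      tune2fs -U "$(blkid -s UUID -o value /dev/sdb1)" /dev/sdd1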

      Any confirmations, advice, or feedback on the best way to handle this would be much appreciated! Thanks in advance!
    • Thanks! This is all helpful feedback.

      Rebuilding the shares would be a minor, very manageable hassle. The biggest issue is the NextcloudPi container that's on the drive. I've had to recreate it at least 7 or 8 times, and I'm tired of dealing with it. (It's a long story, with lots of random issues getting it up and running.) I'm worried that if I boot up without all the shares in place, it could break that container and have me redoing it yet again.
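
      Before doing the swap, my plan is to snapshot everything the container depends on, just in case. Something along these lines; the container name and paths are examples, not necessarily my exact setup:

      Source Code

      # Stop the container so its config data isn't changing mid-backup
      docker stop nextcloudpi

      # Tar up the container's bind-mounted config directory on the data
      # drive (example path) so it can be restored onto the new disk
      tar czf /root/nextcloudpi-config-backup.tar.gz /sharedfolders/nasRoot/nextcloudpi

      # Also save the container's filesystem as a local image, as a fallback
      docker commit nextcloudpi nextcloudpi-backup:pre-upgrade
      docker start nextcloudpi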
    • Doing a df -h, it looks like they're referenced by label:


      Source Code

      /dev/sdc1 3.6T 3.2T 477G 87% /sharedfolders/mirrorRoot
      /dev/sdb1 3.6T 3.2T 480G 87% /export/nasRoot
      I don't have any network shares (SAMBA, NFS, etc.) on the backup drive (sdc1), but I do have NFS (and SAMBA) shares on sdb1, which I assume explains why the two mount paths look different.

      This is teaching me a lot. I'd just assumed that since the UUID is referenced in OMV's config.xml file, that must be how the drives are managed.
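
      For anyone else trying to work out how their drives are referenced, this seems like a quick way to see device names, labels, UUIDs, and mount points side by side:

      Source Code

      # Show each block device's label, UUID, and mount point in one table
      lsblk -o NAME,LABEL,UUID,MOUNTPOINT

      # The by-label and by-uuid symlinks show the same mapping
      ls -l /dev/disk/by-label/ /dev/disk/by-uuid/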
    • For anybody who may come across this thread in the future...

      I ended up taking the safe route and cloned the drive with Clonezilla. It took about 23 hours, but after it finished, I just shut down the server, disconnected the old hard drive, and booted into OMV. So far it looks to have worked as a drop-in replacement.
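
      One detail worth noting for anyone following the same route: the clone copies the old 4TB partition table onto the bigger disk, so the extra space won't appear until the partition and filesystem are grown. Roughly like this, assuming a single ext4 data partition; the device name is an example, so check yours with lsblk first:

      Source Code

      # Unmount the cloned data filesystem (example device name)
      umount /dev/sdb1

      # Grow partition 1 to fill the new disk
      parted /dev/sdb resizepart 1 100%

      # Check the filesystem, then grow it into the enlarged partition
      e2fsck -f /dev/sdb1
      resize2fs /dev/sdb1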
    • Thanks, @tkaiser!

      Hmm... I'm not sure if this is a red flag or something to be concerned about, but the only mount with a /srv path is the data partition of the OS disk. For full context, here's the entire df -h output:

      Source Code

      Filesystem Size Used Avail Use% Mounted on
      udev 7.8G 0 7.8G 0% /dev
      tmpfs 1.6G 19M 1.6G 2% /run
      /dev/sda1 31G 7.6G 22G 26% /
      tmpfs 7.8G 0 7.8G 0% /dev/shm
      tmpfs 5.0M 0 5.0M 0% /run/lock
      tmpfs 7.8G 0 7.8G 0% /sys/fs/cgroup
      tmpfs 7.8G 8.0K 7.8G 1% /tmp
      /dev/sda3 172G 39G 125G 24% /srv/dev-disk-by-label-data
      /dev/sdb1 7.2T 3.2T 4.1T 44% /export/nasRoot
      /dev/sdc1 3.6T 3.2T 477G 87% /sharedfolders/mirrorRoot
      folder2ram 7.8G 46M 7.8G 1% /var/log
      folder2ram 7.8G 0 7.8G 0% /var/tmp
      folder2ram 7.8G 1.1M 7.8G 1% /var/lib/openmediavault/rrd
      folder2ram 7.8G 16K 7.8G 1% /var/spool
      folder2ram 7.8G 16M 7.8G 1% /var/lib/rrdcached
      folder2ram 7.8G 12K 7.8G 1% /var/lib/monit
      folder2ram 7.8G 0 7.8G 0% /var/lib/php
      folder2ram 7.8G 0 7.8G 0% /var/lib/netatalk/CNID
      folder2ram 7.8G 656K 7.8G 1% /var/cache/samba
      overlay 31G 7.6G 22G 26% /var/lib/docker/overlay2/ab3e742d50138b63db7f0ec37b2d0bef94b8ee639f1766420450f6a158880ec9/merged
      overlay 31G 7.6G 22G 26% /var/lib/docker/overlay2/2fff907a0bb4b3be116f105618510196faa2ed9ebfd540d9536284395b387a43/merged
      overlay 31G 7.6G 22G 26% /var/lib/docker/overlay2/32dc0cc646fabae86999044069a9cf16e23ba8ea186895fa5243adec5570feba/merged
      overlay 31G 7.6G 22G 26% /var/lib/docker/overlay2/07354eb40ac54996c5d1ecf6d8b1c3b2f636e85adcd3f6a8becff87cb06b0e2b/merged
      overlay 31G 7.6G 22G 26% /var/lib/docker/overlay2/2ac89ebe6262dd1454291592fc0c42513e39884ec4bed0996f144c3a76a5d3a5/merged
      overlay 31G 7.6G 22G 26% /var/lib/docker/overlay2/7e3e45fdd737174e9aa982fb9bcb055be6ddb6b6b520e29e314395d7ddc35d80/merged
      overlay 31G 7.6G 22G 26% /var/lib/docker/overlay2/f3f91cc005f0778ceaaa46515271baec86a450b260136b08db2c777f67740031/merged
      overlay 31G 7.6G 22G 26% /var/lib/docker/overlay2/22d91a97cf40d0313f596285fa9bf9738b827f1ea491afacc6ff108a34fbbce4/merged
      overlay 31G 7.6G 22G 26% /var/lib/docker/overlay2/5865b4b15fe014f997fb645cef7f38b7cbd606104d5075ce8bebf5c905eb45a0/merged
      shm 64M 4.0K 64M 1% /var/lib/docker/containers/08190e886398939afd73188bc6034d2f3616d5546f6257d9470ddae619721e3b/mounts/shm
      shm 64M 0 64M 0% /var/lib/docker/containers/824ad8fe6e5eb6805d83e25add8e4c2d4c99676d893cb9b61c2ccd6e888a4677/mounts/shm
      shm 64M 0 64M 0% /var/lib/docker/containers/ce917af49ef7ba3c56ea1e2a289880d778a841a261f202aa2de2f38baf14b9bc/mounts/shm
      shm 64M 0 64M 0% /var/lib/docker/containers/ad189e531b89a020c2e00f452bf20e171095e789e22bba18b737d4f20f50e74e/mounts/shm
      shm 64M 0 64M 0% /var/lib/docker/containers/c9f54bec3a9b1f964231e70eb96475891f39de9755f96184010d2bd09a9632d3/mounts/shm
      shm 64M 0 64M 0% /var/lib/docker/containers/1c0898f68f379adfcc984940482c04b038a31d6e2ce5db68c6d1d8b2fcadea53/mounts/shm
      shm 64M 0 64M 0% /var/lib/docker/containers/51e34cf79f46c2587cfaa7c270cf55081a575522e35e56d23098b41509d52884/mounts/shm
      shm 64M 4.0K 64M 1% /var/lib/docker/containers/0d60ebfee7f1b73722fb84a559fcf965a4a2460f15a25f91c77a5fd3dbb9f50e/mounts/shm
      shm 64M 0 64M 0% /var/lib/docker/containers/75e0db380e83d760fa9b1df98c539aa8452135c4885b1d5c202c5b76d2ac13c2/mounts/shm
      tmpfs 1.6G 0 1.6G 0% /run/user/0
      For what it's worth, the mount layout is exactly what it looked like before upgrading the hard drive.

      One additional note: there are mntent entries in the config.xml file that show /srv paths for both the nas and mirror disks:

      Source Code

      <fsname>/dev/disk/by-label/nas</fsname>
      <dir>/srv/dev-disk-by-label-nas</dir>

      Source Code

      <fsname>/dev/disk/by-label/mirror</fsname>
      <dir>/srv/dev-disk-by-label-mirror</dir>
      I'm not sure if that's helpful detail, though.
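
      In case anyone wants to check the same thing on their own box, the pairs above can be pulled straight out of OMV's config file (on OMV 4 it lives at /etc/openmediavault/config.xml):

      Source Code

      # List every filesystem source and its mount directory recorded in
      # OMV's configuration database
      grep -E '<(fsname|dir)>' /etc/openmediavault/config.xml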