Cloned a Data Drive, Now trying to expand it

  • I wanted to have more space in my setup, so I bought an 18 TB drive that replaced a 6 TB drive. NOTE: This is NOT my OS drive. I did this by using a cloning device. This went very well - everything came up without an issue when I swapped in the new drive. As you might expect, the system reports the drive as being 16+ TB, but if I look at any of the folder metrics, they still base the available space on the 6 TB drive.


    I've done a lot of searching on this, and I am having a hard time telling what is accurate. Most people who did this were trying to swap out the OS drive, and obviously that has different challenges. Others were using Proxmox. GParted seems like the way to go, but I've seen numerous posts that appear to conflict. Is it possible to do this? If someone could point me to the correct guide or give me some truths, I would really appreciate it! I am not a Linux expert, but I am comfortable going in there and poking around.
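
    Just to illustrate, something like this (sdX is only a placeholder for the cloned data disk) shows what I mean: the disk itself reports the new size, while the partition and filesystem on it are still sized for the old drive.

    Code
    # sdX is a placeholder for the cloned data disk; the disk shows ~16.4 TiB,
    # but the cloned partition and filesystem on it are still the old ~5.5 TiB size
    lsblk -o NAME,SIZE,FSTYPE,MOUNTPOINT /dev/sdX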

  • gderf, thank you again! I may have gone into the "google is your friend, until it isn't" territory.


    I was seeing this message:


    So I did some looking around, and it looked like I needed to resize the table. So I ran this command:

    Code
    sudo sgdisk -e /dev/sdX
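    # (for reference: -e moves the backup GPT data structures to the end of the disk)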


    And then I rebooted. When the server came back up, I found that at least some of the drives had different device IDs (the drive I am trying to grow changed from sde to sdb). I am going to guess that this is a bad thing? For example, Docker is no longer starting. I am getting this message:


    For some reason the Docker directory is now being pointed at "-Backup" instead of "-Docker" (I went into the directory to verify that it should be -Docker). If I try to correct it in the UI and save it, I get this message.



  • Having disks change device IDs is not unusual; this is why they should not be used to point to things.
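
    If you want to see the stable identifiers you can point things at instead, something along these lines (purely illustrative) lists them:

    Code
    # /dev/sdX names can move around between boots; these symlinks identify
    # the same disks by UUID, label, or hardware ID instead
    ls -l /dev/disk/by-uuid/ /dev/disk/by-label/ /dev/disk/by-id/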


    I don't know why your docker storage path changed.


    I don't have any disks here larger than 14TB so I haven't run into any problems growing partitions and filesystems.


    What led you to think you needed to resize the table?


    Since the disk you are working on is a clone and you still have the original drive, why don't you just try growing the partition and then the filesystem?
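
    Assuming the data filesystem is ext4 and it sits on partition 1 of the disk (adjust both to match your layout, and treat this as a sketch rather than a recipe), the sequence would look something like this:

    Code
    # sdX and the partition number are placeholders for your cloned disk
    sudo growpart /dev/sdX 1     # grow the partition to fill the disk
    sudo resize2fs /dev/sdX1     # grow the ext4 filesystem to fill the partition

    resize2fs can grow a mounted ext4 filesystem online; if the filesystem is XFS, xfs_growfs on the mount point is the equivalent.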


    Edit: You shouldn't have both the original disk and the clone in the machine at the same time because they share a common filesystem UUID and this is ambiguous. If you need to continue using the original disk in the machine with the new disk you should change the filesystem UUID on the original disk.
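
    For an ext4 filesystem, a minimal sketch of doing that (sdY1 is a placeholder for the partition on the original disk) would be:

    Code
    # unmount the filesystem on the ORIGINAL disk first; tune2fs may also ask
    # for a clean check (e2fsck -f) before it will change the UUID
    sudo umount /dev/sdY1
    sudo tune2fs -U random /dev/sdY1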

    --
    Google is your friend and Bob's your uncle!


    OMV AMD64 7.x on headless Chenbro NR12000 1U 1x 8m Quad Core E3-1220 3.1GHz 32GB ECC RAM.



  • Thanks again! Just to be clear, at no point have I ever had both drives in the server at the same time. When I looked at the drive sizes in Linux, there was a red message above that drive (it said something about writing to fix a problem). I asked Copilot, and... well, here I am.


    If you look back a couple of messages ago, I got a weird message when running growpart. It looks like it says it couldn't do anything?

    More importantly, do you have any idea as to how I can fix the docker path? It won't let me save the change (error message is in my previous post).

  • Personally, I never try to grow partitions and modify filesystems unless I have to. In your situation, I would have just created a filesystem on the new disk, rsynced the data from the old drive to the new one, and then either edited any settings that reference it so they point at the new UUID, or changed the UUID of the drive as gderf mentioned.


    Ultimately you end up with the same result and it takes about the same amount of time, but a glitch during an rsync is fixable by re-running the rsync, which will only re-copy files that are different or missing, whereas a glitch during a partition/filesystem manipulation can leave you in a situation that is not easily recoverable, or possibly not recoverable at all, and require you to wipe the new drive and start from the beginning again.
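
    A minimal sketch of that copy, with made-up mount points standing in for the real ones:

    Code
    # both paths are placeholders; the trailing slash on the source copies its contents
    sudo rsync -aHAX --info=progress2 /srv/old-disk/ /srv/new-disk/

    Re-running the same command after an interruption only transfers whatever is still missing or different.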

  • If you look back a couple of messages ago, I got a weird message when running growpart. It looks like it says it couldn't do anything?


    More importantly, do you have any idea as to how I can fix the docker path? It won't let me save the change (error message is in my previous post).

    What I saw was that you ran growpart with the --dry-run option. What were you expecting when using that option?
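
    Just to spell out the difference (sdX and the partition number are placeholders):

    Code
    sudo growpart --dry-run /dev/sdX 1   # only reports what would be done, changes nothing
    sudo growpart /dev/sdX 1             # actually resizes the partition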


    Posting screenshots of errors and such is not helpful because they can't be fully read. You should copy the text to the clipboard and post that instead.


    Not sure why you can't fix the docker storage path and more importantly having it changed without your input is concerning.


    I see you are using things like /srv/dev-disk-by-label- in that docker storage path. Nothing really wrong with that, but it is no longer considered standard usage.
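
    If you want to confirm what that disk is actually mounted as now, something like this (output will obviously differ on your system) shows the data mounts under /srv:

    Code
    # list mount points under /srv and the devices backing them
    findmnt -rn -o TARGET,SOURCE | grep '^/srv'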

    --
    Google is your friend and Bob's your uncle!


    OMV AMD64 7.x on headless Chenbro NR12000 1U 1x 8m Quad Core E3-1220 3.1GHz 32GB ECC RAM.

  • Personally, I never try to grow partitions and modify filesystems unless I have to. In your situation, I would have just created a filesystem on the new disk, rsynced the data from the old drive to the new one, and then either edited any settings that reference it so they point at the new UUID, or changed the UUID of the drive as gderf mentioned.


    Ultimately you end up with the same result and it takes about the same amount of time, but a glitch during an rsync is fixable by re-running the rsync, which will only re-copy files that are different or missing, whereas a glitch during a partition/filesystem manipulation can leave you in a situation that is not easily recoverable, or possibly not recoverable at all, and require you to wipe the new drive and start from the beginning again.

    EDIT: I think I might have a bad drive. The drive that hosts -Docker is red, and most of the time brings up a "timeout" when I try to bring up the device information.


    *******************************************************************

    The message with growpart was the same with or without the --dry-run flag. Once I saw the errors, I started looking at flags while using --dry-run. I should have indicated this.


    Similarly, I spent 20 minutes trying to copy text from the modals. I didn't realize that the "show details" option would allow for easy copying.


    So I did have a little breakthrough this morning. I ran a "dist-upgrade" and my Docker/Portainer environment popped up without me making any other changes (it was still running on the -Backup path instead of the -Docker path). I rebooted, and then was back to weirdness. Docker was still running, but Portainer was not installed. I installed Portainer and then an empty environment came up.


    If I try to change the Docker path to have -Docker again, I get this error:


    Error #0:
    OMV\ExecException: Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C.UTF-8; omv-salt deploy run --no-color omvextras 2>&1' with exit code '1': HAL.BEDELL:
    ---------- ID: omvextrasbaserepo Function: pkgrepo.managed Name: deb https://openmediavault-plugin-…github.io/packages/debian usul main Result: True Comment: Package repo 'deb https://openmediavault-plugin-…github.io/packages/debian usul main' already configured Started: 09:47:33.648213 Duration: 137.524 ms Changes:
    ---------- ID: deb https://openmediavault-plugin-…github.io/packages/debian usul-testing main Function: pkgrepo.absent Result: True Comment: Package repo deb https://openmediavault-plugin-…github.io/packages/debian usul-testing main is absent Started: 09:47:33.785926 Duration: 64.986 ms Changes:
    ---------- ID: deb https://openmediavault-plugin-…github.io/packages/debian usul-extras main Function: pkgrepo.managed Result: True Comment: Package repo 'deb https://openmediavault-plugin-…github.io/packages/debian usul-extras main' already configured Started: 09:47:33.851051 Duration: 187.302 ms Changes:
    ---------- ID: deb [arch=amd64] https://download.docker.com/linux/debian buster stable Function: pkgrepo.managed Result: True Comment: Package repo 'deb [arch=amd64] https://download.docker.com/linux/debian buster stable' already configured Started: 09:47:34.038579 Duration: 126.072 ms Changes:
    ---------- ID: deb http://linux.teamviewer.com/deb stable main Function: pkgrepo.managed Result: True Comment: Package repo 'deb http://linux.teamviewer.com/deb stable main' already configured Started: 09:47:34.164795 Duration: 128.084 ms Changes:
    ---------- ID: configure_apt_pref_omvextras Function: file.managed Name: /etc/apt/preferences.d/omvextras.pref Result: True Comment: File /etc/apt/preferences.d/omvextras.pref is in the correct state Started: 09:47:34.295201 Duration: 51.002 ms Changes:
    ---------- ID: refresh_database_apt Function: module.run Result: False Comment: An exception occurred in this state: Traceback (most recent call last): File "/usr/lib/python3/dist-packages/salt/state.py", line 2172, in call *cdata["args"], **cdata["kwargs"] File "/usr/lib/python3/dist-packages/salt/loader.py", line 1235, in __call__ return self.loader.run(run_func, *args, **kwargs) File "/usr/lib/python3/dist-packages/salt/loader.py", line 2268, in run return self._last_context.run(self._run_as, _func_or_method, *args, **kwargs) File "/usr/lib/python3/dist-packages/salt/loader.py", line 2283, in _run_as return _func_or_method(*args, **kwargs) File "/usr/lib/python3/dist-packages/salt/loader.py", line 2316, in wrapper return f(*args, **kwargs) File "/usr/lib/python3/dist-packages/salt/utils/decorators/__init__.py", line 746, in _decorate return self._call_function(kwargs) File "/usr/lib/python3/dist-packages/salt/utils/decorators/__init__.py", line 377, in _call_function six.reraise(*sys.exc_info()) File "/usr/lib/python3/dist-packages/salt/ext/six.py", line 693, in reraise raise value File "/usr/lib/python3/dist-packages/salt/utils/decorators/__init__.py", line 360, in _call_function return self._function(*args, **kwargs) File "/usr/lib/python3/dist-packages/salt/states/module.py", line 428, in run _func, returner=kwargs.get("returner"), func_args=kwargs.get(func) File "/usr/lib/python3/dist-packages/salt/states/module.py", line 473, in _call_function mret = salt.utils.functools.call_function(__salt__[name], *func_args, **func_kwargs) File "/usr/lib/python3/dist-packages/salt/utils/functools.py", line 159, in call_function return salt_function(*function_args, **function_kwargs) File "/usr/lib/python3/dist-packages/salt/loader.py", line 1235, in __call__ return self.loader.run(run_func, *args, **kwargs) File "/usr/lib/python3/dist-packages/salt/loader.py", line 2268, in run return self._last_context.run(self._run_as, _func_or_method, *args, **kwargs) File "/usr/lib/python3/dist-packages/salt/loader.py", line 2283, in _run_as return _func_or_method(*args, **kwargs) File "/usr/lib/python3/dist-packages/salt/modules/aptpkg.py", line 406, in refresh_db raise CommandExecutionError(comment) salt.exceptions.CommandExecutionError: E: The repository 'http://httpredir.debian.org/debian buster-backports Release' does not have a Release file. Started: 09:47:34.346970 Duration: 5355.024 ms Changes:

    Summary for HAL.BEDELL
    ------------
    Succeeded: 6
    Failed: 1
    ------------
    Total states run: 7
    Total run time: 6.050 s in /usr/share/php/openmediavault/system/process.inc:196
    Stack trace:
    #0 /usr/share/php/openmediavault/engine/module/serviceabstract.inc(62): OMV\System\Process->execute()
    #1 /usr/share/openmediavault/engined/rpc/config.inc(167): OMV\Engine\Module\ServiceAbstract->deploy()
    #2 [internal function]: Engined\Rpc\Config->applyChanges(Array, Array)
    #3 /usr/share/php/openmediavault/rpc/serviceabstract.inc(123): call_user_func_array(Array, Array)
    #4 /usr/share/php/openmediavault/rpc/rpc.inc(86): OMV\Rpc\ServiceAbstract->callMethod('applyChanges', Array, Array)
    #5 /usr/share/openmediavault/engined/rpc/omvextras.inc(180): OMV\Rpc\Rpc::call('Config', 'applyChanges', Array, Array)
    #6 [internal function]: OMVRpcServiceOmvExtras->setDocker(Array, Array)
    #7 /usr/share/php/openmediavault/rpc/serviceabstract.inc(123): call_user_func_array(Array, Array)
    #8 /usr/share/php/openmediavault/rpc/rpc.inc(86): OMV\Rpc\ServiceAbstract->callMethod('setDocker', Array, Array)
    #9 /usr/sbin/omv-engined(537): OMV\Rpc\Rpc::call('OmvExtras', 'setDocker', Array, Array, 1)
    #10 {main}




    Here is a snapshot of the system logs

  • I am thinking that my Docker drive might be bad and that is the cause of the issue where I can't change the Docker path.
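
    For what it's worth, checking the SMART data is one way to confirm that (sdX stands in for the suspect disk, and smartmontools needs to be installed):

    Code
    sudo smartctl -H /dev/sdX   # quick overall health verdict
    sudo smartctl -a /dev/sdX   # full SMART attributes and error log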

  • Sorry for the numerous posts today.


    - I have a new Docker drive on order

    - I suspect that there is another issue somewhere else. I pasted an error in earlier, and I am seeing similar errors when I try to perform other operations unrelated to Docker or that failed drive. I noticed that people upgrading from 5 to 6 had a similar error, and the root cause was that their version of Debian was no longer in a release state. The recommendation was to disable the extras repo and backports. I tried to do this in the UI but got an error. Do you recommend that I disable these via the command line? Or do you think the error is somewhere else?

  • Sorry, I can't help you with upgrade 5->6 problems. It may be very difficult or impossible to do this as OMV5 went end of life quite some time ago and its repos may be shuttered.

    --
    Google is your friend and Bob's your uncle!


    OMV AMD64 7.x on headless Chenbro NR12000 1U 1x 8m Quad Core E3-1220 3.1GHz 32GB ECC RAM.

  • Sorry, I can't help you with upgrade 5->6 problems. It may be very difficult or impossible to do this as OMV5 went end of life quite some time ago and its repos may be shuttered.

    I think you misread my post. I am not trying to upgrade from 5 to 6. When I perform various operations, I am presented with the big ugly error that I referenced earlier, and that error came up in posts where people were trying to upgrade from 5 to 6 (which I am not). It seems to be referencing OMV Extras.

    • Official Post

    I am not trying to upgrade from 5 to 6

    But you should be upgrading from 5 to 6 to 7.


    When I perform various operations, I am presented with the big ugly error that I referenced earlier, and that error came up in posts where people were trying to upgrade from 5 to 6 (which I am not). It seems to be referencing OMV Extras.

    I will help you fix the repo so you can upgrade, not so you can stay on 5. I struggle to remember how omv-extras 5 works even though I wrote the code.


    What is the output of: sudo omv-aptclean

    omv 7.4.7-1 sandworm | 64 bit | 6.8 proxmox kernel

    plugins :: omvextrasorg 7.0 | kvm 7.0.14 | compose 7.2.3 | k8s 7.2.0-1 | cputemp 7.0.2 | mergerfs 7.0.5 | scripts 7.0.8


    omv-extras.org plugins source code and issue tracker - github - changelogs


    Please try ctrl-shift-R and read this before posting a question.

    Please put your OMV system details in your signature.
    Please don't PM for support... Too many PMs!

  • But you should be upgrading from 5 to 6 to 7.


    I will help you fix the repo so you can upgrade, not so you can stay on 5. I struggle to remember how omv-extras 5 works even though I wrote the code.


    What is the output of: sudo omv-aptclean

    Thank you so much! Just to be clear, I don't know if that repo is actually causing an issue. I just know that when I go to do a variety of operations, I am greeted with ugly errors that seem to reference the repo. I've been waiting for a sunny day to migrate... it just hasn't happened yet. I am also a developer and usually like to adhere to the "if it ain't broke" philosophy, but I know the downside of that with OMV is that people will (understandably) not want to help with older versions.

    Here is the output. I captured the bottom of the output, as everything was fine until then. Again, I have no idea if this is causing any other issues. I know that my Docker drive is bad and that it will be replaced with a new drive tomorrow.

    • Official Post

    I've been waiting for a sunny day to migrate... it just hasn't happened yet. I am also a developer and usually like to adhere to the "if it ain't broke" philosophy

    Most devs I know want bleeding edge everything lol.


    What is the output of: sudo omv-changebackports NO

    omv 7.4.7-1 sandworm | 64 bit | 6.8 proxmox kernel

    plugins :: omvextrasorg 7.0 | kvm 7.0.14 | compose 7.2.3 | k8s 7.2.0-1 | cputemp 7.0.2 | mergerfs 7.0.5 | scripts 7.0.8


    omv-extras.org plugins source code and issue tracker - github - changelogs


    Please try ctrl-shift-R and read this before posting a question.

    Please put your OMV system details in your signature.
    Please don't PM for support... Too many PMs!

  • Most devs I know want bleeding edge everything lol.


    What is the output of: sudo omv-changebackports NO

    Trust me, I am that guy. But I became a data scientist last year and all of my free time has been spent ramping up (and keeping up) in that area. I kind of left this alone because my home runs on docker apps and everyone was happy.


    Anyway, that command ran with no issues.

    • Official Post

    Based on your output, the repo problem should be fixed now. Are you still seeing the other issue?

    omv 7.4.7-1 sandworm | 64 bit | 6.8 proxmox kernel

    plugins :: omvextrasorg 7.0 | kvm 7.0.14 | compose 7.2.3 | k8s 7.2.0-1 | cputemp 7.0.2 | mergerfs 7.0.5 | scripts 7.0.8


    omv-extras.org plugins source code and issue tracker - github - changelogs


    Please try ctrl-shift-R and read this before posting a question.

    Please put your OMV system details in your signature.
    Please don't PM for support... Too many PMs!
