Okay, I'm using https://phoenixnap.com/kb/linux-create-partition as my guide, but now I have sdb as 8TB and don't know how to make an sdb2 to put the other 8TB on?
Error mounting new disk - salt, I think?
-
- OMV 5.x
- SoMuchLasagna
-
-
You should be making sdb1 and sdb2
-
Yeah, trying to use gdisk (guide here, from @Soma) and I got sdb1 made at 8TB, but then sdb2 was like, super tiny. Not sure why.
EDIT: I'm dumb. Was starting sdb2 too soon, sector-wise. I now have 8TB sdb1 and 6.6TB sdb2.
Do I refresh OMV gui? Delete the File System that's 14TB and see if I can add these individually?
-
Yes, delete the 14TB filesystem. Then try creating a new one: select the 8TB partition first and see if you can get it made and mounted.
-
-
I was expecting to see both in the drop down list.
-
Same.
Do I need to reboot or something? I already did.
-
WHOA - rebooted and now the sda/sdb labels have CHANGED. My 16tb drive is now sda and one of my original drives is now sdb.
Is this normal? Why would that happen?
Also, going to try to add the newly named sda drive - when it formats, is it going to delete my partitions? OMV still looks like it's treating it like a single partition, 14TB sized.
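(Side note on the letter shuffle above: sda/sdb names are assigned in probe order at boot and are not stable, which is exactly why the persistent aliases under /dev/disk/ exist. A generic sketch to list them, not specific to this box:)

```shell
# Device letters (sda, sdb, ...) are assigned in probe order and can change
# across reboots. The stable aliases the kernel keeps live under /dev/disk/:
for d in by-id by-uuid by-path; do
    echo "== /dev/disk/$d =="
    ls /dev/disk/$d 2>/dev/null || echo "(not present on this system)"
done
```

OMV itself mounts by UUID, so the letter change is cosmetic once everything is set up.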
-
Drives getting relabeled is not surprising. But be very careful going forward. If possible, I suggest disconnecting your other drives and having just the new one in place.
-
Well, formatting through OMV got rid of the two partitions. Back to just sda1 (checked via lsblk). Super annoying. And when I tried to mount, same error I was getting when it was sdb. I'm at a loss.
Am I going to have to like, reformat everything, reinstall OMV, lose everything/start over to get all three drives working together?
This is frustrating. I really appreciate all of your help, though.
-
Adding a new disk isn't going to mean starting all over. Those 16TB Exos drives are everywhere, as are Debian and Debian-based systems. I am out of ideas for now.
-
Ok, new day, fresh start.
Confirm what letter has been assigned to the drive. (I'll use X on the following lines)
lsblk -f
Do the same as in post #23 to get the 2 partitions created: sdX1 8TiB && sdX2 6.6TiB.
fdisk -l /dev/sdX
lsblk -f | grep /dev/sdX
After the partitions have been written (and fdisk has synced the disk), run:
partprobe
Reboot, and confirm that it's still the same as above. (the drive letter and the blkid or lsblk)
Now, instead of doing it in the GUI, format the partitions on the CLI (I use this command instead):
mkfs.ext4 /dev/sdX1    (wait till it's finished; it takes some time)
mkfs.ext4 /dev/sdX2    (wait till it's finished; it takes some time)
Reboot again.
On the OMV GUI, check "Storage -> Disks" to see/confirm the drive is there.
Go to "Storage -> File Systems" and the 2 partitions should be there, with the FS type visible but not mounted (of course).
Then, select one and click "Mount":
If all goes well, do the same on the 2nd partition.
And, do a reboot to see if all is OK.
You can then recheck the fstab/lsblk/blkid to confirm that the drive/partitions are configured.
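If you want to rehearse the partitioning step before touching the real disk, the same layout can be tried on a throwaway image file; sfdisk (from util-linux, a scriptable alternative to the gdisk walkthrough above) happily operates on a plain file. Sizes below are scaled-down stand-ins for the real 8TiB/6.6TiB split:

```shell
# Throwaway 100 MiB image standing in for the 16TB disk
truncate -s 100M demo.img

# GPT label, partition 1 = first 60 MiB (stand-in for the 8TiB sdX1),
# partition 2 = the rest of the disk (stand-in for the 6.6TiB sdX2).
# sfdisk picks the next free start sector automatically, which avoids
# the "started sdX2 too soon" mistake from earlier in the thread.
sfdisk demo.img <<'EOF'
label: gpt
,60M
,
EOF

# Print the resulting table -- both partitions should show up
sfdisk -l demo.img
```

Delete demo.img afterwards; on the real disk you'd point sfdisk (or gdisk) at /dev/sdX instead.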
Fingers crossed,
-
Used gdisk, created 8TB and 6.6TB partitions.
Screenshot attached for fdisk -l /dev/sda.
lsblk -f | grep /dev/sda doesn't return anything. Just gives me another blank line. (lsblk prints the names without the /dev/ prefix, so grep sda would be needed to match.)
Partprobe also doesn't return anything in CLI.
After rebooting, the 16TB went back to sdb, and I see sdb1 and sdb2.
--
I see both partitions in file systems, but when I try to mount sdb1, the same error appears.
Code
Error #0: OMV\ExecException: Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C.UTF-8; omv-salt deploy run --no-color fstab 2>&1' with exit code '1':
debian: Data failed to compile:
----------
Rendering SLS 'base:omv.deploy.fstab.15mergerfsfolders' failed: Jinja error: 'xsystemd'
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/salt/utils/templates.py", line 497, in render_jinja_tmpl
    output = template.render(**decoded_context)
  File "/usr/lib/python3/dist-packages/jinja2/asyncsupport.py", line 76, in render
    return original_render(self, *args, **kwargs)
  File "/usr/lib/python3/dist-packages/jinja2/environment.py", line 1008, in render
    return self.environment.handle_exception(exc_info, True)
  File "/usr/lib/python3/dist-packages/jinja2/environment.py", line 780, in handle_exception
    reraise(exc_type, exc_value, tb)
  File "/usr/lib/python3/dist-packages/jinja2/_compat.py", line 37, in reraise
    raise value.with_traceback(tb)
  File "<template>", line 25, in top-level template code
  File "/usr/lib/python3/dist-packages/jinja2/sandbox.py", line 385, in getattr
    value = getattr(obj, attribute)
  File "/usr/lib/python3/dist-packages/openmediavault/collectiontools.py", line 126, in __getitem__
    return dict.__getitem__(self, key)
KeyError: 'xsystemd' ; line 25
---
[...]
{% for dir in branchDirs %}
{% if dir | length > 2 %}
{% set _ = branches.append(dir) %}
{% if '*' not in dir %}
{% set parent = salt['cmd.shell']('findmnt --noheadings --output TARGET --target ' + dir) %}
{% if not pool.xsystemd %}    <======================
{% if parent | length > 1 %}
{% set _ = options.append('x-systemd.requires=' + parent) %}
{% endif %}
{% endif %}
{% endif %}
[...]
--- in /usr/share/php/openmediavault/system/process.inc:195
Stack trace:
#0 /usr/share/php/openmediavault/engine/module/serviceabstract.inc(62): OMV\System\Process->execute()
#1 /usr/share/openmediavault/engined/rpc/config.inc(167): OMV\Engine\Module\ServiceAbstract->deploy()
#2 [internal function]: Engined\Rpc\Config->applyChanges(Array, Array)
#3 /usr/share/php/openmediavault/rpc/serviceabstract.inc(123): call_user_func_array(Array, Array)
#4 /usr/share/php/openmediavault/rpc/rpc.inc(86): OMV\Rpc\ServiceAbstract->callMethod('applyChanges', Array, Array)
#5 /usr/share/openmediavault/engined/rpc/filesystemmgmt.inc(917): OMV\Rpc\Rpc::call('Config', 'applyChanges', Array, Array)
#6 [internal function]: Engined\Rpc\OMVRpcServiceFileSystemMgmt->mount(Array, Array)
#7 /usr/share/php/openmediavault/rpc/serviceabstract.inc(123): call_user_func_array(Array, Array)
#8 /usr/share/php/openmediavault/rpc/rpc.inc(86): OMV\Rpc\ServiceAbstract->callMethod('mount', Array, Array)
#9 /usr/sbin/omv-engined(537): OMV\Rpc\Rpc::call('FileSystemMgmt', 'mount', Array, Array, 1)
#10 {main}
-
Really scratching my head here,
And this is weird:
Found a atari partition table in /dev/sdb2
Never saw this before.
Since you have mergerFS running, it makes me think it's messing with the drives (but again, I'm not that experienced with mergerFS).
But, since OMV mounts/recognizes the drives by UUID, it shouldn't make any difference.
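For reference, this is roughly the shape of the fstab entry OMV generates; it keys on the UUID, not on sda/sdb, which is why the letter shuffle is harmless once the filesystem is mounted. The UUID, mount path, and options below are made-up illustrations, not values from this system:

```
# /etc/fstab (illustrative entry only -- get the real UUID from blkid or lsblk -f;
# OMV adds further ext4 options of its own)
UUID=0f3b4a1c-aaaa-bbbb-cccc-111122223333  /srv/dev-disk-by-uuid-0f3b4a1c-aaaa-bbbb-cccc-111122223333  ext4  defaults,nofail  0  2
```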
Can you:
apt install tree
Post the output in a codebox ( </> ) of:
tree /dev/disk/
As a last resort, just to do a test, see the spoiler:
If OMV refuses to mount them, you can try one last thing:
With both partitions made:
mkdir -p /mnt/exosp1
mkdir -p /mnt/exosp2
mount -t ext4 /dev/sdX1 /mnt/exosp1
mount -t ext4 /dev/sdX2 /mnt/exosp2
See if you can copy some folder to either one of them:
cp -avr /home/<an existing user name> /mnt/exosp1/
cp -avr /home/<an existing user name> /mnt/exosp2/
ls -al /mnt/exosp1/
ls -al /mnt/exosp2/
If the files are copied, then the partitions are mounted and running.
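One extra check worth doing after the cp test above: diff the source against the destination so you know every file actually landed. The sketch below uses throwaway temp dirs so it is safe to run anywhere; on the real box, substitute /home/<user> and /mnt/exosp1/<user>:

```shell
# Stand-in source/destination (swap in the real paths on the NAS)
src=$(mktemp -d)
dst=$(mktemp -d)
echo "hello" > "$src/file.txt"

# Same copy as in the thread, then verify recursively, byte for byte;
# diff -r exits 0 only if every file matches
cp -avr "$src/." "$dst/"
diff -r "$src" "$dst" && echo "copy verified"

rm -rf "$src" "$dst"
```

On the real partitions, a `df -h /mnt/exosp1` afterwards also confirms the used space comes off the new disk.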
-
The problem will be fixed in the next release of the mergerfs plugin, see https://github.com/OpenMediaVa…lt-mergerfsfolders/pull/8.
-
Will that release also work with OMV 6? I'm holding off on upgrading because I don't want to break my current setup.