Sorry for the delay, but life took over.
Have you fixed it?
No worries, I get it. As for my problem, still no luck.
In that case, you can create a partition via the CLI and format it, and (hopefully) OMV will pick it up.
Since you have a 6 TB disk, use "gdisk" instead of "fdisk" (gdisk deals better with >2 TB disks, IMO).
apt install gdisk
Then follow these instructions to create a GPT partition table and a single Linux partition spanning the whole disk:
See only these topics:
Creating a new GPT partition table
Creating a new partition
After a reboot, go to the OMV web GUI and you should be able to format the partition with an ext4 filesystem and have it available.
I've been following this thread with interest. I think the issue here is related to #7: the output from wipefs -n /dev/sdc shows a PMBR (Protective Master Boot Record), which correlates with the original error in #1.
#1 references a missing NTFS signature; together with the PMBR, this suggests the drive was preformatted with NTFS.
The way forward would be to run dd on the drive and write zeros to the whole drive. That is outside my remit, as I use DBAN.
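A sketch of what that wipe could look like, again practicing on an image file first (the /dev/sdX device name is a placeholder; zeroing the wrong disk is unrecoverable):

```shell
# Practice target: an image file with an ext4 signature on it
truncate -s 16M /tmp/wipe.img
mkfs.ext4 -F -q /tmp/wipe.img   # give it a signature to find
wipefs -n /tmp/wipe.img         # -n: report signatures, change nothing
wipefs -a /tmp/wipe.img         # -a: erase all signatures (fast)
# Thorough alternative for a real disk: zero every sector (slow on 6 TB)
# dd if=/dev/zero of=/dev/sdX bs=1M status=progress
```

In many cases erasing just the signatures with wipefs -a is enough for OMV to treat the disk as blank, without waiting hours for a full dd pass.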
Hi Soma, well, your method worked perfectly. I guess you're right, OMV is a bit fussy with the larger drives. The drive is now formatted and mounted with NO errors. Can't thank you enough for the help. Now the next thing is for me to figure out the best way to do a complete backup of ALL my data (Movies, Music, TV, Configs).
Having the same issue with a 3 TB drive.
I tried mounting the drive and then openmediavault crashed with the following error: 'Removing the directory '/' has been aborted, the resource is busy.' and the system rebooted.
Then I re-partitioned and formatted the drive, added it to fstab and rebooted, so now the drive is visible under 'Filesystems', but when I try to add a shared folder the drive does not show up.
Man! I followed these instructions to a T, hoping they would work, but I'm still getting an error.
I am running OMV 5 and have OMV installed on a 250GB Samsung SSD. I started with two drives, a WD Red and WD White (both 8TB) and used mergerfs to 'create' a single, 16TB drive. That drive has since been filled, so I recently added a 16TB Seagate Exos drive to my server.
Unfortunately, after installing it and formatting it through OMV (ext4), I can't get it to mount. It formats fine and is detected by the OS, but when I try to mount it, I get the same error 100% of the time:
Error #0:
OMV\ExecException: Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C.UTF-8; omv-salt deploy run --no-color fstab 2>&1' with exit code '1': debian:
Data failed to compile:
----------
Rendering SLS 'base:omv.deploy.fstab.15mergerfsfolders' failed: Jinja error: 'xsystemd'
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/salt/utils/templates.py", line 497, in render_jinja_tmpl
output = template.render(**decoded_context)
File "/usr/lib/python3/dist-packages/jinja2/asyncsupport.py", line 76, in render
return original_render(self, *args, **kwargs)
File "/usr/lib/python3/dist-packages/jinja2/environment.py", line 1008, in render
return self.environment.handle_exception(exc_info, True)
File "/usr/lib/python3/dist-packages/jinja2/environment.py", line 780, in handle_exception
reraise(exc_type, exc_value, tb)
File "/usr/lib/python3/dist-packages/jinja2/_compat.py", line 37, in reraise
raise value.with_traceback(tb)
File "<template>", line 25, in top-level template code
File "/usr/lib/python3/dist-packages/jinja2/sandbox.py", line 385, in getattr
value = getattr(obj, attribute)
File "/usr/lib/python3/dist-packages/openmediavault/collectiontools.py", line 126, in __getitem__
return dict.__getitem__(self, key)
KeyError: 'xsystemd'
; line 25
---
[...]
{% for dir in branchDirs %}
{% if dir | length > 2 %}
{% set _ = branches.append(dir) %}
{% if '*' not in dir %}
{% set parent = salt['cmd.shell']('findmnt --noheadings --output TARGET --target ' + dir) %}
{% if not pool.xsystemd %} <======================
{% if parent | length > 1 %}
{% set _ = options.append('x-systemd.requires=' + parent) %}
{% endif %}
{% endif %}
{% endif %}
[...]
--- in /usr/share/php/openmediavault/system/process.inc:195
Stack trace:
#0 /usr/share/php/openmediavault/engine/module/serviceabstract.inc(62): OMV\System\Process->execute()
#1 /usr/share/openmediavault/engined/rpc/config.inc(167): OMV\Engine\Module\ServiceAbstract->deploy()
#2 [internal function]: Engined\Rpc\Config->applyChanges(Array, Array)
#3 /usr/share/php/openmediavault/rpc/serviceabstract.inc(123): call_user_func_array(Array, Array)
#4 /usr/share/php/openmediavault/rpc/rpc.inc(86): OMV\Rpc\ServiceAbstract->callMethod('applyChanges', Array, Array)
#5 /usr/share/openmediavault/engined/rpc/filesystemmgmt.inc(917): OMV\Rpc\Rpc::call('Config', 'applyChanges', Array, Array)
#6 [internal function]: Engined\Rpc\OMVRpcServiceFileSystemMgmt->mount(Array, Array)
#7 /usr/share/php/openmediavault/rpc/serviceabstract.inc(123): call_user_func_array(Array, Array)
#8 /usr/share/php/openmediavault/rpc/rpc.inc(86): OMV\Rpc\ServiceAbstract->callMethod('mount', Array, Array)
#9 /usr/sbin/omv-engined(537): OMV\Rpc\Rpc::call('FileSystemMgmt', 'mount', Array, Array, 1)
#10 {main}
The people in r/selfhosted (reddit and Discord) have told me there's a salt issue, but I've tried all sorts of omv-salt commands to rectify that to no avail. I stumbled upon this post and it sounded SO similar to mine, so I used gdisk, created a partition table, etc., then deleted and re-added the drive through the OMV GUI. It formatted fine, so once it was done initializing, I tried mounting it and got this error AGAIN.
I'm hoping to finally get it mounted so I can add its absolute path to mergerfs and open up another 16TB. Otherwise, my drives are full.
If anyone can help, I would be indebted to you. Soma?
OK, solved the problem.
This is definitely a problem in, or bug of, the openmediavault interface.
Solution:
If the drive is visible within the 'Drives' tab of the openmediavault interface, do the following:
Open a terminal within Cockpit (or use the terminal after startup) as the 'root' user.
Discover the UUID of the drive by typing: blkid, and note the UUID of the drive you want to mount.
Check that the drive appears under dev by typing: cd /dev/disk/by-uuid.
Then type: ls -l and see if the UUID of the drive is listed (it should be, since the drive is visible).
Now go to the srv folder with: cd /srv and check whether a folder exists with the name dev-disk-by-uuid-<your drive's UUID>.
If it does not exist, create it with: mkdir dev-disk-by-uuid-<your drive's UUID> and give it the proper user rights (the same rights as the existing drives? -> list the rights of the existing drives with: ls -l and change rights with chmod u=rwx,g=rwx,o=rwx).
Now edit fstab so the drive is mounted each time openmediavault starts, with: pico /etc/fstab.
Add a line at the end of fstab like this: /dev/disk/by-uuid/<your drive's UUID> /srv/dev-disk-by-uuid-<your drive's UUID> ext4 defaults,nofail,user_xattr,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl 0 2
Save the fstab file.
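The fstab steps above can be sketched as a small script. The UUID below is a made-up example; use the one blkid actually reports. The privileged commands are left commented so the entry can be reviewed before anything touches /etc/fstab:

```shell
# Assumed example UUID; substitute the value blkid reports for your drive
UUID=1234abcd-5678-ef90-1234-567890abcdef
MNT=/srv/dev-disk-by-uuid-$UUID
ENTRY="/dev/disk/by-uuid/$UUID $MNT ext4 defaults,nofail,user_xattr,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl 0 2"
echo "$ENTRY"                     # review the line before committing it
# sudo mkdir -p "$MNT"
# echo "$ENTRY" | sudo tee -a /etc/fstab
# sudo mount -a                   # mounts everything in fstab, no reboot needed
```

The nofail option matters here: without it, a missing or broken drive can stall the whole boot.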
Now create your own shared folders on this drive with: mkdir /srv/dev-disk-by-uuid-<your drive's UUID>/<your shared folder's name> and give them the proper user rights with chmod.
To access a shared folder from Windows, make it a Samba share by editing the smb.conf file: pico /etc/samba/smb.conf, copy the contents of an existing Samba share section to the end of the file, and edit it, replacing the path with: path = /srv/dev-disk-by-uuid-<yourdrives-UUID>/<your shared folders name>/ .
Reboot the system and your created folder will be visible in Windows. However, the shared folder is NOT visible within the openmediavault interface!
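For reference, such a copied share section in smb.conf might look like the fragment below. The share name [myshare] and the options are assumptions for illustration; in practice you would copy an existing section from your own smb.conf, as the post describes:

```ini
; Hypothetical share section in /etc/samba/smb.conf
[myshare]
   path = /srv/dev-disk-by-uuid-<yourdrives-UUID>/<your shared folders name>/
   browseable = yes
   read only = no
   guest ok = no
```

Note that OMV normally generates smb.conf itself, which is why manual edits like this do not show up in (and can be overwritten by) the web interface.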
I have not read the whole post, but what you describe works against OMV. No wonder it causes problems.
To access this shared folder with Windows make it a samba share by editing the smb.conf file as follows: pico /etc/samba/smb.conf and copy the contents of an already samba share folder to the end of the file and edit it by replacing the path with: path = /srv/dev-disk-by-uuid-<yourdrives-UUID>/<your shared folders name>/ .
Really????? Why do you use OMV if you do everything manually?
To all of you reading the above post: please ignore it.
I really want to use the openmediavault interface, but I'm stuck at adding a new drive: the whole damn system reboots when I try to add one!
So, to all: for now you can use this as a workaround until this bug is fixed.
votdev: until you have a better answer/workaround for this bug, I really want to ignore your kind of comments!
Maybe explain what happened AND how it can be reproduced. Based on your posts, I have the feeling you are doing things that work against how OMV works internally. Adding a 3 TiB disk via the UI is not rocket science; it has been done thousands of times by OMV users already.
I tried mounting the drive and then openmediavault crashes with the following error: 'Removing the directory '/' has been aborted, the resource is busy.' and the system reboots.
You didn't tell us that you were trying to delete the mount point entry. This message only appears in that scenario; it indicates that a system service or process, e.g. SMB, is accessing the file system. In that case you need to remove the SMB share, apply the settings, remove the shared folder, and only then can the file system be removed.
But based on your posting, I assume a process not managed by OMV is accessing the disk. So make sure this process is stopped before removing the file system.
See the comment in my other post.
Just for reference: Openmediavault crashes after mounting new drive
Can't wipe my external HDD via USB3.0
Why do you want to wipe the drive?
Are you able to mount it in OMV6?
Hi guys!
I've just tried following the whole forum thread, but it didn't work for me.
I have a 1 TB HDD that I'm trying to mount and use as my SMB share.
OMV sees the disk, but when I try to mount it and accept the configuration change, it shows a failure. When I click 'OK', I see that the disk is already mounted.
When I try to create an SMB share, sometimes I can't even find the disk in the list of devices. But sometimes magic happens: I can create an SMB share, only for it to show an error later when I use it.
I have read this thread several times and tried all the suggested fixes, but still no result. It might be that I lack Linux experience, so please help!!!
Are you running a desktop environment on the pi?
Hello Zoki!
Nope, I use only command line when working with Pi directly. Otherwise only with the Web GUI of the OMV.