Your error looks like there is an unknown directive in the proftpd config file.
Run
dpkg --configure -a
and post the output
Edit: added missing -a in command
Had this last night. For me the trigger seems to be having mergerfs mounts in my fstab. Fixed by remounting rw, commenting out those lines, rebooting, and unticking the fstab checkbox in the mergerfs mount settings in the UI (then applying and rebooting to verify).
Not sure what changed to cause this. I thought it was that I'd set cache.files=partial in the mergerfs opts, but removing that didn't help. The only relevant-looking error in dmesg is:
[ 3.753464] scsi 2:0:0:1: Wrong diagnostic page; asked for 1 got 8
[ 3.753476] scsi 2:0:0:1: Failed to get diagnostic page 0x1
[ 3.753481] scsi 2:0:0:1: Failed to bind enclosure -19
Which can supposedly be caused by delays spinning up a sleeping HDD, but I have no idea why that would suddenly be an issue when it hasn't been before.
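For anyone wanting to script the recovery described above, here is a rough sketch. It is demonstrated on a throwaway copy of fstab; the paths, UUIDs, and sed pattern are illustrative, not taken from my actual system:

```shell
#!/bin/sh
# Sketch of the recovery above. On a real system you would first
# remount root writable:  mount -o remount,rw /
# and then point FSTAB at /etc/fstab instead of this demo copy.
FSTAB=/tmp/fstab.demo
printf '%s\n' \
  'UUID=abcd1234 / ext4 errors=remount-ro 0 1' \
  '/srv/dev-disk-by-uuid-1:/srv/dev-disk-by-uuid-2 /srv/pool fuse.mergerfs defaults,allow_other 0 0' \
  > "$FSTAB"
# Comment out every non-comment line that mounts a fuse.mergerfs pool
sed -i 's|^\([^#].*fuse\.mergerfs.*\)|#\1|' "$FSTAB"
cat "$FSTAB"
```

After commenting the lines and rebooting, the fstab checkbox in the mergerfs plugin settings still needs unticking in the UI, as described above.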
Your error looks like there is an unknown directive in the proftpd config file.
Run
dpkg --configure
and post the output
Error, because the -a is missing.
With -a it says nothing: no message, no status shown.
Good idea. Commenting out the mergerfs line in fstab mounts the filesystem rw. So "success", but it doesn't help because I need that mount.
Post edited
On what version of OMV are you right now?
What kind of problem are you experiencing? If it is only the mergerfs, remove it, upgrade to OMV 6 and use the new plugin.
Something with 5.2x. Unfortunately neither the browser nor the omv- commands work, so I can't check the version. What other command can show me the version?
Other problems:
What other command can show me the version?
dpkg -l | grep openmedia
also post
echo $PATH
-> 5..26
-> /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
What is the output of:
ls -lah /usr/sbin/omv-*
A big fu***ing oops
It gives no output. Problem found?
Trying
gives
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:
The following packages have unmet dependencies:
php-pam : Depends: phpapi-20180731
E: Unable to correct problems, you have held broken packages.
The idea of reinstalling comes closer.
Be warned: I had this "suddenly read-only" from two causes relatively soon after upgrading from OMV 5 -> 6.
1. was a bad cable on the root SSD drive: dmesg showed errors on / and then / was remounted read-only. I lost a bit of data on / because of it, but after reinstalling a few packages everything was fine. Could have happened at any time, I guess.
2. was after adding a 4th drive to the mergerFS pool.
This one was a bit more complex:
Apparently there's a limitation of 255 characters when concatenating drives with mergerfs, and my naming scheme was the classic long one with long UUIDs.
This led to a "file name too long" warning/error in dmesg on reboot, and a read-only root.
The strange part is that during mergerFS plugin development for OMV 6 this limitation was encountered, and a workaround was found: use /etc/fstab to concatenate the drives for mergerFS (since /etc/fstab was thought to be immune to this limitation).
On my system, this /etc/fstab workaround worked... and didn't work at the same time:
a) long names + NOT using fstab = read-only on reboot ("file name too long" in dmesg)
b) long names + using fstab = read-only on reboot (still "file name too long" in dmesg); BUT after remounting / in rw mode and then mounting manually, the mergerFS pool worked without complaint.
The fix was to shorten the names of the drives used for mergerfs in /etc/fstab (with / in rw mode: mount -o remount,rw /), i.e. all names shortened with a * wildcard from "/srv/dev-disk-by-uuid-31741764197491757717" to "/srv/dev-*7717". After that, even after restart, all was fine.
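As an illustration (the second UUID and the pool path here are invented, not taken from my system), the shortening turns an /etc/fstab line like the first one below into the second; mergerfs expands the * globs when the pool is mounted:

```
# before: full by-uuid branch names
/srv/dev-disk-by-uuid-31741764197491757717:/srv/dev-disk-by-uuid-98765432109876549876 /srv/pool fuse.mergerfs defaults,allow_other 0 0
# after: branches shortened with wildcards
/srv/dev-*7717:/srv/dev-*9876 /srv/pool fuse.mergerfs defaults,allow_other 0 0
```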
I now need to remember NOT to touch anything in the OMV configuration related to filesystems ...
What is the output of apt-cache policy php-pam
But given the number of errors you have, reinstalling might be less work.
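Before wiping the system, it may be worth checking whether the "held broken packages" message comes from an actual hold. These are standard apt commands, nothing OMV-specific:

```shell
# list packages explicitly marked "hold" (often empty)
apt-mark showhold
# show which versions of php-pam are available and from where
apt-cache policy php-pam
# dry-run apt's dependency repair without changing anything (-s = simulate)
apt-get -s -f install
```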
A few days ago I reinstalled with a clean new installation of OMV 6.0.27-1.
Two external 7 TB disks are attached via USB.
Last night the OMV halted with errors similar to the above.
[ 3.753464] scsi 2:0:0:1: Wrong diagnostic page; asked for 1 got 8
[ 3.753476] scsi 2:0:0:1: Failed to get diagnostic page 0x1
[ 3.753481] scsi 2:0:0:1: Failed to bind enclosure -19
This indicates a problem in the communication between the kernel and the USB drive. Did the drive go to sleep and fail to wake up?
Some have reported that changing the cable might help, but I think this is a red herring.
Hi everyone, same problem here... using FileZilla to transfer some data, the transfer starts correctly for a couple of files and then suddenly stops with a "Read-only file system" message...
EDIT: it also happens with SAMBA
I seem to have hit this read-only problem too. I have two USB drives as backup (using GoodSync software). Both drives started giving an "unable to upload file, access denied" error in GoodSync. In OMV the drives look fine; they are in guest mode. I did get an error telling me to run fsck, but that cleared when I updated my OMV (I had not done this for some time). If I check the logs now I cannot see any errors about read-only drives.
But if I try to copy a file to an OMV drive folder it is denied, so there is definitely some error.
I did feel it was a problem with folder names. My fstab drive names have not changed, but I did make some folders with long names (and even one with a ".") around the time the drives must have gone read-only, so I think that may be the issue.
I have read the thread but cannot find a fix for my issue. Any help much appreciated. I would like to save the data in the backups because there is over 300 GB on them!
Apologies, I have just checked the SMB/CIFS diagnostics and the drive is labelled locked and read-only! Is there a way to fix this? Thanks.
Just to add: I can attach the USB drive to my MacBook and run First Aid with Disk Utility, but this fails to solve the problem. It seems there is more than one volume on the drive that Disk Utility cannot unmount, so it checks the volume that has the folders/files and reports they are okay, but it cannot fix the other volume(s).
Is there some software that could fix this? Or does it look like I need to format the drive? I used exFAT before; maybe I should try a different format?