I have. Please look further right. There you see the "dockertest" folder I created following your recommendation. It was empty before I reinstalled docker. Now it contains a folder structure, which was not there before...
Posts by m4tt0
-
Yes, all done. And they all correctly point to the new SSD. All my containers are up and running as well after fixing the installation with "omv-upgrade".
-
I have it working, but only by manually running "omv-upgrade" as indicated in the original post.
If I try to reinstall docker via the docker-compose settings menu, I still get the same errors I initially reported, and whenever I do that, I have to run "omv-upgrade" again. So something is still wrong.
With "temporary path" I was referring to the directory I created on my mergerfs pool to temporarily store my AppData and compose folders. This was only done to swap the SSD. With "persisting" I was referring to ryecoaaron's observation that this path still appears in the output of journalctl. I therefore assumed that something is wrong with the configuration.
-
OK, that path is wrong. I used the /srv/mergerfs/data/ path to store the old SSD content, i.e. the AppData, docker and compose directories. I then changed the docker path in the docker-compose settings to the mergerfs path, to get rid of the old SSD reference. That must have been before I uninstalled the docker-compose plugin. After reinstalling it, I set a different path, i.e. /srv/dev-disk-by-uuid-5c8e8fc1-94ee-431e-b45e-7939d6339072/docker, which references the new SSD. That is also the path that shows in the docker-compose settings page. I've just checked it again.
And yes, I have deleted the directories on my mergerfs after copying them back to my new SSD, so I'm not surprised they are not found anymore. I wonder why the temporary path persists in the configuration...
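For reference, this is roughly how one can cross-check which data root the docker daemon actually uses (a sketch; the daemon.json path and key are docker's standard ones, but the plugin may persist its setting elsewhere):

```shell
# Where the daemon's data root is usually configured:
grep -o '"data-root": *"[^"]*"' /etc/docker/daemon.json
# Ask the running daemon directly which root it is using:
docker info --format '{{ .DockerRootDir }}'
```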
-
Thanks for looking into it and your explanations!
As to your question: No, I did not copy the "docker" directory to the new SSD. I've read the omv-extras wiki entry on docker (which is actually excellent!) and understood that it will be rebuilt with a new docker installation.
I'm attaching the output of the "journalctl -u docker" command. It's a txt file, but I had to zip it so as not to exceed the file size limit.
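For completeness, the attachment was produced along these lines (filenames are my own; I used zip, gzip shown here works the same way):

```shell
# Export the docker unit's journal to a plain text file...
journalctl -u docker --no-pager > docker-journal.txt
# ...and compress it to stay under the forum's upload size limit.
gzip -k docker-journal.txt   # produces docker-journal.txt.gz, keeps the original
```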
-
I've been running docker for some time on my OMV6 system. I've also upgraded to the new docker-compose system without major problems.
Today, I had to exchange an SSD drive, which contained my AppData, docker and compose folders.
Obviously, I had to remove all references to the old SSD filesystem in order to swap the drive.
It eventually worked out, but I struggled to transition the docker environment itself properly:
- I first stopped and removed all containers (using the "down" button on the "Files" section)
- I then tried to find a place to stop the docker environment, but could not find it
- I eventually decided to uninstall the docker-compose plugin, as it obviously contained references to the SSD filesystem as well.
- I did not disable the docker repo under OMV-extras, because I thought that it just adds the apt repo to the system. Maybe that was wrong?
- In any case, after this I was able to unmount the old SSD, insert the new one, and re-establish the AppData and compose folders, shared folders, etc.
- I then "apt-cleaned" the docker repo and reinstalled the docker-compose plugin
- In the docker-compose settings, I inserted the "new" shared folders, which I created on the new SSD (of course with the "old" content).
- I saved the changes and clicked "reinstall docker", but the installation fails. It still fails every time I retry. Error message attached.
- I can fix the installation by running "omv-upgrade" via the CLI; it reports that docker-ce is not fully installed. Afterwards, everything runs fine.
Any idea what goes wrong here? Or is it a problem with the new docker integration?
-
Hi Chiller8891,
I'd first check whether the HDD is actually still OK. Can you attach it to another computer and mount the drive there? Can you see files on that drive? And can you make sure the drive is not 100% full, which may have been what led to the problems I was facing?
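On the other computer, a few non-destructive checks would be something like this (the device name /dev/sdb1 is an assumption; check lsblk or dmesg for the real one):

```shell
sudo mount /dev/sdb1 /mnt   # /dev/sdb1 is a placeholder for your drive
ls /mnt                     # can you see your files?
df -h /mnt                  # is the filesystem (close to) 100% full?
sudo umount /mnt
```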
-
Understood. And Ctrl-R does the job. Thanks votdev and ryecoaaron!
-
I'm currently syncing some hard drives, which will take about two days. While the rsync window is open, the "close" option is greyed out. The "stop" option is available, but of course I do not want to disrupt the process. When I open another browser window and log into OMV, I get a note that a background process is running. I can go into the message center and attach to it, and the rsync window reappears. It would be nice to have an option to detach it again and relegate it to the message center, so that you don't have to log in twice or multiple times...
If that option exists, I wasn't able to find it...
-
I could not fix the issue but found a workaround.
First, some more observations, since I found that others have run into similar issues and probably will in the future.
The problem was an inconsistency between the OMV configuration and the underlying Debian configuration. I finally realized that the USB drive was completely full. I assume that this was the root cause of the "missing filesystem" error in the first place, although I believe this should not happen on a NAS system.
I freed up some space on the drive, thinking this might help with mounting the existing filesystem. It didn't. Following recommendations from other threads, I tried to restore consistency by running "omv-salt deploy run fstab". This removed the problematic mount point entry from the fstab (but did not unmount the drive). In any case, whenever I rebooted the server, the rogue fstab entry reappeared. I don't know whether this is some kind of "automount" feature for (USB) drives, but I could not get rid of it.
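The consistency check boils down to something like this (a sketch; the grep pattern assumes OMV's usual /srv/dev-disk-by-uuid-… mount points):

```shell
# Regenerate /etc/fstab from the OMV database:
sudo omv-salt deploy run fstab
# Mount points OMV manages in fstab:
grep 'dev-disk-by-uuid' /etc/fstab
# What is actually mounted right now:
findmnt --real
```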
As I was stuck, I finally decided to remove the affected disk from the OMV system-configuration completely. In order to achieve that, I did the following:
- Per CLI: omv-salt deploy run fstab to create a "consistent" fstab.
- I then manually unmounted the affected USB drive and removed the empty /srv/dev-by-uuid-5950.. directory
- Back in the OMV GUI, under Storage / Disks, I simply wiped the USB drive.
- Under Storage / File Systems I created a new ext4 filesystem on the drive and, yes, the corresponding device was finally visible to OMV.
- The filesystem was still not mounted automatically after the creation process, but I could mount it using the "play/arrow" button on that page.
- Finally, the application of the configuration changes also succeeded without quota or other errors.
I'm now running rsync to create a new backup. Of course you cannot do this if you still need the data on the affected drive; you would have to copy it somewhere else first. For what it's worth...
I'll keep the thread open as unresolved, as I wasn't able to fix the actual problem.
-
I'm sure it is a configuration inconsistency now: The USB HDD is actually mounted under /srv/dev-by-uuid-5909.... A corresponding fstab entry exists as well. Could be a relic. The filesystem still does not appear in the GUI under Storage / Filesystems, though. I also cannot create a shared folder referencing the filesystem. For OMV it is invisible.
When I unmount the filesystem manually and try to mount it via the GUI, I get an error that external quotas can no longer be used on ext4 filesystems. This is driving me mad!
Anybody who can help with this?
-
I'm experiencing a weird problem with an external USB HDD, which I use for backups:
I realized some days ago that the backup tasks had broken down. OMV flagged the backup filesystem on my USB HDD as "missing". I detached the USB HDD from the server and reattached it. It apparently had issues: at least, I was not able to mount it, neither through OMV nor manually via the CLI. Using fsck I was able to repair the filesystem; at least I could mount it manually again afterwards. Scanning through the directories, the data looked OK. At this stage I unmounted the drive again and tried to reintegrate it into the OMV environment. I started by removing all references to the old mount point, in rsync and in the SMB shares. I finally removed the old mount point itself and applied all changes. When I look at the Disks page I see the device (/dev/sdj) as "unmounted". If I go to Filesystems, I select /dev/sdj1 and use the "play" button to mount an existing filesystem. A window pops up asking me which filesystem to mount. However, the drop-down list to select the (existing) filesystem is empty.
I assume that the HDD crashed at some stage. As I can mount it manually, I don't think it is broken though. I'm not sure whether this an inconsistency in the configuration. In any case, any ideas to mount the HDD or to debug the problem further, will be appreciated.
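For reference, these are the kinds of non-destructive checks involved here (device names as above; blkid in particular shows whether a filesystem signature with a UUID is still visible to the system):

```shell
lsblk -f /dev/sdj        # kernel's view: partitions, fstype, UUID
sudo blkid /dev/sdj1     # filesystem signature as blkid sees it
sudo fsck -n /dev/sdj1   # read-only check, makes no repairs
sudo mount /dev/sdj1 /mnt && ls /mnt && sudo umount /mnt
```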
-
Thanks, guys. I got it. I saw from the output that the quota check failed on exactly one drive. It said that new quota files could not be created. I compared the drive with the others and found ".new" versions of the quota user and group files on the critical one. I simply moved them out of the drive's root folder. Afterwards, the quota check succeeded, and I was also able to create a new mergerfs pool again. Not sure how this happened; maybe some corruption when the original pool failed. Anyway, it seems all is fine now. Again, many thanks for your help!!!
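Concretely, the workaround was along these lines (the mount point and the backup location are placeholders):

```shell
cd /srv/dev-disk-by-uuid-XXXX   # root of the affected drive (placeholder)
ls aquota.*                     # shows the stale aquota.user.new / aquota.group.new files
sudo mkdir -p /root/quota-backup
# Move them out of the way rather than deleting them, in case they are needed later:
sudo mv aquota.user.new aquota.group.new /root/quota-backup/
```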
-
Yes. Output attached...
-
Thanks both, for (still) trying to help me!!!
ryecoaaron: The output of your command just delivers the example commentary, but no content, i.e.
Code
<filesystem>
  <!--
  <quota>
    <uuid>xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx</uuid>
    <fsuuid>xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx|xxxx-xxxx</fsuuid>
    <usrquota>
      <name>xxx</name>
      <bsoftlimit>0</bsoftlimit>
      <bhardlimit>xxx</bhardlimit>
      <isoftlimit>0</isoftlimit>
      <ihardlimit>0</ihardlimit>
    </usrquota>
    <usrquota>
      ...
    </usrquota>
    <grpquota>
      <name>xxx</name>
      <bsoftlimit>0</bsoftlimit>
      <bhardlimit>xxx</bhardlimit>
      <isoftlimit>0</isoftlimit>
      <ihardlimit>0</ihardlimit>
    </grpquota>
    <grpquota>
      ...
    </grpquota>
  </quota>
  <quota>
    ...
  </quota>
  -->
</filesystem>
Soma: You are right, I don't need quotas on my system. The only other occurrences of the string "quota" in the config.xml file sit in the fstab mount options of several drives. Here is one example from the output of "grep -i -e 'quota' config.xml":
Code
<opts>defaults,nofail,user_xattr,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl</opts>
Are you suggesting to remove the usrjquota=aquota.user,grpjquota=aquota.group entries from each of them?
-
Thanks so much, ryecoaaron! This worked. I had a relic filesystem and a relic shared folder which I could remove via editing the config.xml, too. It seems my system is consistent again. At least I can access the Filesystems and the Shared Folder tabs again.
Something's still wrong though: I've reinstalled the mergerfs plugin via the web-UI / Extensions, and tried to reestablish the merged pool with the remaining, intact drives. When I apply the configuration change, I still get the "quota" error message from above.
I've checked each and every disk in that pool via the "Filesystems" tab. When I click on "Quota" the result is always the same: I get the list of groups and users with access to the disk, "used capacity" shows as "0 B" and quota as "0 MiB", too. Again, I cannot remember setting any quotas. I also cannot remember when those drives were formatted; I think it was under OMV5, maybe under OMV4, but that's less likely. Really sorry for having to come back to you again, but any idea how to go on from here?
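In case it helps others, the quota state can also be inspected from the CLI, independent of the GUI (the mount point is a placeholder):

```shell
# Per-user/group usage and limits on one filesystem:
sudo repquota -ug /srv/dev-disk-by-uuid-XXXX
# Which mounts carry journaled-quota options at all:
grep -i 'jquota' /etc/fstab
```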
-
Thanks, and sorry, ryecoaaron. I waited two days and thought that you had run out of ideas, as the error messages do not really make sense.
I've purged openmediavault-mergerfs, as you suggested, and rebooted my server. The "Mergerfs" tab is gone. I still cannot access the "Filesystems" and "Shared Folders" tabs. They now throw the following error message:
Code
No file system backend exists for 'fuse.mergerfs'.
OMV\Exception: No file system backend exists for 'fuse.mergerfs'. in /usr/share/openmediavault/engined/rpc/filesystemmgmt.inc:467
Stack trace:
#0 [internal function]: Engined\Rpc\OMVRpcServiceFileSystemMgmt->getList(Array, Array)
#1 /usr/share/php/openmediavault/rpc/serviceabstract.inc(123): call_user_func_array(Array, Array)
#2 /usr/share/php/openmediavault/rpc/serviceabstract.inc(149): OMV\Rpc\ServiceAbstract->callMethod('getList', Array, Array)
#3 /usr/share/php/openmediavault/rpc/serviceabstract.inc(588): OMV\Rpc\ServiceAbstract->OMV\Rpc\{closure}('/tmp/bgstatushp...', '/tmp/bgoutputge...')
#4 /usr/share/php/openmediavault/rpc/serviceabstract.inc(159): OMV\Rpc\ServiceAbstract->execBgProc(Object(Closure))
#5 /usr/share/openmediavault/engined/rpc/filesystemmgmt.inc(519): OMV\Rpc\ServiceAbstract->callMethodBg('getList', Array, Array)
#6 [internal function]: Engined\Rpc\OMVRpcServiceFileSystemMgmt->getListBg(Array, Array)
#7 /usr/share/php/openmediavault/rpc/serviceabstract.inc(123): call_user_func_array(Array, Array)
#8 /usr/share/php/openmediavault/rpc/rpc.inc(86): OMV\Rpc\ServiceAbstract->callMethod('getListBg', Array, Array)
#9 /usr/sbin/omv-engined(537): OMV\Rpc\Rpc::call('FileSystemMgmt', 'getListBg', Array, Array, 1)
#10 {main}
-
I'm still stuck. Is there at least a way to nuke the whole mergerfs configuration from the CLI and set it up via the web-UI from scratch?
votdev Do you have any idea maybe?
-
Definitely not! But I cannot double-check under "Filesystems", as loading that page fails, too (cf. bullet 2 in the original post)...