Posts by ParadingLunatic

    For me, I have two jobs scheduled in the GUI: one enabled, one disabled.

    Both jobs exist in openmediavault-userdefined, and both are executing as if they're set to enabled, even though one is disabled.


    The GUI doesn't get the enabled or disabled status from openmediavault-userdefined; that's just where it stores the job. It gets the info for the GUI from config.xml.


    Looking at config.xml, the enable flag for the disabled job is <enable>0</enable>...which says it should be disabled.

    In config.xml, the enabled job has <enable>1</enable>.


    So somewhere in OMV's scripts, when it updates the cron jobs, it appears to be ignoring whether or not the job is supposed to be enabled.
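
    For anyone who wants to double-check their own box, something like this should print the enable flag next to each job's command. The XPath is my guess at where OMV 5 keeps the cron jobs in config.xml, so adjust if yours differs:
    sudo xmlstarlet sel -t -m "/config/system/crontab/job" -v "concat(enable,' ',command)" -n /etc/openmediavault/config.xml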

    Came here to say "Me too." I only noticed this recently because I added a cron job this week that I would normally keep disabled and only run manually. I originally thought that enabling it and then disabling it caused some odd issue, so I deleted it entirely, then recreated it and left it disabled when I saved it. It's still running.


    Running OMV 5.5.19-1


    The output I'm getting for ls -al /etc/cron.d/openmediavault-userdefined:

    -rw-r--r-- 1 root root 558 Dec 25 07:22 /etc/cron.d/openmediavault-userdefined
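
    And for comparison, dumping just the non-comment lines shows that both jobs made it into the file, disabled or not:
    grep -v '^#' /etc/cron.d/openmediavault-userdefined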

    Well...I'm now at a point where I really don't know. I've pointed Docker to a new location on my main storage disk (that is NOT a shared folder) and let it create the folder itself. Tried pulling and creating a new instance of FreshRSS (all of this while SSHed in as root) just to test it out using docker-compose...failed. Loads of "permission denied" errors while it was trying to chmod during setup. Checked to see if dockerd is running as root, and it is. Decided to change it back to /var/lib/docker and try that as well. Same problem. So now I have no clue what the issue is or why I'm having it.
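
    For reference, this is roughly how I pointed dockerd at the new location; the path here is just an example. It goes in /etc/docker/daemon.json, followed by a daemon restart:
    {
      "data-root": "/srv/mydisk/docker"
    }
    sudo systemctl restart docker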

    Ok, so I THINK the issue might be a umask issue, but I'm not sure how/where. I checked the shared folder where my Docker data is stored (we'll call it /dockershare/docker/...): at /dockershare/docker/containers/xxxxxxxxxxxxxx...every single hosts and resolv.conf file was 640 with owner root:root instead of 644 and root:root.
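
    A quick one-liner to spot the odd ones out under that tree:
    find /dockershare/docker/containers \( -name hosts -o -name resolv.conf \) -exec ls -l {} +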


    I really would love to get this fixed as it'll allow me to finally move some of my Docker containers off of the VM and onto OMV...as well as get rid of future headaches when I roll out future containers.

    I'm not sure if this is an OMV issue, a Docker issue, a Docker config issue, an image issue, or a local filesystem rights issue. It seems like some of my Docker containers that run their processes as a user other than root have problems with DNS. For example, my Nextcloud container has been unable to check for updates. The process runs as www-data. When I run "docker exec -it -u www-data nextcloud /bin/bash" and then run "curl http://www.google.com", it fails to resolve the URL. If I connect to the container with "docker exec -it nextcloud /bin/bash" and run "curl http://www.google.com", it returns data as expected.


    I checked the rights on /etc/resolv.conf and it was set to rw for root only, with no other permissions (600). After setting the permissions to 644, suddenly everything worked.
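
    In other words, roughly this, using my container name from above:
    docker exec -u root nextcloud chmod 644 /etc/resolv.conf
    docker exec -u www-data nextcloud curl -I http://www.google.com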


    So my question is...is this issue caused by the container, by docker, by the docker config, or by the underlying FS permissions?


    I have Docker using a shared folder instead of the default /var/lib/docker location. When I created the Nextcloud instance, I pointed it to the location I wanted and it created the folder for me, so the rights on the "config" folder should be correct. Although the config folder is mounted to /var/www/html within the container anyway.
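
    Something like this, if it matters; the host path here is made up, mine points at the shared folder:
    docker run -d --name nextcloud -v /dockershare/appdata/nextcloud/config:/var/www/html nextcloud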


    I have docker running on an Ubuntu VM (completely separate from OMV), and I've never once run into this issue on it. Granted it's also using the default /var/lib/docker. When I've run into this issue on my OMV system, I would even test it on my Ubuntu system and wouldn't have the problem there.

    Hi all.

    I tried to install Home Assistant on OMV5 through Portainer, which I successfully did. But after I create my Home Assistant account and log into Home Assistant, I do not see the add-ons or Supervisor icons which many tutorials use to help set up Home Assistant - have I missed something?

    This probably has something to do with the recent announcement (which has since been reversed) that Home Assistant will no longer support generic Linux installs, or other unsupported installs. If you only installed the Home Assistant container, you're probably missing the Supervisor container, which is what gives you all of the add-ons. If that's what you're looking for, you might have a harder time getting it running (and keeping it running) soon.

    Yup, actually I have a user called dockeruser which is given rights via the docker group. All of my containers are created and run using this account and all seem to work fine. The only difference I notice is that this particular container does a chown 1000:1000 and then switches to user "node" within the container during runtime. I believe it's the only container I'm running that does something of the sort. For most of them you pass a UID/GID as an environment variable, although I'll still do the docker run as the dockeruser account.
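
    For what it's worth, the setup is basically this; the image name is a placeholder, and PUID/PGID is the environment-variable convention I was referring to (some images use different variable names):
    sudo usermod -aG docker dockeruser
    docker run -d --name someapp -e PUID=1000 -e PGID=1000 someimage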

    I have numerous docker containers running all without issues, except for one which I had to do a workaround to get working.


    It appears all of my Docker containers run the apps inside them as root.


    The one container I'm having a problem with runs its application (a node instance) as a non-root user. The problem is it doesn't have any network access. I tried to troubleshoot this by attaching to a shell session within the container and trying an nslookup, which failed. Tried to run a ping, but ping required root access (which was odd, but whatever). I then closed the session, reattached to the shell as root, and had no issues with nslookup or ping.
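
    The test was basically this (container name is a placeholder):
    docker exec -u node mycontainer nslookup www.google.com    # fails
    docker exec mycontainer nslookup www.google.com            # works as root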


    I installed Docker on an Ubuntu VM, pulled the same image, copied the folder with all of the config files over to it, mapped the volumes, etc., and ran the container; it had no problems at all running as the image was written.


    Back on OMV, I deleted the image, and using Portainer I copied the Dockerfile information but left out the following lines:
    RUN chown 1000:1000 -R /app
    USER node


    Ran the image, pointed the volumes the same as before, and it worked just fine.


    Do the same, but re-add those lines, and no network connectivity.


    Is there any specific reason why the Docker install on OMV denies network access within a container when the program in the container runs as a user other than root? I'm pretty sure I tried running the container as a different user and had the same issues.
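
    If anyone wants to check the same thing on their setup, a quick test is whether the non-root user can even read the container's DNS files (container name is a placeholder):
    docker exec mycontainer ls -l /etc/resolv.conf /etc/hosts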

    It is really tough when I don't write the code and have no good way to test. I made some changes. Can you test them? If yes:
    sudo wget -O /usr/sbin/omv-snapraid-diff https://raw.githubusercontent.com/OpenMediaVault-Plugin-Developers/openmediavault-snapraid/master/usr/sbin/omv-snapraid-diff
    sudo chmod +x /usr/sbin/omv-snapraid-diff

    Looks good! Diff finished without errors.

    I've been having an issue with SnapRAID ever since the 3.7.6 update.


    I'm getting a syntax error during the cron job that runs nightly.



    /usr/sbin/omv-snapraid-diff: line 490: syntax error near unexpected token `else'
    /usr/sbin/omv-snapraid-diff: line 490: `else'
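
    In case it's useful, you can reproduce the error without waiting for the nightly cron run by syntax-checking the script:
    bash -n /usr/sbin/omv-snapraid-diff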

    Actually, I am running Kodi 18.2 on a Raspberry Pi 3 B+, and I'm running a NAS with OMV on kernel 4.19.0-0.bpo.2-amd64. My media is also stored on a pool of EXT4 filesystems using the Union Filesystem with SnapRAID. The issue started when I had to change a disk in the SnapRAID array.
    I had the same problems on Kodi 18.0.
    Playback stops after approx. 1 hour; when I restart the playback it stops after a few seconds. After restarting the NAS the problem is gone (for a while).
    No problems when using MiniDLNA on the same NAS. I only have this issue when using NFS.
    Maybe I should delete everything and reinstall OMV :-(

    Not sure how much that'll help. I just recently upgraded from OMV3 to OMV4 doing a fresh install.


    What options do you have set for your Union Filesystem?


    I have the following set "defaults,allow_other,direct_io,use_ino,func.getattr=newest,noforget"


    use_ino and noforget are supposed to help with NFS, especially with Kodi. There's nothing in the mergerfs FAQ about Kodi with NFS and using direct_io, but there is a comment in there about direct_io helping NFS write speed, so I have it set since I do write files via NFS. Kodi pretty much only reads, so that shouldn't be an issue for you.


    Otherwise, I don't really know what else you can try except SMB. Or, if you really must use NFS, perhaps edit /etc/fstab on your Raspberry Pi to mount the share NFSv4-style (don't use xxx.xxx.xxx.xxx:/exports/folder...just use xxx.xxx.xxx.xxx:/folder). This won't work, though, if you're browsing NFS through the Kodi GUI, as the library Kodi uses doesn't support NFSv4.
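
    A minimal sketch of the fstab entry I mean; the IP, export name, and mount point are placeholders:
    xxx.xxx.xxx.xxx:/folder  /mnt/media  nfs4  defaults  0  0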

    I still have the issue with playback stopping after approx. 60 minutes.
    I am a beginner with OMV and Linux.
    How can I remove the fsid=0 entries?
    THX

    I'm honestly not 100% sure why I'm not experiencing the problem anymore. Depending on what hardware you're using for Kodi, you might be able to try using NFSv4. You won't be able to do this with a Fire Stick, though. You could with a Raspberry Pi (once again, depending on what you're running; I use OSMC, which I believe provides a stripped-down version of Debian).


    To do this you would need to edit /etc/fstab to add the NFS mount.
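
    As for the fsid=0 entries, they live in /etc/exports on the OMV box. On my system the generated NFSv4 pseudo-root line looked roughly like this (network is a placeholder), and the fsid=0 option is the part I removed. Keep in mind OMV regenerates this file, so hand edits can get overwritten when you apply changes in the GUI:
    /export xxx.xxx.xxx.xxx/24(ro,fsid=0,root_squash,no_subtree_check,hide)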


    I may be able to help you, I may not. I have a pretty complex setup. For my primary systems I'm using Emby (sort of like Plex, but different) in a Docker container on OMV, which pretty much manages my media (adds metadata, downloads subtitles, fanart, etc.), and I use the Emby plugin for Kodi on some of my Kodi devices. In the Kodi plugin I have it configured to play video through the plugin, which basically streams it over http(s) on a specific port Emby uses. I do have one Kodi device, though, that doesn't use Emby and just plays directly via NFS. This is the one I WAS having problems with.


    My media is stored on a pool of EXT4 filesystems using the Union Filesystem plugin. This makes multiple disks look like one disk. It has its pros and cons and is further complicating my issues. My media folder is then exported via NFS and SMB.


    What version of OMV are you running?
    What type of filesystem is the media you're exporting via NFS stored on?
    Also, what are you using for Kodi?

    Just thought I'd straighten a few things out.


    The error messages I was getting on the console of the one client ("NFS: Server xxx.xxx.xxx.xxx error: fileid changed") appeared only when accessing via NFSv4. I switched the client to v3 for 24 hours and didn't get those errors. I've just switched back to NFSv4 recently, after changing my exports so the NFSv4 exports don't have the fsid=0 entry. I was reading that could cause the issue I was seeing, and that NFSv4 does not need the fsid= value. I can edit this post or comment later if I notice that I no longer receive the errors. Either way, I doubt any of this had to do with my issues with Kodi.


    I no longer appear to be having issues with NFSv3 using Kodi. I'm not entirely sure if it was a CPU issue (which it could have been, since at the time my processor was just about maxed out due to MergerFS and Crashplan hammering it), or switching the exports to no_subtree_check, or adding noforget to my MergerFS config. I have since moved to an entirely different motherboard with significantly more RAM and CPU power...simply because it fell in my lap.


    All I can say is, after going to OMV4, things seem to take a little more tinkering and tweaking to iron out issues, but I think this is all compounded by my configuration (mainly MergerFS/SnapRAID, as there seem to be quite a few issues stemming from that in OMV4).


    EDIT:
    Confirmed: I'm no longer receiving the "NFS: Server xxx.xxx.xxx.xxx error: fileid changed" errors on the one client that was using NFSv4, after removing the fsid=0 entries in the exports file and switching back to NFSv4.

    See, that's the strange thing. Looking at the other threads, it looks as if others report that all shared folders from the mergerfs pool fail to appear due to the timing. My problem is, it's just the one. All of my shared folders are from my mergerfs pool, but it's just the one shared folder that doesn't appear. I guess it's still possible it's a timing issue. I'll do some more digging to see if that's it.

    I'm not entirely sure why this is happening but it seems to only happen with the "Homes" shared folder, which I have enabled for user homes.


    Rundown....


    "Homes" shared folder created and pointing to the "/srv/XXXXXXX/homes" folder on the filesystem (it's a mergerFS filesystem so the ID is long).
    Plenty of other shared folders created under this same /srv/XXXXXXX/ location as well
    Under users > settings tab > enabled homes folder and point it to that Homes shared folder.
    SMB/CIFS under Home Directories, I have it enabled


    Every single time I reboot the server, I lose the Homes shared folder, which of course breaks access to users' home folders. If I try to browse /sharedfolders/Homes, it's completely empty. All of the other shared folders are fine.


    To fix it, I go to Shared Folders and edit the Homes shared folder; everything is correct. I browse anyway to "point" it back to the location, even though it's the same location as when it was originally set. Save, then apply. And magically it's working again.
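
    Next time it breaks, it might be worth checking whether the bind mount actually came up before re-saving. If I have the unit naming right, OMV manages those /sharedfolders bind mounts as systemd mount units:
    findmnt /sharedfolders/Homes
    systemctl status sharedfolders-Homes.mount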


    And.....now that I'm typing all of this, I decided to check my /etc/fstab and there isn't an entry for the Homes shared folder. There's an entry for all BUT that one, even though I set it every single time I reboot. Is there a reason this isn't getting applied to fstab?


    Edit: Nevermind about fstab. I just realized those are my NFS shares.