Posts by phrogpilot73

    OK, I went through his video twice and figured out how to adapt what he did to what we need to do on OMV to get it running. The assumptions are that you have Portainer with your Host Name/Local IP set up, that you don't have git installed, that you are running letsencrypt, and that you have a subdomain from DuckDNS.


    All commands are executed as root in an SSH terminal. Don't use a union pool; address a disk specifically. In this case I used my 4th disk (it had the most free space when I installed). Replace "/srv/dev-disk-by-label-Disk4" with whatever disk you use.


    1. Create Jitsi directories:

    Code
    mkdir -p /srv/dev-disk-by-label-Disk4/config/jitsi/jitsi-meet-cfg/{web/letsencrypt,transcripts,prosody,jicofo,jvb,jigasi,jibri}
    mkdir -p /srv/dev-disk-by-label-Disk4/config/jitsi/github && cd /srv/dev-disk-by-label-Disk4/config/jitsi/github


    2. Pull what you need from GitHub:

    Code
    apt-get install git
    git clone https://github.com/jitsi/docker-jitsi-meet && cd docker-jitsi-meet


    3. Edit the env.example file:

    Code
    nano env.example


    4. Change ONLY the following settings in env.example (do not put any passwords at the beginning of the file). In some cases, all you have to do is uncomment the line (remove the # symbol). Don't fill out any of the letsencrypt settings in env.example. The commented lines and the values I set them to are below:
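    For illustration only, the kinds of lines involved look like this (the variable names are from the upstream env.example; the values here are placeholders, not necessarily what you want):

    ```
    CONFIG=/srv/dev-disk-by-label-Disk4/config/jitsi/jitsi-meet-cfg
    TZ=America/New_York
    PUBLIC_URL=https://yoursubdomain.duckdns.org
    DOCKER_HOST_ADDRESS=192.168.1.100
    ENABLE_AUTH=1
    ENABLE_GUESTS=1
    AUTH_TYPE=internal
    ```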

    Hit "Ctrl-O" to save, Enter to confirm the file name, then "Ctrl-X" to exit.


    5. Copy the env.example file to .env:

    Code
    cp env.example .env


    6. Generate passwords that the Docker containers will use between themselves:

    Code
    ./gen-passwords.sh


    7. Run the docker compose file to set up the stack:

    Code
    docker-compose up -d

    If you get an error, it may be that you have a different version of docker-compose (I'm on OMV 4.X, so I did). To fix that, edit the compose file (docker-compose.yml) and change the "version" at the top of the file to '2'.
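    If you'd rather not open an editor, a one-line sed can make that change. Here's a sketch demonstrated on a throwaway file; run the same sed against docker-compose.yml in the docker-jitsi-meet directory instead:

    ```shell
    # The real command would be:
    #   sed -i "s/^version: .*/version: '2'/" docker-compose.yml
    # Demonstrated on a scratch file so nothing important gets touched:
    f=$(mktemp)
    printf "version: '3'\n" > "$f"
    sed -i "s/^version: .*/version: '2'/" "$f"
    result=$(cat "$f")
    echo "$result"
    rm -f "$f"
    ```

    Note the sed -i syntax above is GNU sed, which is what Debian/OMV ships.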


    8. Once docker-compose is done, log into Portainer. Join all the Jitsi containers to the network you use with letsencrypt (and remove them from their existing networks).


    9. Rename all the jitsi containers to the following:

    • Rename docker-jitsi-meet_jicofo_1 to focus.meet.jitsi
    • Rename docker-jitsi-meet_jvb_1 to video.meet.jitsi
    • Rename docker-jitsi-meet_web_1 to meet.jitsi
    • Rename docker-jitsi-meet_prosody_1 to xmpp.meet.jitsi

    10. Stop the letsencrypt container.


    11. Edit/copy the jitsimeet.subdomain.conf file for Jitsi in your letsencrypt proxy-confs folder. Here's the code if you need to create it from scratch. The only thing you should have to change is the "server_name meet.*" line: change meet.* to whatever your DuckDNS subdomain is.
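    If you need a starting point, here is a sketch based on the standard linuxserver.io subdomain conf template (the upstream container name meet.jitsi matches the rename in step 9; check the ssl.conf/proxy.conf includes actually exist in your letsencrypt container before relying on this):

    ```
    server {
        listen 443 ssl;
        server_name meet.*;

        include /config/nginx/ssl.conf;
        client_max_body_size 0;

        location / {
            include /config/nginx/proxy.conf;
            resolver 127.0.0.11 valid=30s;
            set $upstream_app meet.jitsi;
            set $upstream_port 80;
            set $upstream_proto http;
            proxy_pass $upstream_proto://$upstream_app:$upstream_port;
        }
    }
    ```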

    12. Start the letsencrypt container.


    13. You have to set up the host username/password, since you enabled authentication. Log into the console for xmpp.meet.jitsi via Portainer. The command he used in the video continually spat out an error for me, so I did a little digging and found this:

    Code
    prosodyctl --config /config/prosody.cfg.lua adduser username@meet.jitsi

    Replace "username" with whatever you want your username to be; after you hit Enter it will automatically prompt you for a password.


    14. Restart xmpp.meet.jitsi container.


    Go to whatever subdomain you have set up (using HTTPS), and watch your jitsi go to town!


    I'm planning on doing the same thing, it's on my to-do list today. I did watch this guy's video last night:


    Even though he's on Unraid, he's running it in a docker with docker-compose, and aside from his interface/directory structure, it seems like it should be pretty much the same as what we'd need to do to get it running.

    OK, in case someone else runs into this problem: I finally got everything up and scanning correctly with the help of several webpages and the Arch Wiki. Looking at the syslog, it looks like ClamAV was being blocked by AppArmor (which makes sense, because I had just installed updates on Sunday). The Arch Wiki had the fix for that.


    SSH in, and either use su, sudo, or log in as root and type:

    Code
    aa-complain clamd
    nano /etc/clamav/clamd.conf

    The first command sets AppArmor to complain rather than deny for clamd. The second opens your clamd configuration file so you can change the user it runs as. I used nano, but you can use whatever editor you prefer. Scroll down to "User" and change clamav to root. Finally, type:

    Code
    /etc/init.d/clamav-daemon restart

    This restarts the clamav daemon, and everything (at least in my use case thus far) works as advertised.

    So, as it turns out, some update to ClamAV may have changed something, because now it will not scan any folders. I've tried adding the clamav user to the users group and adding AllowSupplementaryGroups true to clamd.conf, and now I'm running out of ideas.


    SSHing into my OMV machine and running clamdscan --fdpass works; clamdscan without --fdpass fails regardless of which groups the clamav user is in. I set up ClamAV with the plugin, and all my scheduled scans are in the plugin. Is there a way to change those scheduled scans so that --fdpass gets passed?
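    One workaround sketch, if the plugin's scheduled jobs can't be changed: a separate cron job that calls clamdscan with --fdpass directly. The path, schedule, and log location below are examples; --fdpass, --multiscan, and --log are standard clamdscan flags.

    ```
    # /etc/cron.d/clamdscan-fdpass (example): scan Sundays at 03:00
    0 3 * * 0 root /usr/bin/clamdscan --fdpass --multiscan --log=/var/log/clamav/scheduled-scan.log /srv/dev-disk-by-label-Disk4
    ```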

    I have the ClamAV plugin installed on OMV 4.whateveriscurrent. I have scheduled scans for a different folder each night. Only one folder has a problem: I get a "Permission Denied" response. That folder is Nextcloud, which is running in docker (with my UID/GID).


    The other folders that scan fine have the following permissions:
    rwxrwsr-x
    Owner: root
    Group: users


    Nextcloud folder has the following permissions:
    rwxrwx---
    Owner: My user
    Group: users


    I'd like to be able to scan this folder once a week, since my father-in-law is dumping all his files off his 32 (seriously) thumb drives. He has not really been security conscious in the past, and I want to make sure he's not about to infect my server.


    Is there a way to set permissions/add clamav to a group that will allow it to scan this folder WITHOUT breaking my Nextcloud?
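    One approach that might work without touching the folder's owner, group, or mode is a POSIX ACL granting just the clamav user read access (this assumes the filesystem has ACLs enabled and that scans run as the clamav user). A sketch, demonstrated on a scratch directory so it runs anywhere; on the real system you would target the Nextcloud folder with u:clamav:rX:

    ```shell
    # Demo on a throwaway directory with the same rwxrwx--- mode as above.
    # "root" stands in for the clamav user so the demo is self-contained;
    # on the real folder use: setfacl -R -m u:clamav:rX <nextcloud-folder>
    d=$(mktemp -d)
    chmod 770 "$d"
    setfacl -R -m u:root:rX "$d"        # extra read/traverse for one user only
    setfacl -R -d -m u:root:rX "$d"     # default ACL so new files inherit it
    aclcheck=$(getfacl -p "$d" | grep "^user:root:r-x")
    echo "$aclcheck"
    rm -rf "$d"
    ```

    The appeal of the ACL route is that the existing permission bits stay exactly as they are, so Nextcloud never notices the change.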

    Yesterday I managed to update the Plex docker. I was looking around and tried copying the URL in "Environment variables" from PLEX_DOWNLOAD, and found out that the URL was outdated. I entered the right one, "https://www.plex.tv/media-server-downloads/", and restarted the docker. Now I went from 1.14.X to 1.18.X. As far as I can see, the old image contains the outdated URL, and therefore I couldn't reach the download section.

    It looks like that URL is outdated as well. After much digging, the PLEX_DOWNLOAD environment variable URL should be: https://downloads.plex.tv/plex-media-server-new


    Much hair pulling, but my docker is now updating to the latest version of Plex.

    You would know if you were using the proxmox kernel, because you have to install it manually. The backports kernel is the default in OMV. Otherwise, when you log in to the OMV web interface, the kernel version will have pve in it for proxmox, bpo for backports, or neither for the standard kernel.
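    You can also check from the command line: uname -r prints the running kernel version string (the example versions in the comments below are illustrative):

    ```shell
    kernel=$(uname -r)
    echo "$kernel"
    # 4.19.0-0.bpo.2-amd64  -> bpo = backports
    # 5.0.15-1-pve          -> pve = proxmox
    # 4.9.0-9-amd64         -> neither = standard Debian kernel
    ```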

    So if I'm perfectly happy with my OMV box running on OMV 4 for a very long time (don't have any interest in upgrading yet), is there any downside to moving over to the standard kernel (I checked, I am indeed on the backports - which I thought I was)? If I were to do that, how do I do it?

    The backports kernel will stop receiving updates as well (once stretch goes on extended support). If someone doesn't want to update, they really should be using the standard kernel.

    How can you tell if you're using the backports, standard, or proxmox kernel?


    Sent from my BBF100-2 using Tapatalk

    If you can ping 8.8.8.8 (outside your network), but not google.com, it sounds like a DNS issue. If your network was broken, you wouldn't get a positive result from 8.8.8.8.


    Try 1.1.1.1 (Primary) and 1.0.0.1 (Secondary) for DNS. Those are cloudflare's servers and I rarely have problems with them.



    Unfortunately, if the plugin did this, it might be changed by adjusting the Network settings in OMV. I need to test this.

    That might explain why I had to edit the config file for OpenVPN. I set a static IP address in my router, and this is what my network settings are on my OMV. I don't think I missed any steps in installation, but that is distinctly possible.


    Capture.JPG

    I may backtrack on this and make it a plugin. After looking at docker options and how they work, I don't really like that method. We'll see. It won't happen on OMV 4.x though. Just OMV 5.x.

    That would be absolutely awesome. I'm planning on fiddling with the docker this weekend, but what I like about the OpenVPN plugin as it currently stands is the GUI, which as near as I can tell essentially translates into what the CLI needs. I would prefer that over dealing with a docker and a CLI (not that I'm afraid of it, but my Linux/Unix is rusty, to say the least).


    If I had one suggestion, should you make it a plugin: give the GUI the ability to set the IP address of the default network gateway. I was trying to route all traffic through my VPN and couldn't figure it out until I edited the config file (the default gateway was 192.168.1.0 if I remember right, and mine is 192.168.1.1).


    Either way, I'll make it work - thanks for all the semi-thankless work you do!

    No, I wasn't joking. While it hasn't been mainlined to the kernel yet, it has become extremely popular because its code base is much, much smaller and performance is better. Everything about its setup is much simpler as well.

    Cool, thanks! I'll have to look into it as the replacement for OpenVPN, so it's in place before OMV 5 comes out. My only complaint with OpenVPN was the performance; it wasn't as fast as I would have liked, but I liked how simple it was to set up (I used the plugin, not the docker, and missed the fact that it wasn't being ported over to OMV 5). If WireGuard is easier, it sounds like a win/win.


    Is it going to end up as a plugin for OMV 5, or a docker only?

    It's both on Linux.
    I first used the OMV-Extras plugin, and it created a database on an external SSD.
    Now I'm using Plex with docker and would prefer to just let it use the same database on the external SSD.

    This was the link that I found on the Plex support forum. Basically, you move everything over to the new data folders and then copy the library database info over. Windows has registry keys associated with it and that's why moving from Windows to Linux is a PITA.


    https://support.plex.tv/articl…p-plex-media-server-data/
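    Linux to Linux, the move boils down to copying the whole "Plex Media Server" folder while Plex is stopped (on a stock Debian-family install that lives under /var/lib/plexmediaserver/Library/Application Support/; with a docker it's whatever you bind-mount to /config). A sketch on scratch directories, so the demo runs anywhere:

    ```shell
    # Stand-in directories; substitute the real old/new locations, and stop
    # Plex before copying so the database isn't written mid-copy.
    old=$(mktemp -d); new=$(mktemp -d)
    mkdir -p "$old/Plex Media Server/Plug-in Support/Databases"
    touch "$old/Plex Media Server/Plug-in Support/Databases/com.plexapp.plugins.library.db"
    cp -a "$old/Plex Media Server" "$new/"     # -a preserves perms and times
    copied=$(ls "$new/Plex Media Server/Plug-in Support/Databases")
    echo "$copied"
    rm -rf "$old" "$new"
    ```

    After the copy, make sure the new location is owned by whatever user Plex runs as, or the server won't be able to open the database.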



    Is your existing Plex database on Windows or Linux? There are instructions on the Plex site (or forums, I don't remember which) for copying from Linux to Linux and Windows to Windows; they also have a post on going from Windows to Linux. I tried it (going from Windows to Linux) and it turned into such a giant PITA that I ended up rebuilding my database from scratch.


    And I have a huge library, with 600+ movies and 30+ TV Series, plus my music.

