- Which version of OMV are you running?
- Did you execute it with sudo?
Posts by Agricola
-
-
Maybe SnapRaid does not like a parity drive made up of two pooled drives; I have never seen such a setup. If you have four 8TB drives, only one of them needs to be your parity drive. One parity drive will cover up to four data drives, each of equal or smaller size. You are losing the use of 8TB of data space! Try reconfiguring according to the documentation and then give it a try. Even if you had five data drives, the two parity drives required would not be pooled.
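For reference, a minimal snapraid.conf for the layout described above (one parity drive covering three data drives) might look like the sketch below; the drive labels are made up, so substitute your own:

```
# Hypothetical snapraid.conf: one parity drive, three data drives
parity /srv/dev-disk-by-label-parity1/snapraid.parity

# Content files (keep copies on at least two drives)
content /srv/dev-disk-by-label-disk1/snapraid.content
content /srv/dev-disk-by-label-disk2/snapraid.content

data d1 /srv/dev-disk-by-label-disk1/
data d2 /srv/dev-disk-by-label-disk2/
data d3 /srv/dev-disk-by-label-disk3/
```

Note that each `parity` line points at a single drive; nothing in this file pools two drives into one parity target.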
-
Used gparted on them.
No, I meant did you set physical disk properties and wipe them in the GUI under Storage/Disks?
-
Did you wipe them first in the Disks tab?
-
I'm kind of confused by your description above. Could you please post a screenshot of your UFS page and your SnapRaid/Drives page?
-
Restart the container and look at the container log for clues. Here is my log file after a restart:
koppa Click on the little blue page icon next to your plex container.
-
You might be miles ahead to just enable the symlinks plugin and create simple, short symlinks to stand in for those long file paths. I knew nothing about the plugin before I installed it, and it was pretty much self-explanatory. If you fall back to a previous install (do you back up your OS?) you are prevented from moving forward with updates that provide security and feature advances over time.
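To illustrate the idea, here is a minimal sketch of standing a short symlink in for a long path. The paths are made up for demonstration (a scratch directory instead of a real data drive):

```shell
# Create a deep, unwieldy path and a short symlink that points at it.
base=$(mktemp -d)   # stands in for your data drive mount point
mkdir -p "$base/very/long/deeply/nested/media/folder"

# The short name "$base/media" now resolves to the long path:
ln -s "$base/very/long/deeply/nested/media/folder" "$base/media"
readlink "$base/media"
```

Anything that chokes on the long path can be pointed at the short symlink instead.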
-
It might be helpful to look at the example docker-compose.yml files you drew from. Pay particularly close attention to spaces and trailing "/" in your yml. Looking back over your yml a few posts back, I noticed several things that might be a problem if you haven't corrected them already. There could be more; I didn't look at everything:
- Two slashes. There should only be one.
- The trailing slash after "media" should not be there; compare the line above with /appdata/sonarr:/config
- Again, the indentation matters.
Don't give up!! You'll get it.
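Put together, the fixes listed above would make the volume section look something like this (the host paths here are hypothetical stand-ins, not copied from your yml):

```yaml
    volumes:
      - /srv/dev-disk-by-label-disk1/appdata/sonarr:/config   # single slashes throughout
      - /srv/dev-disk-by-label-disk1/media:/media             # no trailing slash after "media"
```

The two-space indentation steps are what docker-compose uses to tell a key from its children, so a line indented one space too many or too few changes the meaning of the file.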
-
Obviously there is something I am not seeing, even though I deployed Plex from the terminal using docker-compose. When I see there is an available update, I go to Portainer and restart the container, which causes it to update. It works for me every time.
-
Having installed the data I can browse (connecting as anonymous) to the shares.
A closer look at your first post made the above quote pop out at me. This, combined with the question from sirga784 above, makes me think that you need to set up some users and give them various permissions (either under the Users tab or the Shared Folders tab) and then uncheck "guests allowed" in your Samba shares. In Permissions, give one user "read/write", one "read only", and one "no access", and see what you can and cannot do. If you do want guests to be able to access your Samba shares, create a "guest" user with the password "guest" and give him "read only" privileges for the Shared Folders you want him to have access to, and "no access" to the Shared Folders you do not want him to reach.
Even though I always check "guests allowed" (that is how I learned it), I have wondered why, if you had multiple users in the home or your workplace, you would want guests (whatever that means) to be allowed to access your Samba shares. This thread has made me think I want to go back to all of my Samba shares and uncheck "guests allowed." Anybody else "trolling" around want to add your two cents on this? I would like to know what some of you grey beards think about this issue.
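For the curious, once guests are disabled the share definition OMV writes into smb.conf ends up along these lines; the share name and user names below are hypothetical examples, not anyone's actual config:

```
[media]
   path = /srv/dev-disk-by-label-disk1/media
   guest ok = no            ; anonymous connections refused
   read only = no
   valid users = alice bob  ; only these users may connect
   write list = alice       ; alice gets read/write
   read list = bob          ; bob is read only
```

A user left off `valid users` entirely corresponds to the "no access" permission in the GUI.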
-
openmedianer here you go:
Code
---
version: "2"
services:
  nextcloud: # Nextcloud server.
    image: ghcr.io/linuxserver/nextcloud
    container_name: nextcloud
    environment:
      - PUID=1000
      - PGID=100
      - TZ=America/Chicago
    volumes:
      - /srv/dev-disk-by-label-disk4/appdata/nextcloud:/config
      - /srv/dev-disk-by-label-disk4/nextcloud:/data
    depends_on:
      - mariadb
    # ports: # uncomment this and the next line if you want to bypass the proxy
    #   - 450:443
    restart: unless-stopped
  mariadb: # Needed for the Nextcloud database.
    image: ghcr.io/linuxserver/mariadb
    container_name: nextclouddb
    environment:
      - PUID=1000
      - PGID=100
      - MYSQL_ROOT_PASSWORD=xxxxxxxxx
      - TZ=America/Chicago
    volumes:
      - /srv/dev-disk-by-label-disk4/appdata/nextclouddb:/config
    restart: unless-stopped
  duckdns: # This section may not be needed. Info duplicated in swag section.
    image: ghcr.io/linuxserver/duckdns
    container_name: duckdns
    environment:
      - PUID=1000
      - PGID=100
      - TZ=America/Chicago
      - SUBDOMAINS=huey,dewey,louie,donald,daffy # these aren't my real subdomains.
      - TOKEN=xxxxxxxxxxx
    restart: unless-stopped
  swag: # for the reverse proxy. Letsencrypt has been deprecated.
    image: linuxserver/swag
    container_name: swag
    cap_add:
      - NET_ADMIN
    environment:
      - ONLY_SUBDOMAINS=true
      - PUID=1000
      - PGID=100
      - TZ=America/Chicago
      - URL=duckdns.org
      - SUBDOMAINS=huey,dewey,louie,donald,daffy # These aren't my real subdomains.
      - VALIDATION=http
      - EMAIL=xxx@xxx.com
    volumes:
      - /srv/dev-disk-by-label-disk4/appdata/swag:/config
    ports:
      - 444:443
      - 81:80
    restart: unless-stopped
  ubooquity: # An ebook server.
    image: ghcr.io/linuxserver/ubooquity
    container_name: ubooquity
    environment:
      - PUID=1000
      - PGID=100
      - TZ=America/Chicago
      - MAXMEM=1024
    volumes:
      - /srv/dev-disk-by-label-disk4/appdata/ubooquity:/config
      - /srv/e4038090-8952-45cf-ba1a-582c310dc7fd/ubooquity/books:/books
      - /srv/e4038090-8952-45cf-ba1a-582c310dc7fd/ubooquity/comics:/comics
      - /srv/e4038090-8952-45cf-ba1a-582c310dc7fd/ubooquity/files:/files
    ports:
      - 2202:2202
      - 2203:2203
    restart: unless-stopped
  navidrome: # A music server.
    image: deluan/navidrome:latest
    container_name: navidrome
    environment:
      - PUID=1000
      - PGID=100
      - TZ=America/Chicago
    volumes:
      - /srv/dev-disk-by-label-disk4/appdata/navidrome:/data
      - /srv/e4038090-8952-45cf-ba1a-582c310dc7fd/navidrome/music:/music:ro
    ports:
      - 4533:4533
    restart: unless-stopped
  airsonic: # An audiobook (or music) server.
    image: ghcr.io/linuxserver/airsonic
    container_name: airsonic
    environment:
      - PUID=1000
      - PGID=100
      - TZ=America/Chicago
    volumes:
      - /srv/dev-disk-by-label-disk4/appdata/airsonic:/config
      - /srv/e4038090-8952-45cf-ba1a-582c310dc7fd/airsonic/music:/music
      - /srv/e4038090-8952-45cf-ba1a-582c310dc7fd/airsonic/podcasts:/podcasts
      - /srv/e4038090-8952-45cf-ba1a-582c310dc7fd/airsonic/playlists:/playlists
      - /srv/e4038090-8952-45cf-ba1a-582c310dc7fd/airsonic/audiobooks:/audiobooks
    ports:
      - 4040:4040
    restart: unless-stopped
After you deploy, you will have to set up a proxy for each of the services you have in the yml file, using the subdomains you registered in duckdns and in the yml. The various proxy files are found in appdata/swag/nginx/proxy-confs/ with a .sample appended to each file name. Find the file corresponding to each service you are deploying in the yml file. In each of those xxx.subdomain.conf.sample files there is a line that reads, for example, server_name airsonic.*;. That needs to be changed to server_name huey.*;, and so on for each service. Save each proxy file without the .sample extension.
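The copy-rename-edit step for each proxy file can be sketched in the shell. This demo works on a scratch copy of the file so nothing real is touched; on the server the files live under appdata/swag/nginx/proxy-confs/, and "huey" stands in for whichever duckdns subdomain you assigned to the service:

```shell
# Scratch directory standing in for appdata/swag/nginx/proxy-confs/
confs=$(mktemp -d)
printf 'server_name airsonic.*;\n' > "$confs/airsonic.subdomain.conf.sample"

# Drop the .sample extension, then swap in the duckdns subdomain:
cp "$confs/airsonic.subdomain.conf.sample" "$confs/airsonic.subdomain.conf"
sed -i 's/server_name airsonic\.\*;/server_name huey.*;/' "$confs/airsonic.subdomain.conf"

cat "$confs/airsonic.subdomain.conf"   # now reads: server_name huey.*;
```

Repeat the same two steps for each service in the stack, one proxy file per subdomain.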
Obviously other services can be swapped out for the ones I have used here. Look through the proxy files provided in the folder mentioned above to see what is possible. It's probably easier to stay within the linuxserver family of dockers. I was able to include Navidrome because there just happened to be a proxy file included in the samples. If you feel adventurous there is even a generic version of the proxy files available.
- macom 's [How-To] on Nextcloud, along with the accompanying Q&A, is a foundational must-read on the details of correctly implementing this docker-compose.yml file. If you can deploy Nextcloud using this [How-To], you can deploy ANYTHING using docker-compose (or Stacks). Thanks macom .
- I would also like to give a hat tip to TechnoDadLife for his two Nextcloud videos [1] & [2] that started me wondering (a long time ago) why one would list five subdomains claimed with duckdns, and then only use one when setting up Nextcloud. Finally, a couple of months ago I patched together the above yml file and amazingly ... it worked! Thanks TDL.
-
I forget if it’s in shared folders tab or SMB tab but did you check “recursive” when creating the share?
-
openmedianer i can do that later today. I am away from the computer right now. I’m sorry but I use duckdns, so you will have to adjust accordingly.
-
I use one swag (letsencrypt) certificate (Zertifikat) for four separate services, from one docker-compose.
-
Here's a link to a tutorial that shows you how to back up using dd on Mac or Linux, and Win32 Disk Imager on Windows.
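In case the link goes stale, the dd part boils down to one command. The sketch below demonstrates it on a scratch file instead of a real disk; on a real system you would replace the source with your boot device (e.g. /dev/sdX, found via lsblk) and double-check it first, since dd overwrites its target without asking:

```shell
# Stand-ins for the OS drive and the backup image file:
src=$(mktemp)
img=$(mktemp)
printf 'bootloader+os' > "$src"

# Byte-for-byte copy of the source into the image.
dd if="$src" of="$img" bs=4M conv=fsync 2>/dev/null

# Verify the image matches the source:
cmp -s "$src" "$img" && echo "image matches source"
```

Restoring is the same command with if= and of= swapped.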
-
koppa take a look at these two guides by macom :
How to install Plex with docker-compose
Use Docker-compose in Portainer
I don't have the exact titles, but the links are good. My guess is you are building Plex in the Containers tab of Portainer. There's nothing wrong with that, but if you learn how to create the container in the Stacks tab of Portainer (better yet, using docker-compose on the command line) it's much easier to adjust and redeploy. That doesn't fix your problem now, but in the long run it will make deployment easier.
Now, your problem at hand:
Go to your Containers and find the line that reads "plex" and click on the word by the check box on the left.
That gives you the following screen. Click on the button "Duplicate/restart"
Click on the Env tab and you will see the below view. Make sure the "Name" just reads VERSION and the "value" just reads public
When finished, click the "deploy" button. That should work.
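If you do eventually move to docker-compose or Stacks, the same VERSION setting lives in the environment section of the service. A minimal sketch, assuming the linuxserver Plex image and made-up host paths (adjust PUID/PGID/TZ and the volumes to your system):

```yaml
services:
  plex:
    image: ghcr.io/linuxserver/plex
    container_name: plex
    network_mode: host
    environment:
      - PUID=1000
      - PGID=100
      - TZ=America/Chicago
      - VERSION=public   # same Name/value pair as in the Portainer Env tab
    volumes:
      - /srv/dev-disk-by-label-disk1/appdata/plex:/config   # hypothetical path
      - /srv/dev-disk-by-label-disk1/media:/media           # hypothetical path
    restart: unless-stopped
```

With VERSION=public in the file, a restart of the container pulls the latest public Plex build, which is the behavior described a few posts up.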
-
Thanks. You filled in the last missing solution. Good job.
-
I don’t much care for the name either, and I really don’t like the Linuxserver blogpost that introduced it. In that blogpost they say “At this point, the SWAG and letsencrypt images are 100% compatible and we plan to keep SWAG backwards compatible as long as we can.” I wonder if that is still the case. At what point will it be necessary to advise switching to swag, especially if it is a brand new install that is having trouble getting started? Surely Linuxserver will let us know.
-
The last part, that is what confused me. Thanks for the info on duckdns. Nice to know and really makes more sense.
-
KM0201 I’m curious. You don’t have a duckdns section in your stack. I’ve always wondered why it was there since the pertinent information is repeated under swag. Is the duckdns container not necessary?
Also, you used port 457 in a couple of places and then 450 later on. Is that a typo?