My ombi container is named ombi, my letsencrypt container is named letsencrypt. I have a CNAME record for ombi.mydomain.com which appears to try to work but I get a "the server took too long to respond" when I try to go to https://ombi.mydomain.com.
Oh, and in case it matters, I registered my domain at domain.com and pointed it at a DuckDNS subdomain (like lh1983letsencrypt.duckdns.org) that I set up to always find my WAN address. I then went to CloudFlare and registered a free account and am using CloudFlare's nameservers through my domain.com address.
The logs don't usually lie. By the way, if you use DNS validation, you don't have to expose port 80.
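Since you're on Cloudflare's nameservers, the usual route with the linuxserver letsencrypt container is DNS validation via the cloudflare plugin. A sketch of the relevant environment settings (variable names can differ between image versions, so check the image docs against your own compose file):

```yaml
environment:
  - URL=mydomain.com
  - SUBDOMAINS=wildcard      # covers ombi.mydomain.com, etc.
  - VALIDATION=dns
  - DNSPLUGIN=cloudflare
# then put your Cloudflare credentials in config/dns-conf/cloudflare.ini
```

With DNS validation the cert challenge happens entirely through Cloudflare's API, which is why port 80 never needs to be open.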
The next step would be the right configuration of ombi.subdomain.conf in config/nginx/proxy-confs/ (did you create a CNAME for ombi?).
So the CNAME thing I'm not sure I did right. I pointed a CNAME record with * as the subdomain at my domain's main URL.
For the ombi.subdomain.conf I just ran "mv ombi.subdomain.conf.sample ombi.subdomain.conf" and left it at that. Do I need to change something in the file itself?
Getting the certificates seemed to work, at least my docker logs -f indicated it did. I set up LECode
I followed that guide, but I cannot get ombi.$domain.com to pull up my ombi instance. I have a port forwarding rule sending all traffic on 443 to port 450 of my OMV machine (which I set in the Docker file for Letsencrypt to be the in-container port for 443). Still nothing.
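If it helps, here's the chain as I understand it, written docker-compose style (450 being the host port I picked; the container side has to stay 443):

```yaml
# letsencrypt container port mapping
ports:
  - 450:443   # host port 450 -> container port 443 (HTTPS)
```

pfSense then forwards WAN 443 to 192.168.1.5:450. If any hop in that chain (WAN 443 → host 450 → container 443) doesn't line up, I'd expect exactly the timeout I'm seeing.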
I currently have my OMV machine set up with a static IP (192.168.1.5) behind a pfSense appliance which handles routing (running pfsense 2.4). I have a dynamic DNS service through DuckDNS and a domain name reserved. What I would like to do is set up a reverse proxy so that I can expose certain applications for my users to log in, like Ombi, to make requests for things to add to Plex.
I started looking at guides for Letsencrypt and Nginx, but couldn't get that to work. I know that pfsense has a haproxy app, but the configuration guides I found were less than helpful at getting it all set up.
What's the smartest way to accomplish what I want?
What the current plan is (shout out to Reddit for helping me here) is to migrate my server to a rackmount and use a PCIe card and a DAS (basically more disks) to set up my system with two mergerfs pools and rsync between them. Once the initial rsync is done, shut down the old system, do one last rsync, and then remove the old disks.
I have also taken to heart the danger in having 22TB and no RAID, and I've ordered 4 more 10TBs so I can set up a proper raid and have a failsafe, plus a new server chassis/rack solution, which should be more extensible in the future than a simple tower.
Sure thing. I'm in no rush.
Can anyone give me some more advice on this? I really need to migrate everything over to the new drives but am afraid I don't know how.
Here's the current setup:
OMV Box (The Vault): 10 disks of various sizes, totaling 28TB, 22TB used. In a Cooler Master mid-tower chassis. No free SATA ports on the motherboard. Two disks in a USB3 docking station (with a spare bay). The OMV system is on an SSD. The 10 other disks are in a mergerfs pool.
What I want to do: swap the 10 old disks for 4 new ones, growing my mergerfs pool to 40TB. I will add more disks later to set up SnapRAID, but right now I just want my data on new disks.
What I have: 4-bay USB3 docking station, time, middling know-how. Not afraid of CLI, rsync, or anything like that. I have a windows PC nearby and a dedicated screen for the NAS.
Sorry it's taken me so long to respond. Work's been crazy.
Yes, I have an external drive dock.
There's about 8TB left on the merged drive.
Just a regular tower chassis, no free drives, no free SATA slots on my motherboard.
Existing path, most free space. Didn't change from the default.
I have a regular tower chassis, but it's pretty big. I don't have any free slots for drives, which is why I was hooking things up with a USB3 docking station.
Is there something with WinSCP that I can do that I can't do with my Samba shares or SSH into my OMV machine from PuTTY?
Thanks for the advice. I want to set up SnapRAID too, but I don't know how.
So on the merged drive I have: my raw media files and my plex database (nothing else). All of my docker containers and OMV system are on a SSD. What I want to do is copy all of my media and my Plex database to the new, larger drives.
I don't know what a "storage policy" is (I suppose "retain everything" fits) and yes, I am familiar with the command line.
Like, let's say my mergerfs partition is called "media," because it is. And it has as subfolders /Music, /TV, /Movies, /Comics, etc.
Can I mount a new partition, call it media2, and then just use rsync or whatever to transfer /media/Music to /media2/Music, etc., and then, when I've transferred everything, umount /media, take the system down, swap out the disks, and remount /media2 as the new /media in OMV?
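In other words, the sequence I'm imagining, sketched out as commands (assuming the new pool is mounted at /media2; the trailing slashes matter to rsync, and the disk-swap step itself is omitted):

```shell
# first pass: copy everything while the system stays live
# -a preserves permissions/ownership/times, -H preserves hard links
rsync -aHv --progress /media/ /media2/

# final pass after stopping Plex/docker: catch anything that changed
rsync -aHv --delete /media/ /media2/

# then take the system down, swap the disks, and re-point the pool
umount /media
```

The nice thing about rsync here is that the first pass can run for days without downtime, and the final pass only has to move whatever changed in the meantime.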
I have a ~28TB mergerfs partition that I've cobbled together from a lot of small disks. However, I'm starting to worry about hard drive performance, because these disks are 4-5 years old at this point. I want to swap them out for a set of 4 10TB hard drives I just bought, but what is the most economical way to clone my existing mergerfs partition so that I don't have to rebuild my entire OMV system from scratch? I have about ~21TB of data.
Doesn't Sonarr have built-in RSS support?
you need a space between your option "-R" and 755, and you need to make sure you run the chmod command when you're already in /var/lib. Personally, I would run the command as:
"chmod -R 755 /var/lib/docker" just because I want to make sure I'm running on the right file. To avoid having to type it all out, use tab-complete. When you type "/var/lib/d" for example, you should be able to press your tab key and have it auto-complete docker. It makes typing command line commands so much easier.
Also, this is just a matter of personal preference, but I never use capitals in my users or groups because you never know when software is going to get touchy about them.
Next, I'm not sure that would work, because I don't know what user is trying to run the docker program. If "Admin" is the user that actually runs the docker daemon, you're probably OK. But I'm not familiar enough with Docker to know how it actually runs, so someone smarter than me may have to chime in here. It looks like on my install, /var/lib/docker is drwx--x--x, with user:group as root:root, so I don't know if I'd monkey with /var/lib/docker. To change it back, you'd run "chmod -R 711 /var/lib/docker" and "chown -R root:root /var/lib/docker".
I was thinking more for whatever directory resilio was looking to sync from, which shouldn't be /var/lib/docker but whatever syncs you've set up in resilio (which I've only used sporadically). So for example, let's say you've got a hard drive set up as "MediaDrive" and it's got a folder on there called "Documents" and you want to sync everything in Documents with resilio to a remote share. You'd need to make sure that /media/MediaDrive/Documents/ was set up with permissions so that resilio (and resilio's internal user) could read and maybe write the files. So the chmod you'd run would be on /media/MediaDrive/Documents/, not /var/lib/docker. A "chmod -R 775 /media/MediaDrive/Documents/" would be the right command to use.
So the GUI in OMV is only going to show you the images you've already pulled. Helpfully, there's a search box (top right) that will search everything registered on Docker Hub.
The drive will work just fine, until it stops working, of course. No one can really predict when that drive will fail. Typically, people use WD Red or other NAS hard drives for NAS operations because they're rated for more read/write cycles than Blue, Green, or Black WD drives (or whatever the equivalent in Seagate is; I use WD Reds). That's not to say the blue drive won't work, but I'd keep SMART monitoring on and check to make sure it's good every couple of months so you can swap it out if need be.
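Checking SMART from the command line is quick. A sketch using smartctl (the drive letter is a placeholder — use lsblk to find yours; needs the smartmontools package):

```shell
# overall health verdict (PASSED/FAILED)
smartctl -H /dev/sdX

# full attribute table: watch Reallocated_Sector_Ct and pending sectors
smartctl -A /dev/sdX
```

OMV's web GUI also has a S.M.A.R.T. page that surfaces the same data, if you'd rather not SSH in every couple of months.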
Sorry, you'd run chmod as a command from the command line when you SSH into your OMV machine, or use the Shellinabox plugin.
The docker plug-in is really just a small plug-in to manage docker containers. Think of each docker container as a mini-PC running on your OMV system. Rather than try to make your OMV system itself hospitable to each application, the docker system lets you run lots of small, individual systems tailored to the app. This is a godsend because now you're not managing different apps against different versions of common libraries or even the base system. OMV is built on top of Debian, for example, but you can run a dockerized app on top of Alpine Linux or whatever. You can have Python 2.7 installed in one container for an older app, and Python 3.x installed in another container for a newer one, without the headaches that would normally bring.
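If you want to see that isolation for yourself, a quick sketch (the image tags are just examples from Docker Hub; --rm throws the container away afterward):

```shell
# each container brings its own userland, regardless of Debian on the host
docker run --rm python:2.7 python --version
docker run --rm python:3.11 python --version
```

Both run side by side on the same host with no conflict, which is the whole point.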
So the way Docker interfaces with your "host" system is that you have to map your host drives to the docker container. For example, let's say you're like me and running the docker container "portainer" to manage your dockers. So you create a directory on your host system where the docker info for portainer is going to live, in my case, /opt/portainer. I map that to the docker container in the settings as -v /opt/portainer:/config, since /config is where I know my docker container expects the files to be.
I can then use my command-line interface from my OMV system to go to /opt, and run "chmod -R 755 portainer". This sets folder permissions on portainer as 755, or "owner may read, write, execute; group and other may read and execute." I would also need to run "chown user:group portainer", where user:group are the user and group I want to "own" portainer. For example, "chown docker:users portainer" would set it so that the user "docker" owns the portainer folder, and any user in the "users" group would be able to read and execute, but not write, in the portainer folder.
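If you want to see what a mode looks like before touching real folders, here's a harmless sketch on a throwaway directory (the path is made up for the demo; stat's -c flag is GNU/Linux):

```shell
# throwaway demo: a stand-in for /opt/portainer, not the real thing
mkdir -p /tmp/permdemo/portainer
chmod -R 755 /tmp/permdemo/portainer   # owner rwx; group/other r-x
stat -c '%a' /tmp/permdemo/portainer   # prints the octal mode: 755
```

Once that makes sense, the same commands against the real folder are no mystery.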
If you're running into permissions issues, you could, temporarily, set the folder to "read write execute everyone," or 777. But ordinarily you don't want any device exposed to a public connection to have world-writable folders, because an attacker who gets any kind of foothold can then plant or alter files there. So you could change it to 777 temporarily, let resilio set up the folders it needs, then switch it back to 755 (with the files inside at 644, read-only for everyone but the owner) as the case may be, since theoretically resilio only needs to read your local file structure to sync with the remote host.
Of course, if the box you're dealing with isn't exposed to the internet at all, then there's no reason why you couldn't just leave it at 777 all the time, since you'd need local access to screw anything up.