Yes my C drive.
Posts by Majorpayne
-
-
I seem to be running out of space and I was hoping there is an easy way to look at the system drive and tell what files are taking up space. I'm worried that, through my tinkering, I saved a couple of large files to my C drive.
Currently sitting at 187 GB used out of 256 GB.
-
The user and group "911" is a Deluge-created system user and group. If you look at Deluge's config directory, you'll see it in the permissions. Similarly, if you look at Sonarr's config folder, you'll see the "users" group and the user "dockeruser", matching their PUID and PGID. (I now understand why Sonarr crashed without user and group IDs assigned.)
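You can see this for yourself on the host. A quick sketch (the temp directory stands in for a container's config folder; real paths will differ):

```shell
# Sketch: how numeric owner IDs like 911 surface on the host.
# A temp dir stands in for a container's /config directory.
dir=$(mktemp -d)
touch "$dir/core.conf"
# -n prints raw numeric UID/GID instead of names; a container account with
# no matching entry in the host's /etc/passwd shows up as bare numbers.
ls -ln "$dir"
stat -c 'owner=%u group=%g' "$dir/core.conf"
```

Running `ls -ln` against the real Deluge config directory is where the bare "911" becomes visible.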
Sorry for the confusing start. Once committed, I should have configured it up instead of trying to do it from memory. Like they say, the devil is in the details. In any case, all's well that ends well. I have to say, I learned a few things in this process. I used to wonder about these packages, what they did, etc., but without a reason to do it, I wouldn't have loaded them up. Now, with a peek under the hood, I'm giving thought to configuring Sonarr and Deluge on a dedicated ARM board.
Regards
Yeah, I just like not seeing that error message all the time. I appreciate all the help, as it helps me learn more of this. I used to run Linux for work back in the day, but it was such a long time ago and I've lost at least 90% of the knowledge.
-
I have it mostly working now. Sonarr is speaking with Deluge and Jackett, and Deluge is saving the file to the proper place, but Sonarr is not even attempting to check for the downloaded show. I'm very close now.
EDIT: OK, found out that a permission was missing and it made the group 911, which I assume means "help, something is wrong"?
I changed it and now it's able to move/extract the file.
-
OK @majorpayne. As I mentioned before, all of my path mapping is done in a single storage container, which is then attached to every other container. The paths are as follows:
I use ruTorrent instead of Deluge for torrent downloading, but the concept is the same. Here is a screenshot of the downloads directory as specified in the ruTorrent web UI itself. Notice that the download directory in ruTorrent is a match for one of the paths on the right side of my storage container, both labeled "1." Your Deluge container has a path mapped as "/downloads", and therefore you should also type "/downloads" into Deluge when it asks for your download directory. My suspicion is that you typed the full /srv/... path:
This pic is stolen from a Deluge setup guide. Label "2" is where you should have typed /downloads. Similarly, I use NZBGet to download shows for Sonarr and movies for Radarr from Usenet. Again, the paths I specify inside the NZBGet interface have to be a perfect match for the paths specified on the RIGHT side of the Docker container, marked "2," "3," and "4." See below:
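The left-side/right-side pairing described above is just Docker's bind-mount syntax. A minimal sketch (the host path here is an assumption; substitute your own — and the command is only printed, not executed):

```shell
# Hypothetical host path; the container path is what Deluge/Sonarr actually see.
HOST_DL="/srv/dev-disk-by-label-data/Downloads"   # left side: path on the OMV host (assumed)
CTR_DL="/downloads"                               # right side: path inside the container
# Printed rather than run, to show the shape of the mapping.
echo "docker run -d --name deluge -v ${HOST_DL}:${CTR_DL} linuxserver/deluge"
```

The app inside the container never sees the left side; that is why typing the full /srv/... path into Deluge's settings fails.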
What container do you use for storage? Is it safe to assume that you are using a Docker container for that?
-
0d521c5c1edb
-
Sigh, now I'm getting "192.168.0.15 refused to connect" when attempting port 8989 or 8112.
-
Because every once in a while Sonarr doesn't move a show over, I can only assume it's because of this error:
"You are running an old and unsupported version of Mono. Please upgrade Mono for improved stability." The last time I attempted to update Mono, I screwed up the update management to the point that I wasn't able to get new updates and had to reinstall OMV.
-
-
I've uploaded my file structure in case it helps you help me.
I'm going to remake the Docker containers in case I missed a step.
-
I can honestly say that I'm lost, lol.
I'll read everything again and hope I understand it.
-
I've done option 2 before. Symlinks are great for getting past program limitations and other weirdness. Symlinks can also be used in a manner similar to mergerfs: to spread storage over multiple drives while providing the appearance and function of storing all data on one drive.
With symlinks and a remote mount, one can transparently export/import data from/to remote servers. Between the two, there's lots of flexibility for moving data around.
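A rough sketch of that mergerfs-like trick, with throwaway temp directories standing in for real drives:

```shell
# Simulate two data drives and one unified view (temp dirs stand in for mounts).
drive1=$(mktemp -d); drive2=$(mktemp -d); pool=$(mktemp -d)
mkdir -p "$drive1/Movies" "$drive2/TV"
# Link each drive's share into a single tree; apps only ever see $pool.
ln -s "$drive1/Movies" "$pool/Movies"
ln -s "$drive2/TV"     "$pool/TV"
# Data written through the pool lands on the underlying drive.
echo test > "$pool/TV/episode.mkv"
ls -l "$pool"
cat "$drive2/TV/episode.mkv"
```

Unlike mergerfs, a new file has to be placed under a specific symlink by hand, so the spreading is manual, but the single-tree appearance is the same.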
__________________________________
Do you actually have all 9 TB backed up?
No, I do not have all 9 TB backed up; only about 5 TB currently.
-
Yep, I'm lost, lol. Attempting option 2.
-
Well, that's about as open as it gets. With others having Read/Write, even the root account inside the container (different from OMV's root account, BTW) should be able to write to it.
This points back to the container and/or the installation parameters. You could try a Docker re-install.
_____________________________________________
There are two scenarios that I would try:
First:
This time around, let the container create the "Downloads" folder; do not pre-create it. This means you'd need to delete the existing share and the "Downloads" folder (copy any existing data to another location first). This process allows the appropriate account to create the Downloads folder and assign permissions that the container can use.
You can always loosen permissions later, with the Users group and with "Others" permissions.
(OR)
Do the same as the above, but map directly to the root of your data drive, i.e. /srv/9a94fceb-ff72-4c70-9562-591fcc600b9e/Downloads. You could share it from there.
Second - if the above doesn't work:
Don't try to map directly to the data drive in Volumes and Bind points. Try the following, which will create a downloads folder at the root of the OMV boot drive. (The container will be able to write to this host folder.)
Host Path-----Container Path
/downloads-----/downloads
Install the Symlink plugin and use it to connect /downloads, on the OMV boot drive, to /Downloads on the data drive.
In the Symlink dialog: the source will be the boot drive's /downloads and the destination will be the data drive's /Downloads. Since OMV's root account is in control of the symlink, data will be moved to your data drive. As long as root is the owner, you'll be able to assign permissions on the data-drive end of the link as you like, share it, etc.
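Under the hood, the Symlink plugin entry amounts to a single `ln -s`. Here is the idea demonstrated with temp directories standing in for the boot drive and the data drive (your real paths differ, and on the real system this is run as root):

```shell
# Temp dirs stand in for the OMV boot drive root and the data drive.
boot=$(mktemp -d)   # stands in for / on the boot drive
data=$(mktemp -d)   # stands in for /srv/<uuid> on the data drive
mkdir -p "$data/Downloads"
# Source on the boot drive, destination on the data drive:
ln -s "$data/Downloads" "$boot/downloads"
# Anything the container writes to the boot-drive path lands on the data drive.
echo demo > "$boot/downloads/file.txt"
ls -l "$data/Downloads"
```

The container keeps writing to /downloads on the boot drive and never needs to know the data drive exists.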
_____________________________________________
As an FYI:
I used to have a "ServerFolders" directory at the root of my data drive(s). I'm pretty tight with permissions and found, in some permission scenarios, that creating sub-directories below ServerFolders was a PITA. So, I restructured to get rid of it. Now, I create network shares at the root of my data drive(s).
-
Option 1 is not possible, as all 9 TB of files are sitting in /srv/disk name/Fileserver. I will try option 2.
-
OK, you have a folder called "Fileserver" in the path. Apply the same permissions that were applied to "Downloads" to "Fileserver". (Note that the reset-perms plugin won't help you with this.)
I've experienced the same sort of problem with permissions when I shared a sub-directory under a root folder of the data drive. If a shared folder is not at the root of the drive, the entire path to the shared folder needs to have compatible permissions. Change Fileserver to "Others", "Read/Write". This is why I asked if you have WinSCP installed. It's far easier to look at standard, unshared folders and change their permissions with WinSCP versus doing the same on the command line.
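A quick way to audit the whole path is to walk it and print each component's permissions (a sketch; the example path is a throwaway temp directory):

```shell
# Every directory on the way to a shared folder must be at least traversable
# (the "x" bit) for the account the container runs as.
path=$(mktemp -d)/Fileserver/Downloads
mkdir -p "$path"
p="$path"
while [ "$p" != "/" ]; do
  stat -c '%A %U %G %n' "$p"   # mode, owner, group, name for each component
  p=$(dirname "$p")
done
```

On most Linux systems, `namei -l /srv/<uuid>/Fileserver/Downloads` prints the same breakdown in one command.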
________________________________________________
The above is an attempt to resolve the "denied" error dialog you provided. Since I don't use this particular Docker container, I don't know how the container interacts with the host.
My apologies, I did not see the question about WinSCP. Yes, I do have it installed.
Edit:
Details / Message:
Import failed, path does not exist or is not accessible by Sonarr: /srv/9a94fceb-ff72-4c70-9562-591fcc600b9e/Fileserver/Downloads/DCs.Legends.of.Tomorrow.S03E10.iNTERNAL.720p.WEB.x264-BAMBOOZLE
-
I did, I set it to everyone; same issues.
-
It's my own server behind a pfSense firewall; it should be secure enough not to worry about access.
-
Can you show me a screenshot of what you are talking about? I might be doing this already.
-
Yeah, I reset the permissions to Administrator: read/write, Users: read/write, Others: read-only, and nothing.
-
Anyone else know where I'm failing?