Posts by sfcampbell

    I have around 30 containers and I get the message that there are no available IP addresses left for new containers.

    That does sound very strange indeed -- I can't say I've run more than 30 containers on any single host, but Docker should certainly be capable of far more.

    I only have extremely basic experience with Docker networking as well, but three things come to mind:


    1) I don't think you want to manually modify daemon.json to configure networking -- there is a set of docker network commands that you should probably run from the shell instead (see the sketch at the end of this list).


    2) All of your running Docker containers are already assigned addresses & subnets -- I don't think manual/forced changes to the network while they're running will end well! In a "best case scenario" they might re-connect to a new network scope after a restart... but if they don't you'll lose IP connectivity to your existing containers.


    3) Which network do you suspect has exhausted all of its available IP leases?(!) For example "bridge" or "host"? By default Docker networks are /16 (Class B sized) ranges, which in theory can accommodate 65,534 hosts per network! How many containers are you running? 8| haha
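
    For what it's worth, here's a minimal sketch of the docker network workflow from point 1 -- the "my-bridge" name and the 172.20.0.0/16 range are placeholder assumptions, not anything specific to your setup:

    Code
    # List existing networks and inspect the default bridge's subnet:
    docker network ls
    docker network inspect bridge
    # Create a user-defined bridge with an explicit subnet:
    docker network create --driver bridge --subnet 172.20.0.0/16 my-bridge
    # Attach an existing container to it (replace <container-name> accordingly):
    docker network connect my-bridge <container-name>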

    Code
    d782b1aa8ff8:/var/www/html# ./occ config:app:set files max_chunk_size --value 51200000
    
    Console has to be executed with the user that owns the file config/config.php
    Current user id: 0
    Owner id of config.php: 33
    Try adding 'sudo -u #33' to the beginning of the command (without the single quotes)
    If running with 'docker exec' try adding the option '-u 33' to the docker command (without the single quotes)

    Interesting :/ I assume you're shelled in as root, so I wouldn't have expected "Console has to be executed with the user that owns the file"...
    You are a step closer, as it's no longer telling you "command not found". I believe UID #33 is "www-data" -- you can confirm with:


    getent passwd 33


    ...in which case try the command as:


    sudo -u www-data ./occ config:app:set files max_chunk_size --value 51200000

    Thanks soma man, this should be well easier than it is. I'm getting "occ not found".

    BlueCoffee try specifying the path to the occ binary (which should be in the "/var/www/html" directory you were in):
    Either

    ./occ config:app:set files max_chunk_size --value 51200000
    or
    /var/www/html/occ config:app:set files max_chunk_size --value 51200000

    To confirm the binary is actually in that directory [which it should be!] you can check with:

    ls -lah | grep occ

    And isn't it surprising that some kind of Docker runs outside of its box? I installed Yacht when it was a plugin of OMV-Extras

    I can't speak to the alleged Docker container's operating space, but this appearance of Yacht may get us a step closer to a solution(?) Can you [at least temporarily] suspend or remove the Yacht service installed via OMV-Extras and see if this makes a meaningful difference in your CPU utilization? (I've never used Yacht so I can't be sure, but that's my best guess based on the screenshots provided.)
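
    Before removing anything outright you could also take a quick snapshot of which containers are actually eating the CPU -- docker stats is the standard tool for that:

    # One-shot snapshot of per-container CPU and memory usage:
    docker stats --no-stream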

    I am still struggling to get IPv6 to work in Docker.

    I got my bridge network to provide an IPv6 subnet, but so far none of the connected containers receive an IPv6 address -- even when taking them down and redeploying them via compose.

    My brother from an OMV mother: No disrespect, but I think you're way overcomplicating the IPv6 vs. IPv4 issue... IPv6 is only a constraint at your gateway. Once external traffic hits your router [via IPv6, since you say your ISP forces this] you can shape the traffic port-forwarded into your LAN however you need it. I can't imagine any circumstance in which using IPv6 within your LAN is easier or more necessary than IPv4.


    Why not just set up a reverse proxy, e.g. NGINX Proxy Manager (even easily deployable as a Docker container!), and direct all port 80 & 443 port-forwards from your router to that address? Any IPv6 requests from the WAN will be directed to that host, even if the Proxy Manager sits at a LAN IPv4 address -- and likewise the Proxy Manager can reach your backend hosts at their respective IPv4 LAN addresses.

    Added bonus: set up your DNS Overrides on your LAN DNS service to redirect those FQDNs from within your LAN to the Proxy Manager and you'll be able to access the same services with the same FQDN whether you're within your LAN or away from home.
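
    For reference, a minimal sketch of deploying NGINX Proxy Manager as a container -- the container name and volume names below are placeholders, so check the project's docs for their current recommended setup:

    Code
    # Ports 80/443 carry the proxied traffic; port 81 serves the NPM admin web UI.
    docker run -d --name npm \
      -p 80:80 -p 443:443 -p 81:81 \
      -v npm-data:/data \
      -v npm-letsencrypt:/etc/letsencrypt \
      jc21/nginx-proxy-manager:latest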

    Have you verified ports 8080 and 8443 don't conflict with other services currently in use?


    You can quickly check with:

    ss -tunlp | grep 8080

    ss -tunlp | grep 8443


    That "connection lost" at the end of compose is probably where the issue is occurring.

    ktasliel you can also find out more about the running process (such as parent processes triggering it, and sibling processes running with it) with the following commands:


    ps aux | grep uvicorn


    Reference the "PID" [the number in the 2nd column]. Per the screenshot from htop in your first post it will likely be "1754020" or similar. Once you have the PID you can then run:

    ps auxf

    ...which will show a *long* tree of all your running processes in hierarchical structure. Scroll down to the PID returned in the first command [i.e. 1754020] and examine the tree above and around that line to see what started uvicorn and what other services that parent is running.


    If you wish to post any of the results of the second command please only select the appropriate section of the tree that applies to this running process so others who are trying to assist don't have to search through the whole service tree to find it!
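
    One more option: if pstree happens to be installed (and your version supports the -s flag), it can give you the ancestry of just that one PID instead of the whole tree:

    # Show the parents and children of the PID from the first command:
    pstree -ps 1754020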

    Glad to hear it's finally working as intended! There are a few things that may seem different, or seem like perhaps security isn't working as desired:

    - If your Windoze username + password matches a user on the NAS it could pass the credentials check without prompting you (depending on the settings of Samba, the service that provides the SMB/CIFS protocols)

    - Once authenticated (either by username + password or Win credentials) your network session to the protected share will remain active until your next reboot
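
    If you ever want to confirm who's connected, smbstatus run on the NAS lists the active sessions -- and on the Windows side "net use * /delete" drops cached connections without waiting for a reboot (both are standard tools, though the exact output varies by version):

    # On the NAS: list active Samba sessions, shares, and locked files
    sudo smbstatus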


    Other than that it's great to hear that it's working well now!

    Lots of love to chente for legitimately emphasizing that RAID does not eliminate the need for backups... that's on the user, not on him!


    Having said that, if you're appropriately backing up your critical data [beyond the scope of this thread] I'd say:

    1) an NVMe is good for your OS (better life expectancy than a USB stick)

    2) how much Docker data are you planning on storing in volumes? Maybe little enough for the Docker data to share one NVMe with your OS? (Most of my larger Docker data is mapped to larger volumes on my network; yours could be mapped to a RAID for high resilience(?))

    3) if uptime is your priority [and you do have adequate backup solutions!] I'd go with the 4x 18TB RAID array to reduce the overhead loss of storage capacity from 50% to 25%. That would keep your network shares, media libraries, and applicable Docker services running through a single disk failure until you replace the disk and resilver the array [at 18TB each it would likely take over 24hrs from the time it's started]. Both BTRFS and ZFS have good qualities: usually the limitation for ZFS is RAM capacity, but you've got more than enough with 32GB for that array; however, if you're not comfortable with ZFS command-line management I *don't* recommend it for your critical data!
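
    As a rough sanity check on that rebuild estimate, assuming a sustained rebuild speed of around 180 MB/s: 18 TB ÷ 180 MB/s ≈ 100,000 seconds ≈ 28 hours -- and real-world rebuilds under load are often slower than that.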


    All the above leaves one NVMe as potentially unnecessary... unless it's mirrored (RAID1) with the first NVMe? 🤔
    That could also help maintain that "uptime" in case of a single-disk failure!

    eubyfied The RAID volume label "md0" is the common default among mdadm users, but it looks like your previously existing RAID was labeled "openmediavault" (per your first post). One of the two following may be the case:


    1) The newly mounted RAID array "md0" could be added to OMV by returning to "Storage" => "Software RAID" and adding it there as a new array

    or

    2) Unmount the RAID and repeat your previous steps, substituting "md0" with "openmediavault" wherever it appears
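
    Either way, before changing anything it's worth confirming what mdadm currently sees -- these are standard read-only commands (the /dev/md0 device name is taken from your post):

    Code
    # Show assembled arrays and their member disks:
    cat /proc/mdstat
    # Show details (including the array name) for the array in question:
    sudo mdadm --detail /dev/md0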


    Hope this helps!

    Grogster Don't feel overwhelmed -- if you're just now expanding your horizons beyond Windoze and looking for a KISS introduction to the open-source world, OMV is a great place to start! (I'm not a beneficiary of the OMV project in any way; I'm just a personal tech enthusiast who enjoys quality products where I find them, OMV included.)


    I believe everything you wish to accomplish is comfortably provided within the web interface; I don't expect you'll run into any common tasks that will force you back to a command-line once it's up-and-running.


    The "KISS" rule for user/permisisons management for all IT systems (Windoze, Linux, email, network services, etc.) is always:

    1) Groups first

    2) Users second; assigned to appropriate groups

    3) Assign permissions/roles third; by group whenever possible


    There's nothing wrong with only setting up one user if it's a home network and you're the only user... just skip step 1! To create a personal user in the web portal just click 'Users' => 'Users', then the "plus" symbol, and complete the form. Always remember to click the check mark in the yellow banner with OMV! (New settings entered in the web portal aren't always applied to the system until you do.)


    Next go to 'Services' => 'SMB/CIFS' => 'Shares'; this is where you configure service-level options for the network file share service (these settings only need changing on existing shares if you didn't specify them when creating the shares). Set "Public" to 'No' -- this is the big divergence from Chris' video, where shares are Guest-accessible. Also set "Browsable" to 'Yes' only for the folders you want listed when navigating to the NAS' address in your file browser... Folders that are not "Browsable" can still be accessed by their full file path but will not be displayed in your file browser.


    Lastly go back to 'Storage' => 'Shared Folders' [no doubt where you started haha] -- here, you can click one share at a time, and click the icon above that looks like a folder with a key on it labeled "Permissions". Here you can set "Read/Write", "Read-Only", and "No Access" as appropriate for groups [or users; more likely in your case] to restrict access to each respective share on your NAS.


    Et voila: hopefully that will accomplish the necessary user-based security you're seeking for your NAS! Again don't forget to always click the check mark in the yellow banner when it appears following each configuration change 😜

    Correct, in my opinion. The assumption in my previous post is that your USB disk is an SSD and not an HDD(?) Either way a USB3 disk will likely have less latency than a RAID accessed over the network from your RPi -- due largely to the RPi having only one gigabit ethernet port for communicating with both the client and the NAS.


    There is an alternative that, while not as low-latency as a USB3 SSD, could offer improved performance for network services: you could add a USB3 2.5GbE ethernet adapter to your one available USB3 port and directly connect [crossover] it to your NAS. If your NAS isn't equipped with 2.5GbE you might be able to add a USB adapter to it as well. This would increase your bandwidth to/from the NAS and reduce latency compared to one gigabit port handling both tasks through a switch.
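
    If you go that route, iperf3 (assuming it's installed on both machines) will tell you what the link actually delivers -- the IP address below is a placeholder for your NAS' address:

    Code
    # On the NAS (server side):
    iperf3 -s
    # On the RPi (client side):
    iperf3 -c 192.168.1.50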

    The USB 3.0 disk would likely have less latency than an external CIFS/NAS share... at least assuming the USB-connected disk is an SSD, not an HDD(?) USB 3.0 (5 Gbit/s) actually offers more bandwidth than gigabit ethernet, and you'd be cutting out the "middleman" that is your NAS. Additionally, if your NAS disks are allowed to sleep when not in use, you'd avoid the delay of the disks spinning up on demand.

    ...I believe that will be the easiest, fastest, secure way to copy it to the EXT4 external drive. Is that correct?

    qu4ntumrush Assuming your OMV host hardware is an RPi 4 [referring to your second post], I presume you still have at least one USB3 port available(?) You could also locally connect your NTFS disks, one at a time, and use a one-and-done rsync command to move the files to the destination EXT4 drive. That would likely be faster if you continue to experience undiagnosed LAN latency issues.
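
    As a minimal sketch of that one-and-done rsync -- assuming the disks mount under /srv the way OMV usually does; both paths are placeholders for your actual mount points:

    Code
    # -a preserves permissions/timestamps, -v is verbose, -h shows human-readable sizes.
    # The trailing slash on the source copies the directory's contents, not the directory itself.
    rsync -avh --progress /srv/ntfs-disk/ /srv/ext4-disk/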

    @jboesh I have to apologize: I think I misinterpreted your question in my previous post and got what you were trying to accomplish backwards. I thought you were trying to get OMV on your RPi to access volume storage within Docker running on your QNAP NAS...

    I was able to solve the issue by using the absolute path to the shared folder.

    So in my case /media was not working, but this one is, as a docker volume:


    - /srv/remotemount/media:/photoprism/originals:ro

    This reply is actually exactly what I was recommending for the docker container -- I also host Photoprism in almost the exact same way (though the machine hosting the service is mapped to the NAS rather than RemoteMount). I hope it works for you!


    I blame waking up too early and posting before coffee 😆