Posts by sfcampbell

    I don't mean to derail your request pertaining to Rsync, but I'm only aware of ways to rsync a directory into a packed archive from the command line (using a pipe "|" and a subsequent packaging command). I don't know whether an equivalent modification can be used in OMV's Rsync plugin.


    You could set up a CRON job to compress a local directory into a local archive, then Rsync the compressed archive to your remote storage for upload to the cloud, but you would essentially transfer 100% of the file size every time (as opposed to a command-line rsync of the directory itself, which would only transfer the changed portions). Furthermore, creating naming conventions for each archive and then version-controlling the archives to keep only a maximum number of snapshots all create additional headaches that I don't think you would want 😜
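
    To illustrate what that CRON approach would look like (a rough sketch only -- the schedule, paths, and remote host below are placeholders, not anything from your setup):

    Code
    # Nightly at 02:00: pack the directory, then push the whole archive to the remote host
    0 2 * * * tar -czf /tmp/data-backup.tar.gz /srv/data && rsync -av /tmp/data-backup.tar.gz backupuser@remote.example.com:/backups/

    ...and every run re-uploads the full archive, which is exactly the inefficiency you'd want to avoid.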


    I would instead recommend utilizing BorgBackup -- it's a server-hosted backup solution that automatically encrypts, de-duplicates, compresses, and maintains specified snapshot limits, all in one. One Borg server can host many repositories: each repository can be used for an individually unique client, dataset/folder, container [i.e. Docker or LXC], etc., so in effect it can be a single point of replication from many sources. You could then script syncing your Borg repository files to your cloud server as needed.
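
    For a sense of how little CLI is involved, a minimal sketch (the repository path and source folder below are placeholders):

    Code
    borg init --encryption=repokey /srv/borg/omv-repo              # create an encrypted repository (one time)
    borg create /srv/borg/omv-repo::data-{now} /srv/data           # take a de-duplicated, compressed snapshot
    borg prune --keep-daily 7 --keep-weekly 4 /srv/borg/omv-repo   # retain only the specified number of snapshots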


    OMV has a plugin to install it as a service [making Borg even easier than it is in the CLI haha] and there are several client-side GUI and/or automation apps that are compatible with Borg.


    Hope this helps!

    For the sake of focusing on the important items I will assume the spacing in your .yaml was correct and it just didn't paste well in your post. A couple of concerns stand out:


    - You said "the Dhcp in the router", which leads me to believe you're not running DHCP in PiHole; in which case you should comment out port 67:67/udp.

    - The Environment section was not formatted correctly (not well demonstrated at https://github.com/pi-hole/docker-pi-hole/ but that's not uncommon for such a large project)

    - Your time zone is missing an 'e' at the end of "Europ".


    If I am incorrect and you are using PiHole for DHCP, those lines should look more like the following. If you're not using DHCP, remove the port 67 line and the two cap_add lines:


    YAML
      ports:
        - '53:53/tcp'
        - '53:53/udp'
        - '67:67/udp'
        - '80:81/tcp'
      environment:
        - TZ=Europe/Madrid
        - WEBPASSWORD=[redacted]
      cap_add:
        - NET_ADMIN # Required if you are using Pi-hole as your DHCP server, else not needed


    Next, Networks and Services:

    There are duplicates in your network configuration, and the whole networks block should sit outside of the 'services' section of the yaml, for example:


    Recommended corrected yaml:

    In this example port 67:67 and cap_add are commented out since you're not using DHCP -- if you wish to add DHCP later, uncomment those lines.
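
    Roughly the shape I have in mind (a sketch only -- the service name, image tag, and network name below are placeholders rather than values from your compose):

    YAML
      services:
        pihole:
          image: pihole/pihole:latest
          ports:
            - '53:53/tcp'
            - '53:53/udp'
            # - '67:67/udp'        # uncomment if Pi-hole serves DHCP
            - '80:81/tcp'
          environment:
            - TZ=Europe/Madrid
            - WEBPASSWORD=[redacted]
          # cap_add:
          #   - NET_ADMIN          # uncomment if Pi-hole serves DHCP
          networks:
            - pihole_net
      networks:
        pihole_net:
          driver: bridge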

    I agree with many good points made above -- I frequently favor Docker containers over LXC & VMs, except when a Proxmox cluster is available, which can provide higher availability & uptime than a single host. Clustering Docker containers can also be accomplished with Kubernetes or other services (Docker Swarm was supposed to do this but has been heavily criticized for its inadequacies and networking issues).


    Overall there are extremely few services for which I have not found adequate solutions in Docker images!

    I have around 30 containers and I get the message that there are no available IP addresses left for new containers.

    That does sound very strange indeed -- I can't say I've run more than 30 containers on any single host, but no doubt Docker should be capable of much more.

    I only have extremely basic experience with Docker networking as well, but three things come to mind:


    1) I don't think you want to manually modify daemon.json to configure networking -- there is a set of docker network subcommands that you should probably run from a shell instead (see the examples after this list).


    2) All of your running Docker containers are already assigned addresses & subnets -- I don't think manual/forced changes to the network while they're running will end well! In a "best case scenario" they might re-connect to a new network scope after a restart... but if they don't you'll lose IP connectivity to your existing containers.


    3) In which network do you suspect you've expended all available IP leases?(!) For example "bridge" or "host"? By default Docker's bridge networks are /16 subnets (the old Class B size), which in theory can accommodate 65,534 hosts per network! How many containers are you running? 8| haha
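
    For reference, the sort of shell commands I mean (read-only inspection first; the network name and subnet in the last line are just placeholder examples):

    Code
    docker network ls                                     # list every Docker network on the host
    docker network inspect bridge                         # show the subnet, gateway, and attached containers for "bridge"
    docker network create --subnet 172.30.0.0/16 mynet    # example of creating a network with an explicit subnet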

    Code
    d782b1aa8ff8:/var/www/html# ./occ config:app:set files max_chunk_size --value 51200000
    
    Console has to be executed with the user that owns the file config/config.php
    Current user id: 0
    Owner id of config.php: 33
    Try adding 'sudo -u #33' to the beginning of the command (without the single quotes)
    If running with 'docker exec' try adding the option '-u 33' to the docker command (without the single quotes)

    Interesting :/ I assume you're consoled-in as root so I wouldn't have expected "Console has to be executed with the user that owns the file"...
    You are a step closer as it's no longer telling you "command not found". I believe UID #33 is "www-data" -- you can confirm with:


    grep ':33:' /etc/passwd


    ...in which case try the command as:


    sudo -u www-data ./occ config:app:set files max_chunk_size --value 51200000
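
    If sudo isn't available inside the container, the other route the error message hints at is running it from the host via docker exec -- something like the following (the container name is a placeholder):

    docker exec -u 33 nextcloud php /var/www/html/occ config:app:set files max_chunk_size --value 51200000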

    Thanks soma man this should be well easier than it is im getting occ not found

    BlueCoffee try giving the full path to the occ bin (which should be in the "/var/www/html" directory you were in):
    Either

    ./occ config:app:set files max_chunk_size --value 51200000
    or
    /var/www/html/occ config:app:set files max_chunk_size --value 51200000

    To confirm the bin is in that directory [which it should be!] you can verify with:

    ls -lah /var/www/html | grep occ

    And it is suprising why some kind a Docker runs outside of its box ? I installed Yacht while it was plugin of OMV-Extrass

    I can't speak to the alleged Docker container's operating space, but this appearance of Yacht may be getting us a step closer to a solution(?) Can you [at least temporarily] suspend or remove the Yacht service installed via OMV-Extras and see if that makes a meaningful difference in your CPU utilization? (I've never used Yacht so I can't be sure; that's just my best guess based on the screenshots provided.)

    I am still struggling getting IPv6 to work in Docker.

    I got my bridge network to provide an IPv6 subnet but all connected containers do not receive an IPv6 up to here. Even when taking them down and redeploying them via compose.

    My brother from an OMV mother: no disrespect, but I think you're way overcomplicating the IPv6 vs. IPv4 issue... IPv6 is only a constraint/problem at your gateway. Once external traffic hits your router [allegedly via IPv6, since you say your ISP forces this] you can shape the traffic port-forwarded into your LAN however you need it. I can't imagine any circumstance in which using IPv6 within your LAN is easier or more necessary than IPv4.


    Why not just set up a reverse proxy, i.e. NGINX Proxy Manager (easily deployable as a Docker container!), and direct all port 80 & 443 port-forwards from your router to that address? Any IPv6 requests from the WAN will be redirected to that host, even if the Proxy Manager sits at a LAN IPv4 address -- likewise the Proxy Manager can reach hosts at their respective IPv4 LAN addresses.
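
    If it helps, deploying it is about as small as compose files get -- a sketch assuming the commonly used jc21/nginx-proxy-manager image (adjust names and volume paths to taste):

    YAML
      services:
        npm:
          image: 'jc21/nginx-proxy-manager:latest'
          restart: unless-stopped
          ports:
            - '80:80'    # HTTP entry point for proxied hosts
            - '443:443'  # HTTPS entry point for proxied hosts
            - '81:81'    # NPM admin web UI
          volumes:
            - ./data:/data
            - ./letsencrypt:/etc/letsencrypt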

    Added bonus: set up your DNS Overrides on your LAN DNS service to redirect those FQDNs from within your LAN to the Proxy Manager and you'll be able to access the same services with the same FQDN whether you're within your LAN or away from home.

    Have you verified ports 8080 and 8443 don't conflict with other services currently in use?


    You can quickly check with:

    ss -tunlp | grep 8080

    ss -tunlp | grep 8443


    That "connection lost" at the end of compose is probably where the issue is occurring.

    ktasliel you can also find out more about the running process (such as parent processes triggering it, and sibling processes running with it) with the following commands:


    ps aux | grep uvicorn


    Reference the "PID" [the number in the 2nd column]. Per the screenshot from htop in your first post it will likely be "1754020" or similar. Once you have the PID you can then run:

    ps auxf

    ...which will show a *long* tree of all your running processes in hierarchical structure. Scroll down to the PID returned in the first command [i.e. 1754020] and examine the tree above and around that line to see what started uvicorn and what other services that parent is running.


    If you wish to post any of the results of the second command please only select the appropriate section of the tree that applies to this running process so others who are trying to assist don't have to search through the whole service tree to find it!
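
    Alternatively, if the pstree utility is installed (it's part of the psmisc package), it can jump straight to that branch of the tree -- the PID here is just the example number from above:

    pstree -sp 1754020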

    do I have to replace the drive that is showing as Bad (sdd) and then tell it to rebuild?

    The array should still be accessible even in a "clean, degraded" state... After mounting have you created new 'Shared Folders' linking back to the newly re-mounted file system, or verified that existing shared folders are still pointing to the correct one?
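
    If you want to double-check what the array itself is reporting, two standard mdadm checks from the shell (assuming the array device is /dev/md0 -- substitute yours if it differs):

    Code
    cat /proc/mdstat          # quick overview of all md arrays and their member disks
    mdadm --detail /dev/md0   # full state, including "clean, degraded" and which member failed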

    Glad to hear it's finally working as intended! There are a few things that may seem different, or seem like perhaps security isn't working as desired:

    - If your Windoze username + password matches a user on the NAS, it could pass the credentials check without prompting you (depending on the Samba settings -- Samba being the service that provides the SMB/CIFS protocols)

    - Once authenticated (either by username + password or Win credentials) your network session to the protected share will remain active until your next reboot
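
    If you ever want to see which sessions are currently held open, the Samba suite on the NAS includes an smbstatus tool that lists connected users and the shares they have open:

    smbstatus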


    Other than that it's great to hear that it's working well now!

    Lots of love to chente for legitimately emphasizing that RAID does not eliminate the need for backups... that's on the user, not on him!


    Having said that, if you're appropriately backing up your critical data [beyond the scope of this thread], I'd say:

    1) an NVMe is good for your OS (better life expectancy than a USB drive)

    2) how much Docker data are you planning on storing in volumes?? Maybe little enough for the Docker data to share one NVMe with your OS? (most of my larger Docker data is mapped to larger volumes on my network; yours could be mapped to the RAID for higher resilience(?))

    3) if uptime is your priority [and you do have adequate backup solutions!] I'd go with the 4x 18TB RAID array to reduce the overhead loss of storage capacity from 50% to 25% -- e.g. 4x 18TB in a single-parity layout (RAID5/RAIDZ1) leaves ~54TB usable, with one disk's worth lost to parity, versus ~36TB in a mirrored layout. That would keep your network shares, media libraries, and applicable Docker services running through a single-disk failure until you replace and resilver the array [18TB disks would likely take over 24hrs from the time it's started]. Both BTRFS and ZFS have good qualities: usually the limitation for ZFS is RAM capacity, but you've got more than enough with 32GB for that array; however, if you're not comfortable with ZFS command-line management I *don't* recommend it for your critical data!


    All the above leaves one NVMe out as potentially unnecessary... unless it was RAID1 with the first NVMe? 🤔
    Could also help maintain that "uptime" in case of single-disk failure!

    eubyfied The RAID volume label "md0" is fairly ubiquitous among mdadm users, but it looks like your previously existing RAID was labeled "openmediavault" (per your first post). One of the two following may be the case:


    1) The newly mounted RAID array "md0" could be added to OMV by returning to "Storage" => "Software RAID" and mounting it as a new device

    or

    2) Unmount the RAID and repeat your previous steps, substituting "openmediavault" wherever you used "md0"


    Hope this helps!

    Grogster Don't feel overwhelmed -- if you're just now expanding your horizons beyond Windoze and looking for a KISS introduction to the open-source world, OMV is a great place to start! (I'm not a beneficiary of the OMV project in any way; I'm just a personal tech enthusiast who enjoys quality products where I find them, i.e. OMV.)


    I believe everything you wish to accomplish is comfortably provided within the web interface; I don't expect you'll run into any common tasks that will force you back to a command-line once it's up-and-running.


    The "KISS" rule for user/permisisons management for all IT systems (Windoze, Linux, email, network services, etc.) is always:

    1) Groups first

    2) Users second; assigned to appropriate groups

    3) Assign roles third; by group when possible (by individual user only where necessary)


    There's nothing wrong with only setting up one user if it's a home network and you're the only user... just skip step 1! To create a personal user in the web portal, just click 'Users' => 'Users', then the "plus" symbol, and complete the form. Always remember to click the check mark in the yellow banner with OMV! (New settings entered in the web portal aren't always applied to the system until you do.)


    Next go to 'Services' => 'SMB/CIFS' => 'Shares'; this is where you configure service-level options for the network file share service (these settings are only necessary for existing shares if you didn't specify them when creating the shares). Set "Public" to 'No' -- this is the big divergence from Chris' video, which made shares Guest-accessible. Also set "Browsable" to 'Yes' only for the folders you want listed when navigating to the NAS's address in your file browser... Folders that are not "Browsable" can still be accessed by their full file path but will not be displayed in your file browser.


    Lastly, go back to 'Storage' => 'Shared Folders' [no doubt where you started haha] -- here you can click one share at a time, then click the icon above that looks like a folder with a key on it, labeled "Permissions". There you can set "Read/Write", "Read-Only", or "No Access" as appropriate for groups [or users; more likely in your case] to restrict access to each respective share on your NAS.


    Et voila: hopefully that will accomplish the necessary user-based security you're seeking for your NAS! Again don't forget to always click the check mark in the yellow banner when it appears following each configuration change 😜

    Correct, in my opinion. The assumption in my previous post is that your USB disk is an SSD and not an HDD(?) Either way, a USB3 disk will likely have lower latency than a RAID accessed over the network from your RPi -- due largely to the RPi having only one gigabit ethernet port for communication with both the client and the NAS.


    There is an alternative that, while not as low-latency as a USB3 SSD, could offer improved performance for network services: you could add a USB3 2.5GbE adapter to your one available USB3 port and directly connect it [crossover] to your NAS. If your NAS isn't equipped with 2.5GbE you might be able to add a USB adapter to it as well. This would increase your server's bandwidth to/from the NAS and reduce latency compared to a single gigabit port performing both tasks through a switch.
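
    If you go that route, it's worth confirming the link actually negotiated at 2.5Gb and measuring the result -- standard tools, with the interface name and NAS address below as placeholders:

    Code
    ethtool eth1 | grep Speed    # confirm the adapter negotiated 2500Mb/s
    iperf3 -c 192.168.1.50       # measure real throughput to the NAS (run "iperf3 -s" on the NAS first)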