Posts by sfcampbell

    And isn't it surprising that some kind of Docker runs outside of its box? I installed Yacht when it was a plugin of OMV-Extras

    I can't speak to the alleged Docker container's operating space, but this appearance of Yacht may be getting us a step closer to a solution to the issue(?) Can you [at least temporarily] suspend or remove the Yacht service installed via OMV-Extras and see if this makes a meaningful difference in your CPU utilization? (I've never used Yacht so I can't be sure; but that's my best guess based on the screenshots provided.)
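
    If you want to confirm which container is actually eating the CPU before removing anything, Docker can report per-container usage directly. A quick check (assuming the Docker CLI is available on your host):

    # one-shot snapshot of CPU/memory usage per running container
    docker stats --no-stream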

    I am still struggling getting IPv6 to work in Docker.

    I got my bridge network to provide an IPv6 subnet but so far none of the connected containers receive an IPv6 address. Even when taking them down and redeploying them via compose.

    My brother from an OMV mother: No disrespect but I think you're way overcomplicating the IPv6 vs. IPv4 issue... IPv6 is only a constraint/problem to your gateway. Once external traffic hits your router [allegedly via IPv6 since you say your ISP forces this] you can shape the traffic "port-forwarded" into your LAN however you need it. I can't imagine any circumstance that utilizing IPv6 within your LAN domain is easier or necessary compared to IPv4.


    Why not just set up a reverse proxy, e.g. NGINX Proxy Manager (even easily deployable as a Docker container!), and direct all port 80 & 443 port-forwards from your router to that address? Any IPv6 requests from WAN will be redirected to that host, even if the Proxy Manager is at a LAN IPv4 address -- likewise the Proxy Manager can access hosts at their respective IPv4 LAN addresses.

    Added bonus: set up your DNS Overrides on your LAN DNS service to redirect those FQDNs from within your LAN to the Proxy Manager and you'll be able to access the same services with the same FQDN whether you're within your LAN or away from home.
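
    For reference, here's a minimal compose sketch for NGINX Proxy Manager (adapted from memory of their quick-start docs; treat the image tag and volume paths as assumptions and verify against their site):

    services:
      npm:
        image: jc21/nginx-proxy-manager:latest
        restart: unless-stopped
        ports:
          - "80:80"    # HTTP port-forwarded from the router
          - "443:443"  # HTTPS port-forwarded from the router
          - "81:81"    # web admin UI
        volumes:
          - ./data:/data
          - ./letsencrypt:/etc/letsencrypt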

    Have you verified ports 8080 and 8443 don't conflict with other services currently in use?


    You can quickly check with:

    ss -tunlp | grep 8080

    ss -tunlp | grep 8443


    That "connection lost" at the end of compose is probably where the issue is occurring.

    ktasliel you can also find out more about the running process (such as parent processes triggering it, and sibling processes running with it) with the following commands:


    ps aux | grep uvicorn


    Reference the "PID" [the number in the 2nd column]. Per the screenshot from htop in your first post it will likely be "1754020" or similar. Once you have the PID you can then run:

    ps auxf

    ...which will show a *long* tree of all your running processes in hierarchical structure. Scroll down to the PID returned in the first command [i.e. 1754020] and examine the tree above and around that line to see what started uvicorn and what other services that parent is running.


    If you wish to post any of the results of the second command please only select the appropriate section of the tree that applies to this running process so others who are trying to assist don't have to search through the whole service tree to find it!
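
    Side note: if you have the "psmisc" package installed, pstree can show just the ancestry of that one PID, which keeps the output short enough to post in full (the PID below is the example from above):

    # -s shows the chain of parent processes, -p includes PIDs
    pstree -s -p 1754020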

    do I have to replace the drive that is showing as Bad (sdd) and then tell it to rebuild?

    The array should still be accessible even in a "clean, degraded" state... After mounting have you created new 'Shared Folders' linking back to the newly re-mounted file system, or verified that existing shared folders are still pointing to the correct one?
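
    If you'd like to confirm the array's state from the command line before touching anything (a quick sanity check; substitute your array's device name if it isn't md0):

    cat /proc/mdstat         # overview of all arrays plus any rebuild progress
    mdadm --detail /dev/md0  # per-member state, including which disk failed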

    Glad to hear it's finally working as intended! There are a few things that may seem different, or seem like perhaps security isn't working as desired:

    - If your Windoze username + password matches a user on the NAS it could pass the credentials check without prompting you (depending on the settings of Samba, the service that provides the SMB/CIFS protocols)

    - Once authenticated (either by username + password or Win credentials) your network session to the protected share will remain active until your next reboot
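
    Tip: if you ever want to drop that cached session without rebooting, Windows can list and close SMB connections from a command prompt. A sketch (the server/share names are placeholders):

    net use
    net use \\NAS\sharename /delete

    The first command lists your active network connections; the second closes the session to one share.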


    Other than that it's great to hear that it's working well now!

    Lots of love to chente for legitimately emphasizing that RAID does not eliminate the need for backups... that's on the user; not on him!


    Having said that, if you're appropriately backing up your critical data [beyond the scope of this thread] I'd say:

    1) an NVME is good for your OS (better life expectancy than a USB)

    2) how much docker data are you planning on storing in volumes?? Maybe enough for Docker data to share one NVME with your OS? (most of my larger Docker data is mapped to larger volumes on my network; yours could be mapped to a RAID for high resilience(?))

    3) if uptime is your priority [and you do have adequate backup solutions!] I'd go with the 4x 18TB RAID array to reduce the overhead loss of storage capacity from 50% to 25%. That would keep your network shares, media libraries, and applicable docker services running through a single disk failure until you replace the failed disk and re-silver the array [at 18TB each, that would likely take over 24hrs from the time it's started]. Both BTRFS and ZFS have good qualities. The usual limitation for ZFS is RAM capacity, but with 32GB you've got more than enough for that array; if you're not comfortable with ZFS command-line management, though, I *don't* recommend it for your critical data! (See the ZFS sketch just below.)
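
    For what it's worth, if you did go the ZFS route the 4-disk array is a one-liner. A hedged sketch only (pool name and device names are placeholders; in practice you'd use /dev/disk/by-id paths):

    # raidz1 = single-parity, roughly analogous to RAID 5 across the four disks
    zpool create tank raidz1 /dev/sda /dev/sdb /dev/sdc /dev/sdd
    zpool status tank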


    All the above leaves out one NVME as potentially unnecessary... unless it was RAID1 with the first NVME? 🤔
    Could also help maintain that "uptime" in case of single-disk failure!

    eubyfied The RAID volume label "md0" is somewhat ubiquitous among MDADM users but it looks like your previously existing RAID was labeled "openmediavault" (per your first post). One of the two following may be the case:


    1) The newly mounted RAID array "md0" could be added to OMV by returning to "Storage" => "Software RAID" and mounting it as a new device

    or

    2) Unmount the RAID and repeat your previous steps substituting the portions of "md0" with "openmediavault"
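
    If you go with option 2, mdadm can apply the rename at assembly time. A sketch only (the member partitions are placeholders, and this assumes 1.x metadata; double-check against the mdadm man page before running it):

    mdadm --stop /dev/md0
    mdadm --assemble /dev/md0 --update=name --name=openmediavault /dev/sd[abc]1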


    Hope this helps!

    Grogster Don't feel overwhelmed -- if you're just now expanding your horizons beyond Windoze and looking for a KISS introduction to the open-source world, OMV is a great place to start! (I'm not a beneficiary in any way of the OMV project; I'm just a personal tech enthusiast who enjoys quality products where I find them, like OMV.)


    I believe everything you wish to accomplish is comfortably provided within the web interface; I don't expect you'll run into any common tasks that will force you back to a command-line once it's up-and-running.


    The "KISS" rule for user/permisisons management for all IT systems (Windoze, Linux, email, network services, etc.) is always:

    1) Groups first

    2) Users second; assigned to appropriate groups

    3) Roles/permissions third; assigned by group whenever possible


    There's nothing wrong with only setting up one user if it's a home network and you're the only user... just skip step 1! To create a personal user in the web portal just click 'Users' => 'Users', then the "plus" symbol, and complete the form. Always remember to click the check mark in the yellow banner with OMV! (New settings entered in the web portal aren't applied to the system until you do.)


    Next go to 'Services' => 'SMB/CIFS' => 'Shares'; this is where you configure service-level options for the network file share service (these settings only need revisiting for existing shares if you didn't specify them when the shares were created.) Set "Public" to 'No' -- this is the big divergence from Chris' video, where shares are left Guest-accessible. Also set "Browsable" to 'Yes' only for the folders you want listed when navigating to the NAS' address in your file browser... Folders that are not "Browsable" can still be accessed by their full file path but will not be displayed in your file browser.


    Lastly go back to 'Storage' => 'Shared Folders' [no doubt where you started haha] -- here, you can click one share at a time, and click the icon above that looks like a folder with a key on it labeled "Permissions". Here you can set "Read/Write", "Read-Only", and "No Access" as appropriate for groups [or users; more likely in your case] to restrict access to each respective share on your NAS.


    Et voila: hopefully that will accomplish the necessary user-based security you're seeking for your NAS! Again don't forget to always click the check mark in the yellow banner when it appears following each configuration change 😜

    Correct; in my opinion. The assumption in my previous post is that your USB disk is an SSD and not an HDD(?) Either way a USB3 disk will likely have less latency than a RAID accessed over the network from your RPi -- due largely to the RPi only having one gigabit ethernet port for communication with both the client and the NAS.


    There is an alternative that, while not as low-latency as a USB3 SSD, could offer improved performance for network services: you could add a USB3 2.5GbE adapter to your one available USB3 port and directly connect [crossover] it to your NAS. If your NAS isn't equipped with 2.5GbE you might be able to add a USB adapter to it as well. This would increase the bandwidth between the RPi and your NAS, and reduce latency compared to a single gigabit port handling both client and NAS traffic through a switch.
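
    One gotcha with a direct link: with no router between the two ends, both need static addresses on their own tiny subnet. A hedged sketch (interface names are assumptions; yours may appear as eth1, enx..., etc.):

    # on the RPi's USB 2.5GbE adapter
    ip addr add 10.0.0.1/30 dev eth1
    # on the NAS' adapter
    ip addr add 10.0.0.2/30 dev eth1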

    The USB 3.0 would likely have less latency than external CIFS/NAS... at least assuming the USB-connected disk is an SSD not an HDD(?) USB 3.0 actually offers more raw bandwidth than gigabit ethernet (5Gb/s vs. 1Gb/s), and you'd also be cutting out the "middleman" that would be your NAS. Additionally if your NAS disks are enabled to sleep when not in use you'd be avoiding the delay of the disks spinning up on demand.

    ...I believe that will be the easiest, fastest, secure way to copy it to the EXT4 external drive. Is that correct?

    qu4ntumrush Assuming your OMV host hardware is RPi 4 [referring to your second post] I presume you still have at least one USB3 port available(?) You could also locally connect your NTFS disks, one at a time, and use an "rsync" command to one-and-done move the files to the destination EXT4 drive. That would likely be faster if you continue to experience undiagnosed LAN latency issues.
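
    A hedged example of that rsync invocation (both mount points are placeholders for wherever OMV mounts the two disks):

    # -a preserves permissions/timestamps, -h prints human-readable sizes
    rsync -ah --progress /srv/ntfs-disk/ /srv/ext4-disk/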

    @jboesh I have to apologize because I think I misinterpreted your question in my previous post: I may have gotten what you were trying to accomplish backwards. I thought you were trying to get OMV on your RPi to access volume storage within Docker that was running on your QNAP NAS...

    I was able to solve the issue by using the absolute path to the shared folder.

    So in my case /media was not working, but this one is...as docker volume:


    - /srv/remotemount/media:/photoprism/originals:ro

    This reply is actually exactly what I was recommending for the docker container -- I also host Photoprism in almost the exact same way (though the machine hosting the service is mapped to the NAS rather than RemoteMount). I hope it works for you!


    I blame waking up too early and posting before coffee 😆

    reeneex Here's some advice from my personal opinion:


    - Hardware RAID only really exists in dedicated SATA/SAS RAID controller cards -- it doesn't sound like you'll be using that in your build... and it's really an unnecessary expense for home/small business/developmental purposes.

    - Software RAID has become extremely flexible and easily manageable with modern open-source tools. MDADM seems to be the most prominent Soft RAID service currently and you can do just about anything with it! Even install OMV on a RAID array [I've done it, but won't go into more detail about it here 😜]


    - RAID 1 is just the most basic level of fault tolerance, for people who aren't willing to invest in more than 2 disks or only need very little data capacity. Losing 50% of storage space to overhead is just too much for most people who want (or need) larger storage solutions!

    - RAID 10 is really only needed for large-scale database or email servers... which doesn't sound much like what you're looking for. It's a stripe across mirrored pairs, so like RAID 1 you still lose 50% of storage space to overhead. I highly doubt this should interest you.

    - RAID 5 is really the sweet spot for home and small business RAID. It requires a minimum of 3 disks for an array but can include more. You only lose storage equal to one disk's capacity to overhead, so the more disks you use the less capacity you lose! For example with three disks you lose 33%, with four disks 25%, with five disks 20%, etc. It's not as fast as RAID 0 [non-fault-tolerant striping] or RAID 10 but is faster than RAID 1. (See the mdadm sketch after this list.)

    - RAID 6 has all the benefits of RAID 5 but increases the tolerance from one failed disk to two, and requires a minimum of 4 disks. The storage capacity lost to overhead is equivalent to two disks' capacity, so it's really not practical in arrays with fewer than eight disks: a six-disk RAID 6 array loses 33% to overhead, eight disks lose 25%, ten disks lose 20%, etc.
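
    To give a feel for how approachable mdadm is, here's a minimal RAID 5 creation sketch (device names are placeholders; afterwards you'd create a file system and mount it, e.g. via the OMV web UI):

    mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sda /dev/sdb /dev/sdc
    cat /proc/mdstat  # watch the initial sync progress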


    Good luck!

    ...I want to access the share from a docker image (as a volume).

    Unfortunately, whatever I try fails. I have tried, among other things, the RemoteMount plugin but I'm not sure this is needed.

    @jboesh Hopefully the solution from ryecoaaron will resolve this issue for you... Alternatively: have you considered binding the storage directory from your Docker volume to a shared directory on your NAS? You said you "want to access the share from a docker image", but inversely if your Docker image was bound to a NAS shared folder, the NAS would handle all the CIFS configuration and RemoteMount would connect to it there.
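
    In compose terms that inversion is just a bind mount from the shared folder into the container. A sketch (the host path is a placeholder for wherever your shared folder lives on the NAS):

    volumes:
      - /srv/dev-disk-by-label-data/myshare:/data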

    If I format the drive to EXT4, how problematic is seamless file transfers between Windows and Linux machines? Would installing Ext2Fsd really make it effortless?


    ...Would it be recommended to wipe and format the drive and add with Ext2Fsd all my files to an EXT4 drive? Should I do this with the external drive connected to the RasPi or PC?

    qu4ntumrush there is a middle-man that will take care of most of these issues for you: "CIFS" (Samba). Hosting a drive with NTFS or exFAT on a Linux host is always considered a last-ditch option... EXT4 will likely be the lowest-maintenance, minimal-complexity format for what sounds like your use case. If you ever had an emergency need to connect that EXT4 disk directly to your Windows PC you could do so with Ext2Fsd or other Windows de-stupefying options 😜

    "CIFS" (Samba) will do all the hard work for you once your disk is formatted to an ideal type [i.e. EXT4 as recommended above]. Windows does *not* need the ability to read/write the desired disk format; it only needs to reach the network host, which is protocol CIFS provided by the service Samba on the NAS device. This protocol/service then mediates all the reads/writes to the non-Windows compatible disk format. It'll take all the guess work out for you on the user end -- and not require a guessing game of software compatibility concerns for you from the Redmond rejects!

    I migrated from bare metal to linuxserver/plex a while ago and have had *no complaints!!* Their compose file template is:
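
    From memory it looks roughly like the following (treat this as a sketch and verify against the link below, since their template changes over time; the PUID/PGID/TZ values are examples):

    services:
      plex:
        image: lscr.io/linuxserver/plex:latest
        container_name: plex
        network_mode: host
        environment:
          - PUID=1000
          - PGID=1000
          - TZ=Etc/UTC
          - VERSION=docker
        volumes:
          - /path/to/library:/config
          - /path/to/tvseries:/tv
          - /path/to/movies:/movies
        restart: unless-stopped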


    ...but their full guide/instructions can be found at:

    https://hub.docker.com/r/linuxserver/plex


    Definitely set up a "config" folder in a path of your choosing [this is *not* clearly outlined where it otherwise says "/path/to/library:/config"... it should read "/path/to/library/config:/config"], and I recommend a user/group specific to your Plex service. As long as you do that I expect you'll encounter no issues as I likewise have not!

    I'm sorry to ask this, but how can I get my old password? I can bring up my powershell, and type my: SSH xxx@xxxxxxxx, but there it asked for a password. It's the password I don't have? I'm totally lost.

    Dan

    timlab55 your SSH user/pwd doesn't *have to* correspond with your OMV login account: think back to the operating system you installed OMV on (core ISO or Debian with additional repos?) If you had an elevated/sudo user in the core OS [whether or not it matches up with OMV users] try that!