Posts by 1activegeek

    Gotcha, makes sense - about what I expected anyhow. I've noticed a few wonky things trying to go through the proxy. The main oddity: the header logo for some reason doesn't want to load. Not major. The bigger issue is one of the plugins I use extensively - DockerGUI - which has problems with the container start, stop, and restart calls. I'll probably have to do as you did and debug to find where the hard-coded links are, then look at possible updates or live with what I can. Thanks for the help - glad I could at least get it functional.


    +1 for Votdev adding the changes. Is the code available on GitHub so we can submit PRs? Maybe we just put in a basic PR for the changes you've found?

    So I dug into the nitty-gritty of the trailing slash detail you mentioned. It's a mystery to me how many people use such different syntax yet all end up with working configs, but in this scenario, no go. It turns out my location directive was using /omv and my proxy_pass was using http://serverIP; - simply leaving out the trailing slashes was the reason it wasn't working. I'm not a web/proxy guru, so I'm not sure why that never mattered in most of my other scenarios, but it absolutely did here.
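
    For anyone trying the same thing, the working block ended up looking roughly like this - a sketch only, with serverIP standing in for the OMV host, and the include line being the same proxy.conf from my earlier snippet (drop it if you don't use one):

    Code
    location /omv/ {
        proxy_pass http://serverIP/;
        include /config/nginx/proxy.conf;
    }

    With both trailing slashes in place, nginx swaps the /omv/ prefix for / before passing the request on to the backend, which as I understand it is why the slashes mattered so much here.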


    Once I did this, I was able to hit the server - albeit with no login prompt. I assume this is because I hadn't yet looked at modifying the other content you mentioned.


    Do you know whether these are core files that may be updated with OMV updates? Or are they more like design/customization files that aren't likely to change? If they can change with any update, I might be hesitant to do this, as it means another thing to keep track of.

    Thanks for the input, though I'm not sure I'm seeing how this works. The problem is that the OMV web UI doesn't expect the /omv prefix (in your case) in the URL, so I'm getting a 404 Not Found when I hit it. The other thing is that I'm not running nginx IN OMV using the plugin - I'm actually using a docker container running nginx. So I'm not sure how applicable this is, or if there is something different when it's running locally.

    So I've done a lot of searching, but unfortunately everything related to nginx on this site tends to be specifically about setting up the nginx plugin, or about configs for gaining access to other plugins/containers running on OMV.


    What I'm looking to do is actually proxy the connection to the OMV web GUI. I don't plan to make this externally accessible, but I'd like to add it to the list of services I can reach at a single URL with multiple locations: server.local/service1, server.local/service2, etc. Unfortunately, I'm not having any luck with the basics of:


    Code
    location /service1 {
        proxy_pass http://omv.host.name;
        include /config/nginx/proxy.conf;
    }

    Unfortunately I'm just getting started with NGINX after all this time, so I'm not quite familiar with all the options available, or with how to troubleshoot the connection to figure out what additional directives I may need to add to make it function properly.

    Interesting. But is the filesystem unresponsive or unusable while it's rebalancing? If it isn't available, I can imagine that being a huge issue: a failure could happen at any time, say in the midst of a large transfer or while writing new content - then this strikes and the system is unable to process anything. I imagine that can't be the case.


    While I don't like the idea of having to wait an extended time for a re-balance, I do value the ability to adjust accordingly. Thanks for sharing those numbers - that gives some real-world info to compare against. Barring any snafu with SATA errors ;)


    The funny part of it all is - I usually try to plan for the most flexibility, and half the time I never end up needing/using it anyway. So maybe I should just bite the bullet and jump back on the ZFS bandwagon or just keep the status quo that's been working (XFS formatted with MergerFS/SnapRAID). I don't have massive critical data, and most of it is replaceable, just the annoyance factor of re-populating and configuring things more than anything.

    Thanks @tkaiser, this helps. OMV 4 is still "beta" enough that I don't want to make the move yet. I've learned my lesson going bleeding edge on systems that may not be critical for day-to-day operations, but that provide happy-family home services - when those go down, I never hear the end of it! ;)


    While I do understand ZFS a bit more, my main reason for considering BTRFS was the ability to dynamically grow/shrink things. My only big issue with ZFS for my use case is that if I end up adding another disk for growth, I can't just resize the pool. Or if I wanted to take 2 disks out and create a separate storage pool for some other purpose, I can't just rob from one without impact and move them to another. Perhaps things have changed - I was on the ZFS train probably 4-5 yrs ago when I was running FreeNAS.
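
    For context on the grow/shrink point, this is roughly what I understand the process to look like on btrfs - a sketch only, with /srv/pool and /dev/sde as placeholder names:

    Code
    # grow: add a disk to an existing btrfs pool, then rebalance data across it
    btrfs device add /dev/sde /srv/pool
    btrfs balance start /srv/pool

    # shrink: remove a disk again (btrfs migrates its data off before releasing it)
    btrfs device remove /dev/sde /srv/pool

    That kind of online add/remove is exactly the flexibility I felt was missing on my old ZFS setup.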

    Sorry to revive an old thread, but I've been doing a lot of looking around at ideas lately on how to design/setup my system and this thread provided value.


    Currently I'm running fairly solidly with 4x4TB disks formatted XFS and using SnapRAID/MergerFS. I'm converting over to encrypted disks, and will have 4x2TB and 4x4TB disks after the conversion. What I'm trying to understand, though, is whether it makes sense to forgo XFS and convert to BTRFS now. From what I've been reading, it seems to be getting a lot of praise as a more future-proof FS, but on this forum there seems to be a lot of talk of it not being stable enough. That's surprising, as the general consensus I've seen elsewhere is that BTRFS is stable and is the new "hot" thing.


    Anyhow - to the point - what I'm curious about, @vl1969: do you see BTRFS as stable enough to move to as my core FS? You mentioned it here, I think, and it sounds like it's been working pretty solidly for you. If so, do I still need to drop down to the command line to create the BTRFS filesystem across multiple disks? What I've seen so far is that if I just format each disk for BTRFS and mount the multiple disks (testing with the 4x2TB now), then drop to the CLI and run btrfs filesystem show, I end up with each disk listed as its own filesystem - no RAID1, it would appear. With the number of disks I have, would there be any issue trying to run it as RAID10?
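
    For reference, my (possibly wrong) understanding of what the CLI route would look like - a sketch only, with /dev/sd[e-h] standing in for the 4x2TB disks, and obviously this wipes them:

    Code
    # create one btrfs filesystem spanning all four disks, RAID10 for data and metadata
    mkfs.btrfs -f -L pool2tb -d raid10 -m raid10 /dev/sde /dev/sdf /dev/sdg /dev/sdh

    # should now show a single filesystem with 4 devices instead of 4 separate ones
    btrfs filesystem show

    Formatting each disk individually in the GUI seems to give me the "4 separate filesystems" output instead, which is what prompted the question.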


    Last question around this - what are the base options OMV enables by default? You mentioned, I think, bit-rot protection and "CoW"? Any others? I'd like to do it all through the GUI so I make sure I don't screw anything up. And I believe doing it this way avoids the issue you spoke of, where that bug can appear when laying the filesystem on top of the raw disk vs a partitioned disk. Forgive me if some of this is more in the realm of basic BTRFS understanding - feel free to tell me to go RTFM! :)

    I just don't get it... qBittorrent never needs me to open ports, and Deluge doesn't need it either. One is running in docker, the other is not. With rtorrent the port is ALWAYS closed. This means I get horrible throughput.

    Based on this not being related to the topic of the Docker GUI plugin, I'd suggest opening a separate thread or even opening an issue with the Docker container creator/maintainer.


    On the "never needs me to open ports" - It's not necessarily an issue of opening ports in Docker. If you can get connections and downloads to work at all, then it's the first issue of firewall/router ports. If the ports weren't open or functioning inside the docker container, you would get 0 connection. I'd suggest that perhaps the other apps are using a default port that is the same and already opened on your firewall/router, or are using UPnP to open the ports and the other is not. Lastly, make sure if you are "testing" that you are using the same exact torrent file in multiple apps to compare "horrible throughput". Every torrent is different and can't be compared independently or you will no-doubtedly see great variations in perceived performance.

    I don't recall if there is somewhere inside the container to check. I don't believe the logs from OMV will tell you anything. I would just suggest setting up a random container and stopping it, then letting it do its thing and checking the next day. If that stopped container is gone, it's running; if it's still there, you may have an issue. Alternatively, if you have a container you know has an updated image available, monitor the age of that container the next morning.
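
    Something along these lines works as a quick test - the image and container name are just examples:

    Code
    # create a throwaway container and stop it, so the cleanup job has something to remove
    docker run -d --name cleanup-test alpine sleep 3600
    docker stop cleanup-test

    # the next day: if the cleanup ran, this should return nothing
    docker ps -a | grep cleanup-test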

    By the way, did I understand the setup correctly, that this will run every night at 05.00 (24h format) or 5 AM?

    From what I can see in your image - yes, it looks like it's running that way. In cron terms, * = every, and you have asterisks in the remaining fields (day of month, month, day of week), so it runs every day at that time.
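
    For illustration, assuming the fields in your screenshot are minute 0 and hour 5 (I can't read the exact values, so treat this as a sketch):

    Code
    # m h dom mon dow  command
    # the line below runs every day at 05:00 (5 AM)
    0 5 * * *  /path/to/your/task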

    That is one hour after my Watchtower runs every night.

    Yup - same way I have mine running. Watchtower runs at 2 AM, cleanup at 3 AM. The only difference is I run mine once a week. Daily is a bit overkill in most cases, unless you're actively developing or working with a container that's under active development.

    Thank you for this! And sorry for the stupid questions, I see now how stupid I sound!

    Not a problem. Not a stupid question - I'm sure someone else didn't understand either but didn't ask. It's more an education game. If you're newer to docker, there's definitely an uphill learning curve at first; I went through it once upon a time. My biggest suggestion: make a note of the Docker reference documentation links. It's extremely helpful to reference their "man" pages for the docker commands. Oftentimes I've found most of my education comes from their documentation and a little tinkering locally.

    Sorry it seems there might have been some confusion. I'll elaborate:


    • Using the --rm=true feature is a PER-container flag you would use to ensure that when a container goes away, the leftover STUFF from it doesn't linger (this is what @subzero79 was highlighting). Imagine starting a new container every day and never cleaning up the old one - it happens more often than you think, and can really build up after a while. The pesky part specifically is the volumes: most people don't fully understand volumes when they learn about docker, and how they linger unless you specifically remove and clean them up. This helps reduce that by removing the remnant files immediately when the container stops.

      • The --rm flag tells docker that as soon as this container is stopped, it should remove/delete it and its (anonymous) volumes. Hence the idea that this will ensure you don't build up leftover "hanging" files like volumes, containers, etc.
      • The --restart tells docker that if the container is stopped for some reason (restart of host, interrupted service, etc) - it should automatically restart the container. This implies that you may actually stop the container, but want to leave it intact.
      • Your issue stems from trying to use two flags that contradict each other in their purpose. --restart says you want to be able to stop and restart a container, while --rm says you want a container to be removed/deleted as soon as it is stopped.
    • Using the Watchtower container takes care of re-building containers automatically when updates come along - so in my mind there is no real need to run a container with the --rm flag, as you often don't get rid of the container yourself anyway.
    • I do use a script that cleans up all my containers, negating the need for the --rm=true flag. In my opinion, this is the better way to go as it will allow you to still prune and clean up, but not require special flags on the containers either. I thought it had been posted in this thread, but I think I posted about it somewhere else. For posterity, here is the info:

      • docker system prune -f - set this up under Scheduled Tasks. I run mine weekly at 3 AM on Mondays, and by setting the Send Email flag on the task I wake up to an email from OMV on Monday morning with a report of the output. Docker introduced this command not long ago; previously the script consisted of about 3-4 lines of commands to do the same thing that prune now does automatically (a rough sketch of that older approach is below this list).
      • I use a bunch of the LS.io containers, which mostly update on a weekly basis. So of my 12-15 or so containers (probably 8 of them LS.io), I end up with about 2GB worth of data cleaned up every week.
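
    For reference, the older manual cleanup I mentioned was roughly along these lines - a sketch from memory, not my exact script:

    Code
    # remove stopped containers
    docker rm $(docker ps -aq -f status=exited)
    # remove dangling (untagged) images left behind by image updates
    docker rmi $(docker images -qf dangling=true)
    # remove dangling volumes
    docker volume rm $(docker volume ls -qf dangling=true)

    docker system prune -f now rolls most of that into a single command.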

    I stopped trying with it, as I temporarily got around it as indicated (in a not-so-secure manner). The larger issue was around the mono libraries and the plugin not using newer libraries, I believe. For me, I ended up dumping it entirely anyhow, since Amazon no longer allows use of Cloud Drive for unlimited storage at a fixed cost. So I'm still on the hunt for a new, more economical alternative - thus I'm not using Duplicati at this point.

    Well, not sure what happened here, but I'm back up and running again. I don't know how, but perhaps the UUIDs got modified for the disks? I don't believe I had a full copy of the previous and new UUIDs; I thought I matched them up and everything worked. So I then tried fully destroying the current filesystems/disks and re-doing it all. This time the UUIDs seemed to match up correctly and it started with no issue - much faster too, I might add. So we'll see if it stays steady, but for now it seems I'm OK. Bad news: I don't really have a solid root cause.

    Hmmm ... interesting. Not good news in this case, since this is my prod machine. =O I was thinking those labels looked funky with all the \x2d escapes all over. Interestingly though, the other disks that show the same oddities are mounting and working fine on boot without any issue.


    Where would I attempt to change the UUID manually? I'll have to make some backups and test it out. Testing is just a pain, as the machine takes a while to boot with everything going on, and of course it takes my apps down. I noticed this after recently adding a new disk, copying the data over, and removing the bad disk - I'm not sure if it was the case before that. So perhaps the newly introduced disk caused something wacky, since I had to re-order my BIOS boot order too. I'll have to dig in there as well to make sure nothing got reassigned funny.

    So I finally got around to digging through the log. There's a lot going on in it, and I'm not sure I can pick anything good/bad out of it. What I did find that looks problematic, though, are the following lines:
    Sep 10 14:46:21 atlantis systemd[1]: Job dev-disk-by\x2dlabel-disk8.device/start timed out.
    Sep 10 14:46:21 atlantis systemd[1]: Timed out waiting for device dev-disk-by\x2dlabel-disk8.device.
    Sep 10 14:46:21 atlantis systemd[1]: Dependency failed for /srv/dev-disk-by-label-disk8.
    Sep 10 14:46:21 atlantis systemd[1]: Dependency failed for File System Check on /dev/disk/by-label/disk8.


    So it looks like something timed out while trying to load. My thought would be that something is missing or not mounting, but I'm not sure what. I can't seem to find anything related to these device calls any earlier in the journal readout. I'm hesitant to post the whole output, as it seems to contain some semi-sensitive details. It appears that shortly before this, sda1 through sdd1 (disks 1-4, which are not encrypted) mount successfully, and slightly before that, sdj1 (the USB drive with the keys) is mounted properly as well.
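
    If it helps narrow things down, I can run checks along these lines and post the output (disk8 being the label from the log above):

    Code
    # does the kernel actually see a filesystem with that label?
    blkid | grep disk8
    ls -l /dev/disk/by-label/

    # what does systemd think of the device unit, and what depends on it?
    systemctl status 'dev-disk-by\x2dlabel-disk8.device'
    systemctl list-dependencies --reverse 'dev-disk-by\x2dlabel-disk8.device'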



    Any direction on where to look for what might be going wrong or causing this delay/timeout?

    Has something changed recently that would cause auto-mounting to fail? Or could someone point me to logs that could help identify why the disks are no longer mounting automatically at startup? I recently had to replace a disk, and when I restarted the server as part of the process, I noticed the encrypted filesystems are no longer auto-mounting.


    Any ideas or help?

    Aha! Ok, no worries. I thought it was something inherent to the plugin that would actually do it for us automatically. That's why I thought perhaps I was missing something as I didn't see it. Good to know. I can wait for that at this point. No major updates that I should need from there. Thanks for the hard work, and for taking the time to explain the update process when it comes along! :thumbsup:

    Ok, so the update to the .9 version is complete. Everything looks good. Confirming the following:

    • The Plex issue with parsing the URL in the container's default Env Var is fixed - it's no longer getting chopped off (woohoo!!)
    • Icons are now visible like normal icons :)
    • Dashboard service monitor is functioning
    • Nice search option in the top!

    Now the one thing I'm left wondering about ... how does one get upgraded to .10 now? I tried manually updating repos and checking for updates, but the .10 option isn't available. Do I need to do something manually to handle this? I know I have the Docker repo enabled in my OMV Extras section - do I need to disable it now that I'm on .9? I thought the discussions mentioned there would be an automatic transition to the new binary from the new repo?