Posts by thenorman138

    Thank you for this. I didn't realize how much of UnionFS was actually shipped via mergerfs, so that makes sense. I hadn't noticed any real difference in the UI other than the name.


    As big a deal as I made about making sure I used "Most Free Space" in my first setup, I somehow allowed it to be "Existing Path - Most Free Space" this time, so I totally screwed up there. Thanks for pointing that out; I made the change and rebooted the system, and it seems to be working now. I do still have a 15GB threshold on the drives so that no single drive gets too full. I do like writing to the pool itself, though, and my containers are set up for that, so I want to keep all of that as streamlined as possible.
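    In case it helps anyone else hitting the same thing: mergerfs exposes its runtime settings as extended attributes on a control file at the pool root, so the create policy and the free-space threshold can be checked and changed without remounting. A rough sketch, assuming the pool is mounted at /srv/mergerfs/norman_pool2 and that I've remembered the xattr key names correctly from the mergerfs docs:

    Code
    # getfattr/setfattr come from the 'attr' package on Debian
    # read the current create policy and minimum-free-space threshold
    getfattr -n user.mergerfs.category.create /srv/mergerfs/norman_pool2/.mergerfs
    getfattr -n user.mergerfs.minfreespace /srv/mergerfs/norman_pool2/.mergerfs

    # switch to "Most Free Space" and a 15G threshold at runtime
    setfattr -n user.mergerfs.category.create -v mfs /srv/mergerfs/norman_pool2/.mergerfs
    setfattr -n user.mergerfs.minfreespace -v 15G /srv/mergerfs/norman_pool2/.mergerfs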


    I think this will resolve the issues though, thanks!

    Hope this is the right section to post this....


    My old server build with OMV 5 seemed to go off without a hitch, but I've recently decided to migrate (most of) the data from it to a new custom build that will be my daily driver, and use the old one as a backup. I started fresh with OMV 6 on the new build, which immediately meant using mergerfs instead of unionfs (not a huge difference, honestly). In setting all of this up I tried to replicate my old setup as closely as I could: three 10TB drives set up for data and one 10TB drive for parity. I've been using rsync for the last few days to move just my media volume over.


    It's gone fine, but today it hit about 10TB and it seems like it hasn't spread the data out, and I'm getting out-of-space errors now. I'm rsyncing over to the mergerfs pool on the new server, which is 27TB usable. If I SSH into the server and run ```df -H```, it shows the pool as 10TB used and 20TB available, but the other two data drives have hardly any data on them. It seems like it ran out of parity space and I don't know what to do now. All four drives are the same size, so why is it now stalling on this rsync and saying it's out of space?


    If I run a snapraid sync it says it can't complete because it's out of space as well.
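    For reference, this is roughly how I'm comparing what the pool reports against what the individual drives and SnapRAID report (the branch paths below are placeholders for my actual per-disk mounts):

    Code
    # the pool-level view that rsync sees
    df -H /srv/mergerfs/norman_pool2

    # the per-branch view -- shows whether writes are being spread across the
    # member disks or piling up on a single one (substitute your own branch mounts)
    df -H /srv/dev-disk-by-label-data1 /srv/dev-disk-by-label-data2 /srv/dev-disk-by-label-data3

    # SnapRAID's own per-disk usage summary, including the parity disk
    snapraid status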


    What should I do here?

    Good points. I've done a lot more testing this morning just to see what is affected by the pool (doing scp on files in and out of the pool just to see), and scp definitely seems to have faster transfers overall. I've moved to a new rsync command:

    Code
    rsync -hazP --stats -e "ssh -T -c aes256-gcm@openssh.com -o Compression=no -x" root@10.10.10.15:/srv/27829c9c-dbc1-4408-a111-56dbcd8f0ec0/media/ /srv/mergerfs/norman_pool2/media

    and I haven't seen much difference yet, but I'm also seeing that CPU and memory on both servers are still wide open.
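    Since this first pass is really a one-time bulk copy rather than a true sync, I'm also considering a plain tar pipe over the same ssh cipher to take rsync's per-file handshakes and the delta algorithm out of the picture; something along these lines (same paths as above), with a final rsync pass afterwards to catch anything it missed:

    Code
    # pull the whole tree as a single tar stream over ssh (run on the destination);
    # no per-file round trips, no compression, no delta checksums
    ssh -T -c aes256-gcm@openssh.com -o Compression=no root@10.10.10.15 \
        'tar -C /srv/27829c9c-dbc1-4408-a111-56dbcd8f0ec0/media -cf - .' \
      | tar -C /srv/mergerfs/norman_pool2/media -xf -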

    I built a new OMV server to take over as my daily driver and convert the existing one into my backup, so my current step is moving all the main data from the old unit into the new one. In doing so, I wanted to test out the new 10Gb NICs that I installed in each, and I'll use this link for all future backups and moves.


    Iperf3 showed speeds of about 9.2Gb/s, which I was happy about. However, when running the main rsync command and moving a few TB of data from the old system into my new mergerfs pool, I noticed that speeds on small files were anywhere from 3 to 7 MB/s, and on larger files roughly 15 MB/s. That is considerably slower than expected; I was hoping for no lower than 100 MB/s.


    My mergerfs pool is built from four Seagate SAS drives, 10TB each (three data and one parity), and I'm using SnapRAID on top of mergerfs. The drives are enterprise SAS at 7200 RPM. The servers are running (source) dual Xeon L5640 CPUs with 32GB RAM and (destination) an Intel i3-12100 with 32GB RAM. During the rsync, neither CPU was doing much, both below 7% usage, and RAM was around 20% on each, so nothing seemed to struggle at all.
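    Since CPU and RAM look basically idle on both ends, the next thing I want to watch during a transfer is whether the destination disks themselves (or the FUSE layer on top of them) are the bottleneck; a quick sketch using iostat/iotop:

    Code
    # iostat comes from the sysstat package, iotop from the iotop package
    apt-get install -y sysstat iotop

    # extended per-device stats every 5 seconds during the rsync; high %util or
    # long await times on the destination drives point at the disks, while idle
    # disks alongside a busy mergerfs process point at FUSE or ssh overhead
    iostat -x 5

    # per-process view, only showing processes actually doing I/O
    iotop -o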



    My command running through SSH:

    Code
    rsync -harvzP --stats root@10.10.10.15:/srv/27829c9c-dbc1-4408-a111-56dbcd8f0ec0/media/ /srv/27829c9c-dbc1-4408-a111-56dbcd8f0ec0/media/


    Am I missing something glaringly obvious? Basically, what my command (which I'm running from the destination server in a tmux session) is meant to say is: connect to the source server at 10.10.10.15, which is the IP of the 10Gb NIC rather than the server's main interface, grab the 'media' folder from my unionfs pool, and copy that folder's contents into the 'media' folder in this server's mergerfs pool.
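    To narrow down whether the slowdown is the network/ssh, rsync itself, or the mergerfs pool, these are the kinds of one-off tests I have in mind (the file name and the branch path below are placeholders):

    Code
    # 1) raw ssh throughput with no disks involved (run on the destination)
    ssh root@10.10.10.15 'dd if=/dev/zero bs=1M count=4096' | dd of=/dev/null bs=1M

    # 2) one large file copied straight to an underlying branch, bypassing the pool
    rsync -hP root@10.10.10.15:/srv/27829c9c-dbc1-4408-a111-56dbcd8f0ec0/media/somebigfile.mkv \
          /srv/dev-disk-by-label-data1/media/

    # 3) the same file into the pool, without compression and with whole-file
    #    copies (-W), which usually helps on a fast LAN
    rsync -hPW root@10.10.10.15:/srv/27829c9c-dbc1-4408-a111-56dbcd8f0ec0/media/somebigfile.mkv \
          /srv/mergerfs/norman_pool2/media/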


    I ran this through the Rsync module in the OMV web GUI as a task instead, just to see the difference, and got closer to 50MB/s. That still seems low, but the native module in the GUI is at least running a little faster than the SSH version.


    Am I stuck with these speeds or is there a better way to do this?

    I recently installed Nextcloud on my OMV server, and in the process I changed the OMV web GUI port to 81. I did this long ago on my other OMV server with no issues.

    However, today I realized I couldn't access the web GUI at all. I couldn't even change the port in omv-firstaid, as part of it would fail. I then tried to run service nginx start in the shell to see what was going on and got this:


    ```
    Job for nginx.service failed because the control process exited with error code.
    See "systemctl status nginx.service" and "journalctl -xe" for details.
    root@blacklagoon:~# systemctl status nginx.service
    nginx.service - A high performance web server and a reverse proxy server
       Loaded: loaded (/lib/systemd/system/nginx.service; enabled; vendor preset: enabled)
       Active: failed (Result: exit-code) since Mon 2020-12-14 11:11:18 EST; 43s ago
         Docs: man:nginx(8)
      Process: 7954 ExecStartPre=/usr/sbin/nginx -t -q -g daemon on; master_process on; (code=exited, status=1/FAILURE)

    Dec 14 11:11:18 blacklagoon.local systemd[1]: Starting A high performance web server and a reverse proxy server...
    Dec 14 11:11:18 blacklagoon.local nginx[7954]: nginx: [emerg] "server" directive is not allowed here in /etc/nginx/nginx.conf:67
    Dec 14 11:11:18 blacklagoon.local nginx[7954]: nginx: configuration file /etc/nginx/nginx.conf test failed
    Dec 14 11:11:18 blacklagoon.local systemd[1]: nginx.service: Control process exited, code=exited, status=1/FAILURE
    Dec 14 11:11:18 blacklagoon.local systemd[1]: nginx.service: Failed with result 'exit-code'.
    Dec 14 11:11:18 blacklagoon.local systemd[1]: Failed to start A high performance web server and a reverse proxy server.
    ```


    Has anyone run into this issue? I killed the Nextcloud/Let's Encrypt containers and removed the port forward from my router, but that didn't help. I can access all the Portainer and container web GUIs, just not the main OMV GUI.
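    In case anyone hits the same error: the [emerg] line says a server directive is sitting directly in /etc/nginx/nginx.conf at line 67, where nginx only allows it inside the http block. This is roughly how I'd go about narrowing it down and, as far as I understand the OMV 5 tooling, regenerating the OMV-managed config (double-check the omv-salt call before relying on it):

    Code
    # confirm the failing directive and look at the surrounding lines
    nginx -t
    sed -n '55,75p' /etc/nginx/nginx.conf

    # find any server blocks that ended up outside the normal include locations
    grep -Rn "server {" /etc/nginx/nginx.conf /etc/nginx/sites-enabled /etc/nginx/conf.d 2>/dev/null

    # OMV 5 can rebuild its own nginx and php-fpm configuration from its database
    # (my understanding of the omv-salt tooling -- verify before relying on it)
    omv-salt deploy run nginx phpfpm
    systemctl restart nginx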

    Thank you so much for your feedback! I was under the impression that CrashPlan abandoned their personal version and went full-bore into the professional version, which is what I'm using. I may be wrong, though. I'm thinking of just continuing to use rclone and my G Suite cloud storage at this point.

    Hello all,

    I've got my server running OMV 5 and I use it primarily as a NAS and media server, along with backing up all of our home's devices onto it. That gives me a local backup, but I also back up to another server here at home and to the cloud. The issue is that I've got CrashPlan and can't figure out how to get it set up properly on OMV.


    I've set up everything to this point using Docker, and I've got the docker-compose file for jlesage/crashplan, but I can't figure out what to do with the volumes.


    Has anyone been able to do this?
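    For what it's worth, my reading of the jlesage image docs is that it really only needs two volumes: /config for CrashPlan's own settings and cache, and /storage for whatever the app should be able to see and back up. A docker run sketch of how I think the mapping would look (host paths are placeholders for my appdata folder and the shares to back up):

    Code
    # 5800 is the image's built-in web GUI port; read-only is enough for backup sources
    docker run -d --name=crashplan \
      -p 5800:5800 \
      -v /srv/dev-disk-by-label-appdata/crashplan:/config:rw \
      -v /srv/dev-disk-by-label-data1/shares:/storage:ro \
      jlesage/crashplan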

    Hello! I'm on day two of using my new OMV build and I've got most of it configured, but I still have one big issue. I have two empty/new drives mounted and ready to go. However, I have a partially full 3TB NTFS drive from my Windows server build. I want to move the data from this drive to my freshly mounted 3TB drive so that I can then wipe/format it and make it the parity drive in SnapRAID.


    When I run fdisk -l, the drive does show up (as it does in the Physical Disks area of the OMV web interface). However, it doesn't appear under File Systems. I can hit 'Create' and see it in the devices list, but that would wipe it.


    How can I mount the existing drive with the data on it so that I can use Midnight Commander to move the movies off of it and onto the other drive?
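    The manual route I'm looking at, in case it helps anyone searching later, is to mount the NTFS partition read-only from the shell and copy the data off with mc; a sketch, assuming the partition turns out to be /dev/sdd1 (use whatever fdisk -l actually reports):

    Code
    # ntfs-3g provides NTFS support on Debian/OMV
    apt-get install -y ntfs-3g

    # mount the old Windows partition read-only somewhere temporary
    mkdir -p /mnt/ntfs-old
    mount -t ntfs-3g -o ro /dev/sdd1 /mnt/ntfs-old

    # copy the data across with midnight commander, then unmount before
    # wiping the drive and adding it as the SnapRAID parity disk
    mc
    umount /mnt/ntfs-old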