Posts by kcallis

    SnapRaid and mergerfs are different things and do not interoperate with each other. So if you aren't using mergerfs you don't need to pay attention to suggestions about its use.


    Some programs that run in Docker containers have container-side /config directories that will cause a lot of grief for SnapRAID unless they are excluded. One example is Plex's /config. Another is Smokeping. And there are many more. Rather than discover and exclude these one by one, I found it easier to put them all in one folder, confined to one drive, and exclude that folder. This leaves them all unprotected by SnapRAID, but I can live with that.

    So for instance, my appdata currently resides in /srv/dev-disk-by-uuid-1d060b23-6aec-4855-85e7-c0cb380c0d19/appdata. I need to exclude that folder in /etc/snapraid.conf and also reference the full path directly when I am configuring a container (i.e., no symlinks).
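
    For reference, the exclusion boils down to a couple of lines in /etc/snapraid.conf, roughly like this (the d1 label is just an example; the data line should match whatever is already in the config for that drive):

    Code
    # data disk that holds the appdata folder
    data d1 /srv/dev-disk-by-uuid-1d060b23-6aec-4855-85e7-c0cb380c0d19/
    # skip the container configs on every data disk during sync/scrub
    exclude /appdata/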

    I have tried to use a symlink, but with mixed results. Sometimes I can, for instance, use /srv/appdata/foo as my host path (which is symlinked to /srv/1abd74ac-84b5-4f06-a458-d5d87ecd6e1e/appdata/) --> /config and it works just fine. On the other hand, on some containers I have to use the full path /srv/1abd74ac-84b5-4f06-a458-d5d87ecd6e1e/appdata/bar --> /config, or even /srv/dev-disk-by-uuid-1d060b23-6aec-4855-85e7-c0cb380c0d19/appdata/bar --> /config. Each container acts a little differently!
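
    To make the flip-flopping concrete, these are the variants I end up cycling through (just a sketch; foo-image and bar-image are placeholders for whatever the container actually runs):

    Code
    # the symlink behind the "short" path
    ln -s /srv/1abd74ac-84b5-4f06-a458-d5d87ecd6e1e/appdata /srv/appdata

    # sometimes the symlinked path works...
    docker run -d --name foo -v /srv/appdata/foo:/config foo-image

    # ...and sometimes only the pool path or the full on-disk path does (one or the other, not both)
    docker run -d --name bar -v /srv/1abd74ac-84b5-4f06-a458-d5d87ecd6e1e/appdata/bar:/config bar-image
    docker run -d --name bar -v /srv/dev-disk-by-uuid-1d060b23-6aec-4855-85e7-c0cb380c0d19/appdata/bar:/config bar-image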


    Interestingly enough, I have never created a mergerfs folder. When looking at tutorials on using SnapRAID, I never saw an example which used mergerfs. I did read about excluding directories in the /etc/snapraid.conf file (which I didn't know about) as a way of dealing with the appdata folder.

    So I have removed all of my shared folders because I wanted to simplify the structure. I am still perplexed by how the paths behave. For instance (also in the above posting), I have a directory in the pool called appdata (/srv/1abd74ac-84b5-4f06-a458-d5d87ecd6e1e/appdata), but appdata is actually located at /srv/dev-disk-by-uuid-1d060b23-6aec-4855-85e7-c0cb380c0d19/appdata.


    When I am spinning up a container, I try to use the following:


    Code
    /srv/1abd74ac-84b5-4f06-a458-d5d87ecd6e1e/appdata -> /config

    I start the container, and even though it says that it is running, I am not able to access it. If I change the container to the following:

    Code
    /srv/dev-disk-by-uuid-1d060b23-6aec-4855-85e7-c0cb380c0d19/appdata -> /config

    and re-deploy it, I am able to access the container and life is wonderful.


    I have checked the permissions and everything is go, so I am wondering why I can't use the shorter path?
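
    A quick sanity check I assume is worth running here, to see what is actually behind each path (standard tools, nothing OMV-specific):

    Code
    # which filesystem does each path really live on?
    findmnt -T /srv/1abd74ac-84b5-4f06-a458-d5d87ecd6e1e/appdata
    findmnt -T /srv/dev-disk-by-uuid-1d060b23-6aec-4855-85e7-c0cb380c0d19/appdata

    # compare ownership and permissions as the container will see them
    ls -ld /srv/1abd74ac-84b5-4f06-a458-d5d87ecd6e1e/appdata \
           /srv/dev-disk-by-uuid-1d060b23-6aec-4855-85e7-c0cb380c0d19/appdata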

    Normally ZFS is my go-to, but considering that I am using OMV strictly for media purposes, it is not important to have ZFS in play. On the other hand, SnapRAID is another issue. I know that SnapRAID support will be there somewhere down the line, but right now it is still in the pipeline. So that is a major reason to stay with OMV 5 for the time being.

    How do I get rid of shared folders under SnapRAID? I removed all of the directories from the CLI, but when I look at Shared Folders, it still shows the folders and I am not able to remove them. I thought that shutting down all of the containers would release the folders (or at least the pointers, since the directories are gone), but no dice. I have turned off all of the file sharing services, like CIFS, NFS, etc., and still can't delete the shared folders.
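
    If I am reading the OMV documentation right, the shared folder definitions live in the config database and can at least be listed from the CLI, something like this (treat it as a sketch; I have not dared to delete anything there by hand yet):

    Code
    # list the shared folder entries still present in the OMV database
    omv-confdbadm read "conf.system.sharedfolder" | python3 -m json.tool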

    I believe that I am ready to re-install once again, now that I have a better understanding of things. I am running under Proxmox and OMV 5 has worked pretty well. I had initially upgraded to OMV 6 (and the performance was not bad at all), but downgraded because I felt that I needed the Proxmox kernel (because I wanted to make use of ZFS). After downgrading, I realized that because of the drive enclosure I was using (a USB3/eSATA enclosure), I was not going to be able to use ZFS. Nevertheless, using SnapRAID I was able to make use of my enclosure, and life was good with setting up docker.


    So I decided that I wanted to blow away my current layout and start anew. Since there haven't been any real issues with using OMV 6 (or so I have read), are there any pitfalls if I opt to upgrade to OMV 6? I know that I will not be able to use the Proxmox kernel, but considering that I am not making use of ZFS, am I losing any performance by using the vanilla kernel? I figure I could start setting up OMV 6 now so that when it moves from alpha/beta to mainstream I am ready for the rollout.


    Any thoughts about that, or should I stay with OMV 5 and wait?

    It has been a couple of weeks, but I have come upon another issue. Or maybe it is not an issue and I am just somewhat confused. I make use of Portainer to handle all of my docker containers (most of the time... sometimes I need to just start one up on the command line).


    Code
    root@nas-01:/srv# ls -l
    total 40
    drwxr-xr-x 4 root root 4096 Oct 2 23:37 1abd74ac-84b5-4f06-a458-d5d87ecd6e1e
    drwxr-xr-x 5 root root 4096 Oct 3 08:57 dev-disk-by-label-media
    drwxr-xr-x 10 root root 4096 Oct 6 12:55 dev-disk-by-uuid-1d060b23-6aec-4855-85e7-c0cb380c0d19
    drwxr-xr-x 4 root root 4096 Oct 2 23:37 dev-disk-by-uuid-2e164b03-9001-464b-bdb7-fef2a0b05ff1
    drwxr-xr-x 3 root root 4096 Sep 10 00:50 dev-disk-by-uuid-a3f78985-a6ea-43b3-8753-895c0e249b15
    drwxr-xr-x 8 root root 4096 Oct 2 23:37 dev-disk-by-uuid-cbf8c644-5871-4932-ab87-382b311cb786
    drwxr-xr-x 5 root root 4096 Oct 2 23:37 dev-disk-by-uuid-d02056de-d1d8-4046-bb52-2884b2847bfb

    When I try to, for instance, bind (host) /srv/1abd74ac-84b5-4f06-a458-d5d87ecd6e1e/Configs to (container) /config, I tend to have issues. On the other hand, if I bind /srv/dev-disk-by-uuid-1d060b23-6aec-4855-85e7-c0cb380c0d19/Configs to /config, life is groovy. Am I supposed to use the latter method, or did I make a mistake in my configuration of SnapRAID? I would think that I should be able to bind from /srv/1abd74ac-84b5-4f06-a458-d5d87ecd6e1e and not be concerned with where the directory is located on the drives. I find myself having to constantly ssh into the host to get the correct directory for Configs (i.e. /srv/dev-disk-by-uuid-1d060b23-6aec-4855-85e7-c0cb380c0d19/Configs) as opposed to /srv/1abd74ac-84b5-4f06-a458-d5d87ecd6e1e/Configs.


    Of course, this could just be my misinterpretation and I am not understanding correctly. Any pointers would be greatly appreciated!
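
    For completeness, this is roughly how the bind looks in a Portainer stack when I fall back to the full on-disk path (a sketch only; the service name and image are placeholders):

    Code
    version: "3"
    services:
      myapp:
        image: placeholder/image   # whatever the container actually is
        volumes:
          # the full on-disk path works reliably; the /srv/1abd74ac-... path often does not
          - /srv/dev-disk-by-uuid-1d060b23-6aec-4855-85e7-c0cb380c0d19/Configs:/config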

    Thanks for the pointers to the Extras files. I am wondering if my attempt to use the pool directory might be causing the issues.


    In attempting to create docker storage, I created a directory in pool-01. I am thinking that maybe I should just create a directory on one of the uuid devices instead, because when synced, the docker storage will end up in pool-01 anyway. Just a thought...

    I removed docker from the GUI and checked to see if docker.service was running, which it wasn't. Where is the script that installs docker from the GUI? Maybe I can run it through shellcheck, see how the installation script works, and find where it fails.

    So I don't need to create a shared folder? Actually, I did follow your instructions and still no joy! I tried to copy the error message, but was not able to do so... But there is still an issue with installing docker-ce.
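
    Even without copying it out of the web UI, I assume the actual error behind the failed docker-ce install should be visible with the usual Debian/systemd tools (nothing OMV-specific):

    Code
    # see why docker-ce / dockerd is unhappy after the failed install
    systemctl status docker
    journalctl -u docker --no-pager | tail -n 50

    # and the package manager side of the failure
    tail -n 50 /var/log/apt/term.log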

    What are the filesystem permissions and ownership of that directory (not what you see or set for the shared folder in OMV)?


    Also, you may run into problems with SnapRaid throwing a warning flood at you if you run a sync without excluding some, most, or possibly all of that shared folder.

    The permissions are:


    drwxrwsrwx 3 root users 4096 Sep 9 19:55 Docker


    I changed the permissions to Administrator (RWX), Users (RWX) and Others (RWX) and still no joy. Of course, I am able to install at /var/lib/docker, and I thought maybe I could do an ln from /var/lib/docker/containers to a shared folder and create a shared folder for configs, but I am not ready to try that one.


    I have just started trying my hand at SnapRAID after I found out that I could not make use of ZFS. I have not tried to do a snapraid sync, because although I have created the shared folder, as of this moment I have not added any content to it. So could you explain how to exclude the shared folder (or shared folders)?

    I am running OMV 5 under Proxmox. I have attempted to install docker under omv-extras, but when I attempt to change docker storage from "/var/lib/docker" to "/srv/1abd74ac-84b5-4f06-a458-d5d87ecd6e1e/Docker", docker-ce fails to install. I am trying to move docker storage to my SnapRAID pool. I have the permissions set up with Administrator (RW), Users (RW) and Others (RO) for the Docker shared folder. I always seem to screw up permissions, and I am hoping this is just a problem with permissions.
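
    If I understand it correctly, that docker storage field ultimately just changes Docker's data-root, which done by hand would amount to roughly this in /etc/docker/daemon.json (a sketch of the equivalent, not the exact file OMV-Extras generates):

    Code
    {
      "data-root": "/srv/1abd74ac-84b5-4f06-a458-d5d87ecd6e1e/Docker"
    }

    followed by a systemctl restart docker.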

    ZFS needs SATA drives (not USB drives) to create pools.

    Maybe I needed to be clearer. The connection is a USB3 connection to a Mediasonic enclosure which houses four 4TB SATA drives. I had no problem with the Proxmox host seeing the 4 drives and making them into a raidz-1 pool. The drives are seen by OMV, and I can even create a filesystem on each drive, but I can not create a zpool (the GUI doesn't even see the drives when I attempt to create a zpool).

    I set up a passthrough of my 4 USB drives in an enclosure. OMV sees the 4 drives, which I have quick-wiped. The problem that I seem to have is that when I try to create a zpool, ZFS is not seeing any of my drives. I also have a fourth USB 3 drive plugged into the machine, and I am able to create an ext4 filesystem on that drive with no problems. Am I missing something? I have tried using the GUI as well as the CLI and no joy. ;(
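
    In case it points at something obvious, this is the kind of check I assume makes sense before blaming the plugin (the usb-DRIVEn names are placeholders; mine will differ):

    Code
    # how do the passed-through drives show up inside the OMV guest?
    lsblk -o NAME,SIZE,TRAN,MODEL
    ls -l /dev/disk/by-id/

    # dry-run a pool by hand to see the actual error (-n only prints what would be done)
    zpool create -n tank raidz1 /dev/disk/by-id/usb-DRIVE1 /dev/disk/by-id/usb-DRIVE2 /dev/disk/by-id/usb-DRIVE3 /dev/disk/by-id/usb-DRIVE4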

    I used to run OMV 4 as a VM before docker became useful on Proxmox, just to get to know OMV but with no real goals. Recently, I decided to fire up my Proxmox server and created an Ubuntu VM so I could use it as a docker platform. I have now decided that I want to move from Ubuntu to OMV 6. I have set up a 32TB zpool under Proxmox and was wondering: if I install OMV as a guest, is there a way to allow OMV access to the zpool as native storage, and can I manage the zpool from OMV, or would I have to, for instance, ssh into Proxmox to create a new dataset, at which point OMV could make use of the zpool?

    I attempted to spin up OMV 5 under Proxmox 6. When I defined the KVM VM, I only created a 10G drive to act as the boot drive. After starting the VM, I attempted to update the system from the web interface, which failed. I then ssh'd into the VM and tried an apt upgrade, which also failed. I thought that there was some issue with my network connection, but pinging external hosts had no problems.


    I took a look at the / filesystem and realized it was mounted "ro". I had noticed while installing that there was a failure to connect to any of the repositories, but I wrongly assumed it was a minor problem and skipped installing a repository mirror. I now realize that there was some issue going on and I didn't notice it. I set up the VM to use one of my VLANs, and it does get its address from the VLAN DHCP server, so I am at a loss.


    I attempted to re-install, but once again it fails when "configuring the package manager". There would seem to be a network problem at that point. Needless to say, when I skip the package manager section, the install does finish, but at that point the drive is in ro mode.
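
    I assume the next step from the console is to remount and check the kernel log for why it went read-only, something along these lines (plain Debian commands, nothing OMV-specific):

    Code
    # put the root filesystem back into read-write mode and retry the update
    mount -o remount,rw /
    apt update && apt upgrade

    # look for the reason the kernel (or the installer) left it read-only
    dmesg | grep -iE "remount|read-only|i/o error"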

    Any pointers would be greatly appreciated!