Posts by StanleyAccrington

    You're right, of course. I did type "docker image pull linuxserver/mariadb:latest". I also tried pulling the image in Portainer without a tag, and it still failed. I log in to PuTTY as user "pi", to OMV as user "admin" and to Portainer as user "admin". I don't know if this is relevant.
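    On an ARM board it can be worth confirming the architecture before pulling, since a registry failure and a missing-architecture failure can look similar. A sketch of the checks I would run over SSH (image tag as above; this assumes Docker itself is healthy):

```shell
# Check what architecture Docker will request from the registry
uname -m                                    # armv7l (32-bit) or aarch64 (64-bit)
docker version --format '{{.Server.Arch}}'

# Pull with an explicit tag, then confirm the image actually arrived
docker image pull linuxserver/mariadb:latest
docker image ls linuxserver/mariadb
```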

    I am running a headless Raspberry Pi from my Windows PC. The DNS changes I made were to the network adapter on the PC. It occurs to me that the DNS changes should instead be made on the Ethernet interface of the Pi, or on the router.

    I had "Obtain DNS server address automatically" configured on my Wi-Fi adapter. I changed it to OpenDNS (preferred 208.67.222.222, alternate 208.67.220.220), then flushed the DNS cache. I still got the same problem.
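    Since the Pi does its own lookups when Docker pulls, the DNS settings that matter are the ones the Pi itself uses (from its router/DHCP), not the Windows adapter's. A quick way to see what the Pi is actually using, run over SSH (a sketch, assuming standard Raspberry Pi OS tooling):

```shell
# Which resolver is the Pi actually using?
cat /etc/resolv.conf

# Can the Pi resolve the Docker registry host?
nslookup registry-1.docker.io
```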

    My understanding is that the directory /srv/dev-disk-by-label-boot is linked to the symlink /dev/disk/by-label/boot in the <mntent> entry of the OMV file config.xml. The mount point is specified in the file fstab. There are two entries for the boot partition in fstab: one (under >>> openmediavault) gives the mount point as /srv/dev-disk-by-label-boot, and the other gives /boot. Which one does OMV take as the mount point?
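    For what it's worth, fstab happily mounts the same filesystem at two mount points, so both entries are honoured: /boot stays the Raspberry Pi OS mount, while OMV references the filesystem through its own /srv/... path. A hypothetical fragment showing the shape of the two entries (PARTUUID invented for illustration):

```shell
# Hypothetical /etc/fstab fragment
PARTUUID=12345678-01     /boot                        vfat  defaults         0  2
# >>> [openmediavault]
/dev/disk/by-label/boot  /srv/dev-disk-by-label-boot  vfat  defaults,nofail  0  2
# <<< [openmediavault]

# findmnt lists every mount point a device currently has:
#   findmnt -S LABEL=boot
```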

    I issued the command sudo omv-firstaid, selected network interface eth0, and configured IPv6 for that interface with a stateful address and no WOL.

    I got

    Error #0:
    OMV\ExecException: Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C.UTF-8; omv-salt deploy run --no-color hosts 2>&1' with exit code '1': raspberrypi: Data failed to compile:
    ---------- Rendering SLS 'base:omv.deploy.hosts.10default' failed: Jinja error: hostname: The value 'raspberrypi.local' is not a valid hostname.


    Did I do anything wrong?
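    The error message itself suggests the likely cause: a hostname label may not contain a dot, so 'raspberrypi.local' fails validation, whereas the hostname field expects just the short name ('raspberrypi') with 'local' going in the separate domain field. The validation can be sketched as a simple pattern check:

```shell
# A single hostname label may contain only letters, digits and hyphens -- no dots
check() { echo "$1" | grep -Eq '^[A-Za-z0-9]([A-Za-z0-9-]*[A-Za-z0-9])?$' && echo valid || echo invalid; }

check raspberrypi         # valid
check raspberrypi.local   # invalid: the dot makes it an FQDN, not a label
```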

    On a Raspberry Pi 4, I moved the root filesystem from the partition on the SD card to the hard disk (changing the PARTUUID in the boot command line and in /etc/fstab). I left the boot partition on the SD card and deleted the root filesystem partition.
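    The two edits for that move can be sketched like this (PARTUUID invented; on Raspberry Pi OS the boot command line lives in /boot/cmdline.txt):

```shell
# 1. Find the new root partition's PARTUUID
blkid /dev/sda1        # e.g. PARTUUID="abcdef01-02"

# 2. /boot/cmdline.txt -- point root= at the new partition:
#    ... root=PARTUUID=abcdef01-02 rootfstype=ext4 ...

# 3. /etc/fstab -- the root entry must match as well:
#    PARTUUID=abcdef01-02  /  ext4  defaults,noatime  0  1
```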

    When I booted, everything was fine and I was able to browse the OMV GUI.

    Since I had two SD cards, I decided to do the same on the second one (I couldn't clone from the first one, since it was larger than the second).

    However, when I tried to boot from this second card, although everything else was fine, I could not browse the GUI and got the error 'nginx failed (97: Address family not supported by protocol)'.
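    Errno 97 from nginx usually means it tried to open an IPv6 listening socket ([::]:80) while the kernel on that boot has IPv6 disabled, so comparing the kernel command lines and sysctls of the two cards may explain the difference. Checks I would compare on both cards (a sketch, assuming a standard OMV nginx layout):

```shell
# Does nginx try to bind IPv6?
grep -r 'listen' /etc/nginx/sites-enabled/      # look for [::]:80 entries

# Is IPv6 disabled on this boot?
cat /proc/sys/net/ipv6/conf/all/disable_ipv6    # 1 = disabled
grep -o 'ipv6.disable=1' /boot/cmdline.txt      # present only if disabled at boot
```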

    As far as I can see, the boot partition contains exactly the same files on both SD cards, and they obviously use the same filesystem on the hard disk.


    I really can't understand why the two cards behave differently.

    I created a MariaDB Docker container on the Raspberry Pi. A Docker volume for the database data was created automatically; this volume resides in /var/lib/docker/volumes/.

    Since the filesystem root is located on a partition of the SD card, the database is also located on the SD card.


    This presents two problems. One, since the DB may get very large, it may run out of space. Two, since there will be a lot of writes to the DB, the SD card might wear out.


    Should I move the filesystem root from the SD card to the large hard disk?
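    Two common alternatives to moving the whole root filesystem, assuming the data disk is mounted at /srv/dev-disk-by-label-Data (both paths below are illustrative assumptions; linuxserver images generally keep their data under /config):

```shell
# Option 1: bind-mount the database directory onto the hard disk
docker run -d --name mariadb \
  -v /srv/dev-disk-by-label-Data/mariadb:/config \
  linuxserver/mariadb:latest

# Option 2: move Docker's entire data root (all volumes included)
# /etc/docker/daemon.json:
#   { "data-root": "/srv/dev-disk-by-label-Data/docker" }
# then: systemctl restart docker
```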

    I think the answer is the filesystem table, /etc/fstab.


    This has been edited by OMV and ties my filesystem identifier /dev/disk/by-label/Data to the mount point for that filesystem, /srv/dev-disk-by-label-Data.

    So when the filesystem is mounted, the system knows which filesystem it is, and the filesystem is identified via a symlink to /dev/sda1.

    I understand that if I label my filesystem Data then, on boot, the physical disk is identified as /dev/sda, the partition on it as /dev/sda1, and a symlink to /dev/sda1 is created at /dev/disk/by-label/Data.

    Thus persistent block device naming.

    Then, when the filesystem is mounted, a directory is created under /srv and the mount point is /srv/dev-disk-by-label-Data/.


    I don't understand how this mount point is linked to /dev/disk/by-label/Data, which in turn is linked to /dev/sda1.
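    As I understand it, the link is made at mount time rather than through the directory tree: mount resolves /dev/disk/by-label/Data (a udev-created symlink) down to /dev/sda1 and attaches that block device at the /srv/... directory; afterwards the association lives in the kernel's mount table, not in any symlink. The symlink half of the scheme can be reproduced anywhere, e.g. in /tmp:

```shell
# Recreate the udev naming scheme in /tmp to see how the symlink resolves
mkdir -p /tmp/demo/dev/disk/by-label
touch /tmp/demo/dev/sda1
ln -sf ../../sda1 /tmp/demo/dev/disk/by-label/Data

readlink -f /tmp/demo/dev/disk/by-label/Data    # -> /tmp/demo/dev/sda1

# On the real system, the mount-table half is visible with:
#   findmnt /srv/dev-disk-by-label-Data
```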

    I have created a Docker container from the piwigo image. If I switch into the container using docker exec -it <container-name> bash, I can see the container's filesystem. It seems to me that this is an overlay of the host's filesystem, so that the container directory /config/www/gallery/upload/xxxx corresponds to /var/lib/docker/volumes/piwigo_data/_data/www/gallery/upload/xxxx on the host. (This is the Docker volume piwigo-data_data, created before I created the container.)

    Here xxxx corresponds to the photos I have uploaded to Piwigo.


    Am I correct?
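    Rather than guessing from inside the container, the mapping can be read directly from Docker (the container name 'piwigo' is an assumption here):

```shell
# List each mount as host-path -> container-path
docker inspect -f \
  '{{ range .Mounts }}{{ .Source }} -> {{ .Destination }}{{ "\n" }}{{ end }}' \
  piwigo
```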

    Thank you, I am beginning to understand.


    Doesn't the fact that you can point a shared folder at an existing folder on the filesystem mean you must know what the existing filesystem is called? And, since it isn't yet a shared folder, you can't see it from an SMB client. So you need another mechanism to view the filesystem which doesn't involve shares. And this other mechanism could just as easily be used to create the filesystem structure you require.


    Of course shares must be used for the basic transfer of data between client and server. I'm just exploring why shares are needed for constructing the fs hierarchy.

    As you can see, I am a beginner, so I am just trying to understand.

    It seems that OMV does create folders via 'add shared folders'. These are folders created as children of /srv/dev-disk-by-label-Data/.

    They appear in the Linux directory hierarchy whether they use SMB or not. The only exception is if you use the sharerootfs plugin. Perhaps the filesystem root is not a directory.

    However, you advise not to use this plugin, which, I think, means there should be some folders hidden from Windows Explorer (for example).

    I don't believe that OMV is just a NAS (storage for data to be moved in and out by a client).