Posts by MarcS

    thanks for the link, but that process seems to automatically unlock all drives after reboot.

    That would defeat the whole purpose of drive encryption.


    In my solution, there is a manual step, i.e. running a shell script as root.
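    For reference, the script is essentially a loop over `cryptsetup open` calls. A minimal sketch (the device names, mapper names and keyfile path are placeholders you would have to adapt; the guard keeps it safe to run as-is):

```shell
#!/bin/sh
# Sketch of a manual LUKS unlock script (run as root).
# /dev/sdX names, mapper names and the keyfile path are examples only.
KEYFILE=/media/usb-key/luks.key

unlock() {  # $1 = encrypted block device, $2 = /dev/mapper name
    if [ -e "/dev/mapper/$2" ]; then
        echo "$2 already unlocked"
    else
        cryptsetup open "$1" "$2" --key-file "$KEYFILE"
    fi
}

# Guarded so the sketch does nothing unless explicitly armed.
if [ "${DO_UNLOCK:-0}" = 1 ]; then
    unlock /dev/sdb1 encdisk1
    unlock /dev/sdc1 encdisk2
    mount -a    # fstab entries (with 'nofail') now come up
fi
```

Set `DO_UNLOCK=1` and run it as root after each reboot; removing the keyfile USB stick afterwards is fine, since the key is only read at unlock time.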

    So I solved the problem. The issue was some confusion with the mount-points.

    I have several LUKS-encrypted disks which I decrypt after booting via a script, but apparently that no longer works after the OMV update. I have to manually decrypt each disk by hand in the OMV GUI.

    After doing that, the Docker path is visible again and docker works.

    yes, my Docker storage path points to a disk that is now (surprisingly) empty. Something must have happened during the OMV upgrade... one disk is completely empty, and that is the disk my Docker path points to.

    Seems to be the same problem. I don't even want to re-install Docker, as I have so many containers up and running which took me weeks to configure. I fear they might all be gone if I re-install Docker.


    Is there anyone who can help with this Docker issue? It seems to be caused by a recent OMV update.

    many thanks

    I had Docker running nicely with Portainer and several live containers (Nginx, Nextcloud, Pihole).

    After a recent OMV update I cannot get Docker up again. I am posting the output below. Maybe someone can help interpret this.


    journalctl -xe output:



    and the output of systemctl status...


    thanks - currently I see the below. Do I just type the data in clear text? How do I then restart the OMV engine so it takes effect?


    Code
    <mount>
      <uuid>d769a175-28d3-4046-9a37-eb31be79a77a</uuid>
      <name>nfs2</name>
      <mntentref>8e50a97c-1c2f-4b92-b417-dfca599360f0</mntentref>
      <mounttype>nfs</mounttype>
      <server>[IP]</server>
      <sharename>[SHARENAME]</sharename>
      <nfs4>0</nfs4>
      <username></username>
      <password></password>
      <options>rsize=8192,wsize=8192,timeo=14,intr,nofail</options>
    </mount>
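    One way to fill those empty fields without the GUI is a scripted edit of the config file. This is only a sketch: it works on a local sample file here, and "myuser"/"mypass" are placeholders. On a real box, back up /etc/openmediavault/config.xml first and point CONF at it.

```shell
#!/bin/sh
# Sketch: fill the empty <username>/<password> of a mount entry.
# Operates on a self-contained sample file; adapt CONF for real use.
CONF=./config-sample.xml

cat > "$CONF" <<'EOF'
<mount>
  <username></username>
  <password></password>
</mount>
EOF

# "myuser"/"mypass" are placeholder credentials.
sed -i \
  -e 's|<username></username>|<username>myuser</username>|' \
  -e 's|<password></password>|<password>mypass</password>|' \
  "$CONF"

grep username "$CONF"   # shows the updated <username> line
```

    After editing, the backend has to re-read the file; on my install, restarting the engine daemon (e.g. `systemctl restart openmediavault-engined`) or simply rebooting picked the change up, but check the docs for your OMV release.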

    Since today's update of OMV and Plugins, my Union Filesystem setup is not working anymore.

    When trying to access the Pool, the error is


    Code
    cd: '[Poolname]' : Transport endpoint is not connected.
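    As far as I understand it, "Transport endpoint is not connected" means the FUSE process behind the pool (mergerfs) died while the mountpoint stayed registered. The usual recovery is to lazily unmount the stale endpoint and remount; a guarded sketch (the pool path is a placeholder):

```shell
#!/bin/sh
# Sketch: recover a dead FUSE mount ("Transport endpoint is not connected").
# POOL is a placeholder path; the guard keeps this safe to run as-is.
POOL=/srv/mergerfs/Pool

if [ "${DO_REMOUNT:-0}" = 1 ]; then
    fusermount -uz "$POOL"   # lazily unmount the stale endpoint
    mount "$POOL"            # remount from the fstab entry
fi
```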

    My fstab looks like this:


    I had a remote mount to an NFS share on another server set up, and it worked well.

    Now the other server failed and had to be rebuilt, and my OMV Remote Mount is failing. The plugin screen just keeps showing "Loading" until the web interface fails.


    Where are the username/password of the remote mount stored, so I can amend them?

    Will LUKS need the keyfiles after unlocking?

    Scenario: I am running several HDs encrypted with LUKS. The OMV Plugin works well but any reboot results in a laborious manual unlocking exercise. So I decided to create a shell script to unlock all drives via keyfiles. Now it would be best to put the keyfiles onto a USB stick and only insert that during a reboot and remove once the server is running.

    This of course only works if LUKS/OMV don't need the keyfiles after unlocking.

    Is this so?
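    For what it's worth: the keyfile is only read at `cryptsetup open` time; after that the key lives in the kernel's device-mapper table, so the USB stick can be removed. Easy to convince yourself (the mapper name is a placeholder; guarded so the sketch is safe to run as-is):

```shell
#!/bin/sh
# With the key USB stick removed, an unlocked mapping stays active.
# "encdisk1" is a placeholder mapper name.
if [ "${DO_STATUS:-0}" = 1 ]; then
    cryptsetup status encdisk1   # should still report the mapping as active
fi
```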

    thank you. Yes, for a laptop. I have a Macbook running Linux and a Lenovo X230. Not sure if those take the Intel card.

    I think the Lenovo needs an adapter to fit the card, and the Macbook does not support it, but I could be wrong.

    Has anyone tried it?

    I don't know that there are any. Snapraid doesn't make a difference. Auto-unlock of LUKS might help. Maybe someday I will re-write the plugin to use systemd mount files, but I don't want to do that and have it not work any better. LUKS needs to go away as an OMV option (I know LUKS works well - I use it at work a lot) in favor of a filesystem that offers encryption in one layer, to avoid this double-layer problem.

    Doesn’t ZFS offer data at rest encryption now?

    Sorry guys, I have a silly question: this card has 2 internal SATA connectors. Is it still possible, with the right cables, to connect SAS drives internally at their full performance, or does the controller only accept SATA internally?

    Hello- I have a mount problem.

    One of my HDs disconnected temporarily. I fixed the cable and did a BTRFS check. The filesystem is OK (no errors), but OMV won't mount it.

    OMV can see the disk, and I see the entry under "OMV filesystems", but the mount command just times out. Also from the CLI the mount just hangs.

    Can anyone help?


    This is my fstab entry. The disk is LUKS-encrypted and the unlock worked fine.

    Code
    /dev/disk/by-label/EncExt3600GB /srv/dev-disk-by-label-EncExt3600GB btrfs defaults,nofail 0 2
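    In case it helps anyone with the same symptom, this is roughly what I'd try first: watch the kernel log while the mount hangs, then attempt a read-only recovery mount. A guarded sketch, with the device/mountpoint taken from the fstab line above:

```shell
#!/bin/sh
# Sketch: first diagnostics for a hanging btrfs mount.
# Guarded so it is safe to run as-is; set DO_TRY=1 to run for real.
DEV=/dev/disk/by-label/EncExt3600GB
MNT=/srv/dev-disk-by-label-EncExt3600GB

if [ "${DO_TRY:-0}" = 1 ]; then
    dmesg | tail -n 30                        # kernel messages during the hang
    mount -o ro,usebackuproot "$DEV" "$MNT"   # read-only recovery mount
fi
```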

    Yes, I am using a Docker container for NC, mainly for document sharing and the chat app "Talk". All of that is extremely slow.

    Also, maybe you could check the error/access/php logs and see if there's any mention of where/what the bottleneck might be.

    What comes to mind immediately is "server reached pm.max_children"
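    If the log does show "server reached pm.max_children", the pool limit can be raised in the php-fpm pool config inside the container. The path and numbers below are only illustrative, not a tuned recommendation - they depend on your image, PHP version and RAM:

```ini
; example pool settings, e.g. in /etc/php/7.4/fpm/pool.d/www.conf
; (exact path varies by image and PHP version)
pm = dynamic
pm.max_children = 20       ; raise from the default if the limit is hit
pm.start_servers = 5
pm.min_spare_servers = 3
pm.max_spare_servers = 8
```

    Restart php-fpm (or the container) after changing it.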

    And have you tried to see the output of this: https://scan.nextcloud.com/

    The scan output shows A+, so no issues.

    I cannot find the error/access logs. Do you know where they are located?