Posts by ozboss

    So I had automatic decryption of my drives working with a normal OMV install.

    I was trying to set up btrfs snapshots and for this reason installed Debian and then OMV.


    Everything is working fine except for my crypttab.

    During initramfs it says: "cryptsetup: Waiting for encrypted source device UUID=...."

    Calling `blkid` (inside the initramfs after the timeout) shows that no block device (other than my root device) is available, so the drives cannot be decrypted ....


    I have run out of ideas.

    I also installed to an ext4 partition and had the same problem, so it's not btrfs causing the issue.

    Before updating the initramfs with crypttab the drives show up fine once booted and I am able to decrypt and mount them inside the WebUI.

    Something is missing in the initramfs which is causing the drives not to load during this stage...
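
    For anyone hitting the same thing, these are the checks I would run next (an assumption about where to look, not a confirmed fix):

    Code
    # see whether crypttab and the cryptsetup bits actually made it into the initramfs
    lsinitramfs /boot/initrd.img-$(uname -r) | grep -i crypt

    # MODULES=dep can leave out the controller driver for the data disks;
    # MODULES=most is the safe default
    grep MODULES /etc/initramfs-tools/initramfs.conf

    # rebuild after any change
    update-initramfs -u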

    I see, message_size_limit can be customized via the environment variable OMV_POSTFIX_MAIN_MAILBOX_SIZE_LIMIT. Please check https://openmediavault.readthe…l#environmental-variables for how to do that.

    This seems like a clean solution that I am happy to use. Thank you.

    Maybe rename the variable to OMV_POSTFIX_MAIN_MESSAGE_SIZE_LIMIT?

    I definitely have to start using these variables more often.
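
    For anyone else, a sketch of how I understand these variables are applied (treat the exact steps as an assumption and check the linked docs):

    Code
    # append to /etc/default/openmediavault
    OMV_POSTFIX_MAIN_MAILBOX_SIZE_LIMIT="0"

    # then regenerate the pillar data and redeploy postfix
    omv-salt stage run prepare
    omv-salt deploy run postfix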


    auanasgheps yes, I would be interested to see an alternative Snapraid script.

    Why so complicated? Simply use

    Code
    custom_postfix_size_nolimit:
      file.append:
        - name: /etc/postfix/main.cf
        - text: |
            mailbox_size_limit = 0
            message_size_limit = 0


    That's what I tried at first but it will result in:


    Code
    ...
    alias_maps = hash:/etc/aliases
    transport_maps = hash:/etc/postfix/transport
    message_size_limit = 10240000
    mailbox_size_limit = 0
    message_size_limit = 0


    So "message_size_limit" is defined twice.

    I don't know if this is an issue, but anyway it does not look good :)

    That's why I tried to do it properly.
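
    As far as I know postfix takes the last occurrence of a parameter in main.cf, so the duplicate should be harmless; postconf can show which value actually wins:

    Code
    postconf message_size_limit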


    What does work though is:



    Then again 'mailbox_size_limit' is probably not necessary since I won't be receiving mails and this is probably a question better asked in some SaltStack forum :D

    So a lot of files changed and the snapraid diff job was not able to send the report mail because it was too large:


    Code
    postdrop: warning: uid=0: File too large
    send-mail: fatal: root(0): message file too big
    Can't send mail: sendmail process failed with error code 75

    So I tried to change /etc/postfix/main.cf by creating the file /srv/salt/omv/deploy/postfix/99custom.sls :


    Code
    custom_postfix_size_nolimit:
      file.keyvalue:
        - name: /etc/postfix/main.cf
        - key_values:
            mailbox_size_limit: 0
            message_size_limit: 0
        - separator: ' = '
        - uncomment: '# '
        - append_if_not_found: True

    Running omv-salt deploy run postfix returns:


    Code
    ID: custom_postfix_size_nolimit
    Function: file.keyvalue
    Name: /etc/postfix/main.cf
    Result: False
    Comment: file.keyvalue key and value not supplied and key_values empty
    Started: 19:57:21.263072
    Duration: 0.36 ms
    Changes:

    I am pretty sure the formatting of the key_values dictionary is correct (reference).

    Any ideas on what I did wrong or how I can appropriately adjust the size limit for postfix?
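
    One thing that might be worth trying (an assumption on my side: some Salt releases seem to treat all-falsy values such as 0 as an empty key_values dict): quote the zeros so they are passed as non-empty strings:

    Code
    custom_postfix_size_nolimit:
      file.keyvalue:
        - name: /etc/postfix/main.cf
        - key_values:
            mailbox_size_limit: '0'
            message_size_limit: '0'
        - separator: ' = '
        - uncomment: '# '
        - append_if_not_found: True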

    So, after implementing disk unlock before boot, the array mounts automatically as expected.

    I am still interested in knowing why the array (after unlocking the drives in the GUI) was mounting fine manually but not automatically.

    I created a BTRFS raid1 from two encrypted SSDs like so:

    Code
    mkfs.btrfs -d raid1 /dev/mapper/sda-crypt /dev/mapper/sdb-crypt -f
    btrfs filesystem label /dev/mapper/sda-crypt SSD


    The problem:

    After a reboot and after unlocking the drives the array does not automatically mount.


    OMV tries to mount the array every 30 seconds, which results in the following error:

    Code
    monit[1395]: 'mountpoint_srv_dev-disk-by-label-SSD' status failed (1) -- /srv/dev-disk-by-label-SSD is not a mountpoint


    The array mounts fine within the GUI and also via CLI with mount -a.


    Does anybody know why it is failing to automount?
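
    A guess at the mechanism (an assumption, not verified): a multi-device btrfs can only be mounted once the kernel knows about every member device, so if the automount fires before both mapped devices are registered it will fail, while a later manual mount succeeds. The equivalent done by hand after unlocking would be:

    Code
    # register the unlocked members with the kernel, then mount
    btrfs device scan /dev/mapper/sda-crypt
    btrfs device scan /dev/mapper/sdb-crypt
    mount /srv/dev-disk-by-label-SSD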

    So use another script. There are several available that have the capability you seek.

    It is in the script, just not in the plugin.

    Of course I can change it in the script manually, but then I don't know if it will survive the next update to the plugin.

    The cleanest way would be to set it in the snapraid-diff.conf but here it states:

    Code: snapraid-diff.conf
    # This file is auto-generated by openmediavault (https://www.openmediavault.org)
    # WARNING: Do not edit this file, your changes will get lost.
    
    ...

    So it would be a lot nicer if it was possible to set this variable from the plugin.

    Great job on the snapraid plugin, ryecoaaron, and the snapraid diff script, Solo0815!

    I was just about to create a manual scheduled job until I saw the awesome report functions inside the snapraid diff script.

    I just feel like the "SCRUB_OLDER_THAN_DAYS" variable is essential and should be implemented into the plugin ASAP ;)

    I need to downgrade to bpo.2 from bpo.3. Is there an easy way to do that?

    If bpo.2 is not available (as was the case for me) you can simply:

    Code
    apt install linux-image-5.4.0-0.bpo.2-amd64

    Then select the kernel to be the default boot kernel in OMV-Extras.
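
    If apt ever tries to replace or remove that kernel during upgrades, holding the package should prevent that (untested on my side):

    Code
    apt-mark hold linux-image-5.4.0-0.bpo.2-amd64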

    I think that is related to @limac's findings in his post #4; this post would also confirm that there could be an issue with the arp cache.

    Sorry, I don't understand what you are trying to tell me.
    Yes, the problem seems to be that the arp is not cached properly, and so far we thought that the current Raspbian kernel was the problem.
    But now that I have the same issue on a completely different architecture with a different kernel version, that does not necessarily seem to be the case...

    OMV ISO image might not be based on Raspbian OS

    Yes the image is based on Armbian.


    Although I'm not sure anymore that the kernel is really the problem...
    I installed OMV on an Intel x64 pc, as I was having all sorts of issues with my current setup on my Pi.
    So now I'm running OMV5 with kernel 5.4.0 (amd64) and I experience the same behaviour...

    @limac WOW, thanks for putting in all that effort, you are amazing.
    This is exactly why I wanted to pin down the problem, so that we can open an issue at the right place :)


    Oddly, iobroker seems to behave differently than pihole (by now I have the pihole container running as well).
    When I ping from inside my iobroker container to any machine, it becomes reachable for every other machine as well.
    So this was an easy fix: there is a ping adapter for iobroker that is usually used to check whether certain devices are online. I just set it up to ping my PC every minute, and so far everything has been working for a whole day without issues :).
    (This also worked despite my PC being turned off for the last ~20 hours)


    EDIT:
    Just checked again and the ping adapter seems to be scanning the whole network, so that might be why :D

    So lately I've been struggling to get my containers to connect to my network over MACVLAN.
    This problem is also being discussed in this thread, but I wanted to make a dedicated one to raise more awareness of this topic.


    So when setting up a container with MACVLAN it basically seems to work, but it is not reachable from other devices.
    A perfect example would be any GUI the container might have; this is the case for Pi-hole and IoBroker, as discussed in the thread linked above.
    A temporary fix is to log in to the container via console and ping the device you are trying to access the GUI from.
    After this the GUI can be accessed as usual, until after a while it can't be anymore and the process needs to be repeated.


    Another issue that seems to be related is that with IoBroker (used for home automation) I keep losing the connection to my MQTT devices. They work again when I ping from inside the docker, go into the GUI and restart the relevant adapter from there.


    This has not been the case a week ago, but I don't remember what update might have caused this behavior as I was trying to fix some other issue at the time and by now also have reinstalled the whole system.
    So the issue remains also after a reinstall and so far seems to be happening on Raspberry Pi's.


    TLDR:
    The GUIs of containers attached to the network with MACVLAN are no longer reachable.


    So the question is: does anybody know what handles the MACVLAN driver, is it Linux/Raspbian, OMV or Docker?
    And maybe does somebody even know a solution to this? :)
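
    Until the real cause is found, the ping workaround can at least be automated; a sketch (the address is a placeholder for the machine you browse from):

    Code
    # inside the container: crontab entry that refreshes the ARP entry once a minute
    * * * * * ping -c 1 192.168.1.50 >/dev/null 2>&1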

    The macvlan driver is from Docker as far as I know. I couldn't say anything more since this is too bizarre for my limited knowledge..


    But glad it helped. Although I also noticed that you need to re-ping your desktop after some time to keep the GUI working :)
    Nice security feature huh? lol

    You are right, the only reason it would be Portainer is if it didn't create the macvlan settings correctly.
    But since you also tried it in docker directly with the same result, it won't be Portainer's fault.
    Anyway, I'm happy you found a temporary solution, as this was driving me nuts.
    I recreated my macvlan network in every possible way and also tried to ping the container, just not the other way around :D
    So I was just about ready to reinstall omv, but thought I'd first have a look in the forums.... :thumbup: