Posts by MarcS

    Hi - I came across a WireGuard setup issue regarding the 'restricted' option in the OMV GUI and was wondering if others have encountered the same:


    As I understand it, the 'restricted' option enables WireGuard's split-tunnel behaviour: only the client's traffic destined for the VPN network is routed through the tunnel, while all other client traffic (e.g. Internet) follows the client's default route outside the VPN tunnel. This is important for performance reasons.


    So when I set up a WireGuard client profile in the OMV GUI with the 'restricted' option ticked, OMV generates an additional line (AllowedIPs ...) in the profile, as per below:

    Code
    AllowedIPs = 10.192.1.3/24

    This line is supposed to prevent non-VPN traffic from being routed through the tunnel. However, it does not work as expected. With the generated VPN network value 10.192.1.3/24, no traffic flows at all. Only when I manually change this line to the client network (e.g. 192.168.1.0/24) does it work, and then only the client's VPN traffic is routed through the tunnel.
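    For comparison, here is a minimal sketch of a split-tunnel client profile; keys, endpoint and addresses are placeholders rather than OMV's generated values, and the AllowedIPs line simply lists the remote networks that should be reachable through the tunnel:

    Code
    [Interface]
    # Example client tunnel address and key - placeholders, not the OMV-generated values
    Address = 10.192.1.3/32
    PrivateKey = <client-private-key>

    [Peer]
    PublicKey = <server-public-key>
    Endpoint = vpn.example.org:51820
    # Split tunnel: only destinations in these networks are routed through WireGuard;
    # all other traffic (e.g. general Internet) keeps the client's default route.
    # Using 0.0.0.0/0 here would instead send everything through the tunnel.
    AllowedIPs = 10.192.1.0/24, 192.168.1.0/24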


    Has anyone encountered something similar?

    Where can I see if the monthly OMV scrub is running or disabled?
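    For reference, the scrub state can also be checked from the shell; a sketch with example paths (where OMV actually stores the scrub schedule may differ):

    Code
    # Status of the last or currently running scrub on the data filesystem (example path)
    btrfs scrub status /srv/dev-disk-by-uuid-786fbfe4-2782-4b41-b28a-9ec8130edb8d

    # Look for a scheduled scrub in cron entries and systemd timers
    grep -ril scrub /etc/cron.d /etc/cron.daily /etc/cron.weekly /etc/cron.monthly 2>/dev/null
    systemctl list-timers --all | grep -i scrub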

    Thanks - that's great.


    Another useful feature, if I may suggest, would be a "Roll-Back" button within the OMV GUI that restores the shared folders to the state of a particular snapshot.
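    Until something like that exists, a manual roll-back of a shared-folder subvolume from a snapshot might look roughly like this; paths and names are assumptions, and any services (e.g. Docker containers) using the folder would need to be stopped first:

    Code
    # Assumed layout: the shared folder is a subvolume on the data filesystem,
    # with read-only snapshots kept under .snapshots/ - adjust paths and names to your setup.
    cd /srv/dev-disk-by-uuid-786fbfe4-2782-4b41-b28a-9ec8130edb8d

    # Move the current (live) subvolume out of the way
    mv Oldfolder Oldfolder.broken

    # Create a writable snapshot of the chosen snapshot as the new live subvolume
    btrfs subvolume snapshot .snapshots/Oldfolder@hourly_20231029T010001 Oldfolder

    # Once everything checks out, delete the old live subvolume
    # btrfs subvolume delete Oldfolder.broken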

    I now know the root cause of the error message: clicking the "share" icon on one of the snapshots in OMV creates a new shared folder, and that shared folder is created under the snapshot subvolume. The result is nested subvolumes, which produce an error when the automated OMV cleanup script tries to delete old snapshot subvolumes.
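    For anyone who hits the same error: btrfs will not delete a subvolume that still contains another subvolume, which is exactly what the cleanup script runs into. A minimal reproduction in a scratch directory on any Btrfs filesystem:

    Code
    # Create a subvolume nested inside another one
    btrfs subvolume create demo_parent
    btrfs subvolume create demo_parent/demo_nested

    # Deleting the parent now fails with the same message the cleanup script reports:
    # ERROR: Could not destroy subvolume/snapshot: Directory not empty
    btrfs subvolume delete demo_parent

    # Deleting the nested subvolume first lets the parent be removed
    btrfs subvolume delete demo_parent/demo_nested
    btrfs subvolume delete demo_parent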

    I don't want to transfer the content to a different folder (with a different name), because the folder name is used in Docker containers. So I need to somehow turn the legacy folder into a subvolume. Is that possible?

    e.g.


    Oldfolder (plain directory) --- data

    Oldfolder (= now a subvolume) --- data
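    A plain directory cannot be converted in place, but the usual workaround is to create a subvolume under a temporary name, copy the data across with reflinks, and then swap the names so Docker keeps seeing the same path. A sketch, using "Oldfolder" from the example above and an example mount point; services using the folder should be stopped first:

    Code
    cd /srv/dev-disk-by-uuid-786fbfe4-2782-4b41-b28a-9ec8130edb8d   # example mount point

    # Create the future subvolume under a temporary name
    btrfs subvolume create Oldfolder.subvol

    # Copy the data; --reflink=always shares the data blocks, so this is fast and space-efficient
    cp -a --reflink=always Oldfolder/. Oldfolder.subvol/

    # Swap the names so the original path now points at the subvolume
    mv Oldfolder Oldfolder.dir
    mv Oldfolder.subvol Oldfolder

    # After verifying the content, the old directory can be removed
    # rm -rf Oldfolder.dir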

    The command you requested produced no output (post 14).

    Then I tried a few more (posts 15 and 16) and found a nested subvolume, which I manually removed.

    Now the errors have stopped.


    I have no idea how that subvolume got there though.

    Did some more digging, and there seems to be a problem with a nested subvolume.

    There is a subvolume:

    Code
    ID 1878 gen 200649 top level 1877 path OMVmoving@hourly_20231029T010001/OMVmoving


    I don't know how this subvolume got created, but it becomes visible with "btrfs subvol list .snapshots".

    btrfs subvolume list -o /srv/dev-disk-by-uuid-786fbfe4-2782-4b41-b28a-9ec8130edb8d/.snapshots gives the following output:

    Code
    root@elite2:/srv/dev-disk-by-uuid-786fbfe4-2782-4b41-b28a-9ec8130edb8d#  btrfs subvolume list -o /srv/dev-disk-by-uuid-786fbfe4-2782-4b41-b28a-9ec8130edb8d/.snapshots
    ID 290 gen 49329 top level 289 path .snapshots/LIVE180_20230826T151017
    ID 1877 gen 205521 top level 289 path .snapshots/OMVmoving@hourly_20231029T010001
    ID 2049 gen 202888 top level 289 path .snapshots/LIVE180@daily_20231104T160808
    ID 2058 gen 203770 top level 289 path .snapshots/LIVE180@daily_20231105T000001
    ID 2070 gen 205076 top level 289 path .snapshots/OMVmoving@hourly_20231105T120001
    ID 2071 gen 205186 top level 289 path .snapshots/OMVmoving@hourly_20231105T130001
    ID 2072 gen 205300 top level 289 path .snapshots/OMVmoving@hourly_20231105T140001
    ID 2073 gen 205410 top level 289 path .snapshots/OMVmoving@hourly_20231105T150001
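    Given the nested subvolume found above (ID 1878, path OMVmoving@hourly_20231029T010001/OMVmoving), removing it by hand would look roughly like this; a sketch, assuming the accidentally created shared folder has already been removed from the OMV configuration:

    Code
    # Delete the nested subvolume inside the snapshot (path taken from the listing above)
    btrfs subvolume delete /srv/dev-disk-by-uuid-786fbfe4-2782-4b41-b28a-9ec8130edb8d/.snapshots/OMVmoving@hourly_20231029T010001/OMVmoving

    # Verify that no nested subvolumes remain under the snapshot
    btrfs subvolume list -o /srv/dev-disk-by-uuid-786fbfe4-2782-4b41-b28a-9ec8130edb8d/.snapshots/OMVmoving@hourly_20231029T010001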

    Any ideas about the following error from /etc/cron.hourly/openmediavault-cleanup_sf_snapshots?

    Code
    ERROR: Could not destroy subvolume/snapshot: Directory not empty


    It still appears every hour, even after upgrading to the latest OMV version.
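    One way to see exactly which snapshot the cleanup trips over is to run the script by hand with tracing; a sketch, assuming it is a shell script (as cron.hourly entries usually are):

    Code
    # Run the hourly cleanup manually and trace it to see which subvolume deletion fails
    bash -x /etc/cron.hourly/openmediavault-cleanup_sf_snapshots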

    I also noticed that my mergerfs pool is not mounting. The pool has two disks


    Pool1
    /srv/dev-disk-by-uuid-01cc26c2-19ae-48f8-ab41-377f9028a8cf

    /srv/dev-disk-by-uuid-52c56732-4fd7-4cc3-ac37-64760ac981ca


    and this is the error message:

    Code
    Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C.UTF-8; export LANGUAGE=; systemctl restart srv-mergerfs-Pool1.mount' with exit code '1': 
    
    OMV\ExecException: Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C.UTF-8; export LANGUAGE=; systemctl restart srv-mergerfs-Pool1.mount' with exit code '1':  in /usr/share/php/openmediavault/system/process.inc:242
    Stack trace:
    #0 /usr/share/openmediavault/engined/rpc/mergerfs.inc(203): OMV\System\Process->execute(Array, 1)
    #1 [internal function]: OMVRpcServiceMergerfs->restartPool(Array, Array)
    #2 /usr/share/php/openmediavault/rpc/serviceabstract.inc(123): call_user_func_array(Array, Array)
    #3 /usr/share/php/openmediavault/rpc/rpc.inc(86): OMV\Rpc\ServiceAbstract->callMethod('restartPool', Array, Array)
    #4 /usr/sbin/omv-engined(537): OMV\Rpc\Rpc::call('Mergerfs', 'restartPool', Array, Array, 1)
    #5 {main}

    output:
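    The exit-code-1 message from OMV itself doesn't say why the mount unit failed; the underlying reason is usually in the unit status or the journal. A sketch of generic checks, with the unit name taken from the error above and the disk paths from the pool definition:

    Code
    # Show why the mergerfs mount unit failed
    systemctl status srv-mergerfs-Pool1.mount
    journalctl -u srv-mergerfs-Pool1.mount --no-pager -n 50

    # Make sure both branch filesystems of the pool are actually mounted
    findmnt /srv/dev-disk-by-uuid-01cc26c2-19ae-48f8-ab41-377f9028a8cf
    findmnt /srv/dev-disk-by-uuid-52c56732-4fd7-4cc3-ac37-64760ac981ca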