Posts by MarcS

    OK, I found the solution: I had to manually create the folder '.snapshots' on the device that holds the shared folder for the new SMB share. Since then the errors have stopped. It may have something to do with the fact that the shared folder resides on a BTRFS drive, but it was an existing folder, not a new shared folder. Anyway, SMB also threw the following error:


    Code
    Mar 14 14:09:42 smbd[3970722]:   access denied on listing snapdir SMB_Data_Device/.snapshots

    That error occurred while there was no .snapshots folder. After creating it, both errors stopped.
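For anyone hitting the same thing, the fix boils down to one directory. A minimal sketch (the device path is illustrative; adjust it to wherever the device holding your shared folder is mounted):

```shell
# Illustrative path - replace with your device's actual mount point.
SHARE_ROOT="${SHARE_ROOT:-/srv/dev-disk-by-label-SMB_Data_Device}"

# Samba's snapshot handling expects this directory at the device root;
# creating it stopped both errors in my case.
mkdir -p "$SHARE_ROOT/.snapshots" 2>/dev/null \
    || echo "could not create $SHARE_ROOT/.snapshots (check path/permissions)"

# On BTRFS you could alternatively make it a subvolume:
# btrfs subvolume create "$SHARE_ROOT/.snapshots"
```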

    I recently created a new shared folder and defined it as an SMB share. Since then my syslog fills up with the error below, every second:

    Code
    smbd[3794506]: FSCTL_GET_SHADOW_COPY_DATA: connectpath /srv/PATH/Folder1/folder2, failed - NT_STATUS_ACCESS_DENIED.

    Any ideas how to fix?

    I want to upgrade my HDs. Is there a clever way to retain the Shared Folders settings?


    Current Setup:

    I have a BTRFS JBOD array with 5 disks which presents a single mounted filesystem to OMV. On this device I have defined about 30 Shared Folders, all referenced in various OMV services.

    If I replace disks or restructure the JBOD array, I will need to remove all those references and those Shared Folders, and afterwards re-create them all.


    -> I was wondering if there is a clever way to back up my settings, change the array, and then restore the settings.
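One low-tech starting point: OMV keeps all of these definitions in its config database, so you can at least snapshot them before touching the array. A hedged sketch, assuming a reasonably recent OMV where `omv-confdbadm` is available; the backup directory name is illustrative, and this gives you a reference copy for manual restore, not an automatic re-import:

```shell
# Illustrative backup location.
BACKUP_DIR="${BACKUP_DIR:-$HOME/omv-backup}"
mkdir -p "$BACKUP_DIR"

# Full OMV configuration database (XML), if present:
if [ -f /etc/openmediavault/config.xml ]; then
    cp /etc/openmediavault/config.xml "$BACKUP_DIR/config.xml.bak"
fi

# Just the shared-folder definitions, dumped as JSON:
if command -v omv-confdbadm >/dev/null 2>&1; then
    omv-confdbadm read conf.system.sharedfolder > "$BACKUP_DIR/sharedfolders.json"
fi
```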

    Thanks, will check it out. Does it offer server info (up/down, ping, etc.)? It seems it's mainly a collection of web links for quick access.

    Hi guys - since there are many experienced SysOps here, I wanted to ask if anyone can recommend a web-based dashboard that shows the live status of multiple OMV hosts and multiple docker containers?

    Ideally I want everything in one place, so I can see at a glance if there are any issues.

    This should work across LANs and WANs.

    many thanks

    What!!?? 8| That doesn't seem like sound design to me. Docker containers can be torched and recreated in a matter of seconds. Data that containers collect or generate shouldn't be part of the create / destroy process.

    The container can be torched without causing problems to the mounted volumes, but the Nextcloud upgrade procedure is an automated process that deletes all folders inside the container and then downloads new files into it. I also think it's not a very sound way to structure the upgrades, but maybe there are constraints from the NC side.

    Well, you'd have to admit that deleting 1TB of data from an available pool of 1.5TB of disk space, that is scattered across all disks in the pool, is not "normal".

    Well, I agree, but it can easily happen to Nextcloud users. The MEDIA folder was mounted as persistent into a Docker container, and the developers forgot to mention that any upgrade of the NC container will erase that mounted folder (incl. all data). So it's not so unusual that a large amount of data can get lost. And I assumed that SnapRAID was protecting me, which was wrong.
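For what it's worth, the usual way to keep such data out of the create/destroy cycle is a bind mount declared in the compose file. A hedged sketch (image tag and host paths are illustrative; note it does not protect against an upgrade routine that wipes the mounted path from inside the container, which is exactly the trap described above):

```yaml
services:
  nextcloud:
    image: nextcloud:latest          # illustrative tag
    volumes:
      - /srv/appdata/nextcloud/html:/var/www/html   # application files
      - /srv/media:/var/www/html/data               # user data, meant to survive re-creation
```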

    If lots of deletes happen on more than one disk, this has the same effect as simulating multiple partial disk failures. A single parity disk, alone, cannot restore missing data across multiple disks. In the rebuilding process (along with parity), if data from one disk is needed to rebuild a file on another disk and that data is not present (it was deleted), that will result in an unrecoverable error.

    Thank you for pointing this out. This is absolutely essential to understand when using mergerfs and SnapRAID together. I had exactly that problem and only realised when it was too late. I had 5x 500GB data disks and 2x 1TB parity disks. When I deleted c. 1TB of media files (all jpgs and mp3/mp4s), which were scattered across 3 HDs, I was only able to recover around 60% of the data. All syncs had run perfectly well beforehand, so I did not understand why my data was not recoverable, even with 2 parity disks. It seems to be related to the point you are describing. What this means is that the two packages should never be used together, and especially not in mergerfs MostFreeSpace mode.
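This failure mode can at least be caught before it bites: `snapraid diff` lists files removed since the last sync as lines starting with `remove`, so a small wrapper can refuse to sync when the delete count looks suspicious. A hedged sketch (the threshold is illustrative; the snapraid-runner script offers a similar delete-threshold option):

```shell
# Count "remove <file>" lines in SnapRAID diff output.
count_removed() {
    grep -c '^remove' || true   # grep -c exits 1 on zero matches; keep going
}

# Typical use (commented out here; needs a configured SnapRAID install):
# REMOVED=$(snapraid diff | count_removed)
# if [ "$REMOVED" -gt 100 ]; then            # threshold is illustrative
#     echo "suspiciously many deletions ($REMOVED) - not syncing" >&2
# else
#     snapraid sync
# fi
```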

    The configuration it creates only satisfies a small percentage of users

    agree


    In that case I would simply delete that button.

    If you delete it, we should really add a reference in the plugin on how to split traffic, as many users coming from OpenVPN to WireGuard will be looking for that type of behaviour. In OpenVPN it's a standard y/n setting: "Should client Internet traffic be routed through the VPN?"

    I am not an expert either, but I guess there are 2 main VPN use cases:

    1) VPN to hide your traffic while surfing on public wifi (e.g. phone or laptop) -> no split traffic

    2) VPN to log into your home servers remotely to do maintenance while offsite -> split traffic


    My guess was that (2) is the majority of users' requirement, but it's just a guess.
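To make the two cases concrete, here is a hedged sketch of a WireGuard client config (keys, endpoint and the LAN range are placeholders; only `AllowedIPs` differs between the two cases):

```ini
[Interface]
PrivateKey = <client-private-key>
Address = 10.192.1.3/32

[Peer]
PublicKey = <server-public-key>
Endpoint = vpn.example.com:51820
# Case 1 - route ALL client traffic through the VPN (public-wifi use):
# AllowedIPs = 0.0.0.0/0, ::/0
# Case 2 - split tunnel: only traffic for the home LAN uses the tunnel
# (replace 192.168.1.0/24 with your actual LAN range):
AllowedIPs = 192.168.1.0/24
```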

    The average user probably expects to access the services of their network, not just those of the server, such as their router on another IP. With this configuration you will not be able to do it.

    Exactly what I am saying.

    In my opinion there should be a field in which to add the local network range, plus an explanatory note, instead of a single button. There is no point in setting 192.168.1.0/24 by default; many networks will not be like this.

    Even better if we get a field to add our range. Then it would be really clear.

    OK - thanks. I can see the 2 options were discussed. The current option just does not result in any VPN traffic at all. Not sure if others have used it.

    The 192... option would in my view be much better, as this is indeed what 95% of users need (as stated in the discussion).

    Maybe let the users select Option 1 or Option 2 in the plugin? At least they'd become aware that there are 2 options. It took me a few hours to find the reason why no traffic was flowing.

    not sure what you are saying...

    My experience is: the OMV plugin generates this line:

    >>>AllowedIPs = 10.192.1.3/24, which does not work.

    When I change that line manually (on the Wireguard client side) to "AllowedIPs = 192.168.1.0/24", it does work.

    I have not tried it; I have never configured a connection with split network traffic that way. When I configure this, I always do it manually from the client and set the local network range, not the network range generated by WireGuard. That is, 192.168.1.0/24 (or whatever network applies in each case).

    I guess if ryecoaaron developed it this way it should work too.

    but that's exactly what I am saying in my initial post:

    the OMV-generated profile (10.192...) does not work.

    the manually changed setting (192....) works.