Posts by johnvick

    Tailscale will allow you to access the device by SSH easily without exposing ports. It will probably allow access to the web interface too, but I have not quite mastered that bit. If you want to access other services such as Emby it's probably not the best solution. Look up Tailscale and MagicDNS on Google.
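    For what it's worth, a minimal sketch of that setup on a Linux box (the hostname is just an example):

    Code
    # install Tailscale with their script
    curl -fsSL https://tailscale.com/install.sh | sh
    # join the tailnet with Tailscale's built-in SSH server enabled
    sudo tailscale up --ssh
    # then from any other device on the tailnet, via MagicDNS:
    ssh user@mydevice.mymagic-name.ts.net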


    What may suit you better is the LinuxServer SWAG Docker container - it makes reverse proxying to apps like Jellyfin and Emby quite easy.
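    A rough sketch of why it's easy: SWAG ships sample proxy configs for many apps, and enabling one is (assuming the usual /config volume layout, and a container named swag) just a matter of renaming the sample:

    Code
    # inside SWAG's config volume
    cd /config/nginx/proxy-confs
    cp jellyfin.subdomain.conf.sample jellyfin.subdomain.conf
    # restart the container so nginx picks it up
    docker restart swag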

    Could not log in today; after trial and error I found the OS drive - a 250 GB NVMe - was full. Freed up some space and logged in. Wondering where the space had gone, I ran:


    And then:



    Huge discrepancy - any clues as to why this should be?
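    A sketch of one way to run that comparison, plus a check for the usual culprit - space held by files that were deleted while a process still has them open:

    Code
    # filesystem usage as the kernel sees it
    df -h /
    # usage as totalled from the file tree (-x stays on one filesystem)
    sudo du -xsh /
    # deleted-but-still-open files that df counts but du cannot see
    sudo lsof +L1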

    I have Tailscale installed in a Docker container on OMV, and installed via their curl script on other Linux devices. I can SSH into all devices remotely from a Windows laptop with Tailscale installed, with no ports opened on the router except 80 and 443. On the other devices I can connect using MagicDNS, e.g. https://mydevice.mymagic-name.ts.net - but this does not work on OMV. I have added the Tailscale DNS 100.100.100.100 in the network settings but it makes no difference. Not a huge deal, as I can access the OMV web interface remotely using the SWAG reverse proxy - but why isn't this working?
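    A few checks that might narrow it down - the names here are assumptions, and note that with a Docker install the container's DNS settings don't change the host's resolver, which could be the whole story:

    Code
    # what Tailscale itself reports (container name assumed to be 'tailscale')
    docker exec tailscale tailscale status
    # does MagicDNS resolve when the Tailscale resolver is queried directly?
    nslookup mydevice.mymagic-name.ts.net 100.100.100.100
    # what resolver the OMV host is actually using
    cat /etc/resolv.conf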

    Another useful tip from the SnapRAID manual:


    "In Linux, to get more space for the parity, it's recommended to format the parity file-system with the -m 0 -T largefile4 options. Like:

    Code
    mkfs.ext4 -m 0 -T largefile4 DEVICE

    On an 8 TB disk you can save about 400 GB. This is also expected to be as fast as the default, if not faster."


    Therefore there is less need to worry about the data disks filling up, as the parity disk is effectively bigger than they are.
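    The -m 0 part is where most of that saving comes from: ext4 reserves 5% of blocks for root by default, which on 8 TB is the 400 GB. As far as I know that part can also be reclaimed on an already-formatted parity disk without reformatting (device name is an example):

    Code
    # drop the 5% root reserve on an existing ext4 filesystem
    tune2fs -m 0 /dev/sdX1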


    SnapRAID

    Just tried that and it hasn't solved the problem.


    Before this I removed all drives and re-added them, with content and data on six drives and the other two as parity. The conf file looks as it should.
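    For comparison, the general shape the conf should have with this layout - every data disk that is supposed to carry a content file needs its own content line (the paths and names here are placeholders, not my actual ones):

    Code
    parity /srv/disk7/snapraid.parity
    2-parity /srv/disk8/snapraid.2-parity
    content /var/snapraid.content
    content /srv/disk5/snapraid.content
    content /srv/disk6/snapraid.content
    data d1 /srv/disk1
    data d5 /srv/disk5
    data d6 /srv/disk6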


    snapraid sync still does not add a content file to drives 5 and 6, and drive 6 is still not mentioned in the end-of-sync report.


    The just-added file would have landed on disk 6 due to the mergerfs placement rules.
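    Incidentally, mergerfs exposes its runtime settings through xattrs on a control file at the pool root, so the create policy behind that can be confirmed (mount point is an example):

    Code
    getfattr -n user.mergerfs.category.create /mnt/pool/.mergerfs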


    I see disks 5 and 6 don't have a label - is that relevant?
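    Checking, and setting one if it turns out to matter, is quick (device name is an example):

    Code
    # show labels next to devices
    lsblk -o NAME,LABEL,FSTYPE,MOUNTPOINT
    # set an ext4 label
    e2label /dev/sde1 disk5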

    I have just removed and re-added disk 6 from the SnapRAID config page - the .conf file is updated. I added content to drive 5 - the .conf is updated. But these changes are not seen on a sync:

    The .conf file has a weird name - could this be relevant?


    omv-snapraid-114a88d2-53ed-11ed-8eee-b3f2573b9c38.conf


    So the script seems to be irrelevant, as doing snapraid sync from the CLI also ignores disk 6 and the change to disk 5.
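    One thing that might be worth ruling out, given the UUID-named conf: run with no arguments, snapraid reads its default config (typically /etc/snapraid.conf), not necessarily the file OMV generates, so pointing it at the plugin's conf explicitly would show whether the config itself is fine (the directory is a guess at where the plugin keeps it):

    Code
    snapraid -c /etc/snapraid/omv-snapraid-114a88d2-53ed-11ed-8eee-b3f2573b9c38.conf sync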

    The script is like the usual SnapRAID helper scripts you find, but with more features; it was written by a forum member (I think).

    GitHub - auanasgheps/snapraid-aio-script: The definitive all-in-one SnapRAID script. Diff, sync, scrub are things of the past. Manage SnapRAID and much, much more!
    https://github.com/auanasgheps/snapraid-aio-script


    I should have mentioned I tried sync from the CLI.