Posts by skittlebrau

    I would prefer a more built-in solution that is not relying on external one-man-show developers

    I get where you're coming from, but it's not really any different from something like TrueNAS, which is owned and maintained by iX Systems. As a company they could go bankrupt tomorrow, or be acquired by a larger company that takes away the free version, like what Broadcom did to VMware yesterday. While the filesystem itself is portable, there aren't many other NAS-focused distros like OMV that make ZFS management easy for people who aren't comfortable with the command line.


    SnapRAID + mergerfs is a combo that's been used by many people over many years. It's certainly not perfect, but both it and OMV are under active development, and that's the main thing.


    Whatever you choose, as long as you're in control of your data and have it backed up, you have the flexibility to migrate to a different storage solution in the future.

    I fail to see what advantage mounting a CIFS share on the host has - it adds an additional layer that complicates the setup.

    My understanding of containers (whether Docker, LXC/LXD, etc.) is that it’s not advised, from a security point of view, to allow mounting privileges inside the container. Usually you have to run a privileged container or disable AppArmor.


    I run only unprivileged containers, so I mount on the host and then expose the data to the container via a bind mount, because that’s best security practice (a rough sketch of what I mean is below the link).



    Why A Privileged Container in Docker Is a Bad Idea (www.trendmicro.com) - a blog post exploring how running a privileged yet unsecured container may allow cybercriminals to gain a backdoor into an organisation's system.
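
    To make that concrete, here's a rough sketch of the pattern (share name, paths, UID/GID and the container image are just placeholder examples, not my exact setup): the CIFS share is mounted once on the host via fstab, and the container only ever sees a plain bind mount.

    # /etc/fstab on the host – mount the remote share here, not inside the container
    //nas.local/media  /srv/media  cifs  credentials=/root/.smbcred,uid=1000,gid=1000,_netdev  0  0

    # The unprivileged container just gets the already-mounted path as a plain bind
    # mount: no --privileged flag, no SYS_ADMIN capability, no CIFS mounting inside
    # the container. The image name below is only a placeholder.
    docker run -d --name mediaserver \
      -v /srv/media:/media:ro \
      your-media-server-image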

    TL;DR: setting "strict sync = no" in the OpenMediaVault SMB settings may help to 'uncap' write speeds, at the expense of slightly more risk to in-flight data if there's a power outage or network interruption. I went from 70MB/s to 750MB/s write speeds over 10G ethernet on macOS.


    I don't know whether this exclusively affects those using ZFS, as I've only been using ZFS in my file server, but it'd be interesting if those using regular RAID or other filesystems could chime in too.


    -----------------


    I just thought I'd post this here for the benefit of others and anyone Googling how to fix poor SMB speeds on macOS from their file servers. It took me a long time to finally solve this long-standing problem.


    When people post about poor SMB speeds on macOS, the usual advice is to disable SMB packet signing. However, this is no longer needed, as it's disabled by default in recent macOS releases. You can still apply it if you want to be doubly sure.
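
    For reference, applying it is just a small edit on the Mac client (this is the commonly documented client-side tweak, nothing is changed on the OMV box):

    # /etc/nsmb.conf on the macOS client – turn off SMB packet signing
    [default]
    signing_required=no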


    I've experienced poor SMB speeds with macOS for a while now (completely fine in Windows), and despite the common tuning methods given here, my sub-par SMB write speeds in the 70MB/s range persisted, even though I could achieve 800MB/s in Windows with 12 disks in a ZFS pool of mirrors.


    It wasn't until recently that I realised macOS seemed to be treating all write operations over SMB as 'sync writes', which appears to be due to recent Samba releases changing the default setting of 'strict sync' from 'no' to 'yes'. While I know macOS doesn't use Samba as its client and has its own SMB implementation, it still appears to honour the server's 'strict sync = yes' setting, whereas Windows doesn't, hence the speed discrepancy.


    Setting "strict sync = no" in the SMB settings fixed the speed problem entirely: I went from 70MB/s to 750MB/s write speeds over 10G ethernet. The only caveat is that in-flight data is at slightly more risk if there's a network interruption or power outage, so if that's a concern for you, consider adding a SLOG device like a 32G Optane module to speed up sync writes and leave 'strict sync = yes' as the default.
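
    For anyone who wants to try it, the setting just goes in the SMB/CIFS extra options field in OMV, which ends up in smb.conf. The SLOG alternative is sketched below too - the pool name and device path there are only placeholders for whatever you actually have:

    # OMV: Services > SMB/CIFS > Settings > Extra options (applies server-wide)
    strict sync = no

    # Alternative if you'd rather keep strict sync = yes: give the pool a SLOG so
    # sync writes land on fast persistent storage first (pool name and device path
    # are examples only)
    zpool add tank log /dev/disk/by-id/nvme-INTEL_OPTANE_M10_EXAMPLE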


    Anyway, just wanted to post this for others to refer to, because this issue bothered me for a really long time! If someone else can confirm this helps them, maybe it could be added to the 'Common SMB problems' sticky thread.

    If you do full-disk passthrough into the VM, you can restore your OMV backup onto it easily enough. You'll also need the VM to have access to the HDD (internal or external) that you used to back up OMV, so that it can be mounted inside the VM.


    You just need to boot your VM using the CloneZilla ISO and follow the instructions.


    I did exactly this recently when I migrated from OMV baremetal to OMV in a Proxmox VM.
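
    For anyone attempting the same migration, the Proxmox side looks roughly like this (the VM ID, disk IDs, ISO name and USB IDs are examples from memory, so substitute your own):

    # Pass the physical data disks through to the OMV VM (ID 100 here) by ID
    qm set 100 -scsi1 /dev/disk/by-id/ata-WDC_WD40EFRX_EXAMPLE1
    qm set 100 -scsi2 /dev/disk/by-id/ata-WDC_WD40EFRX_EXAMPLE2

    # Attach the CloneZilla ISO and boot the VM from it to restore the OMV backup
    qm set 100 -ide2 local:iso/clonezilla-live.iso,media=cdrom

    # Give the VM the USB backup drive so CloneZilla can mount it (vendor:product is an example)
    qm set 100 -usb0 host=0bc2:2322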

    If you only want file access, Seafile is a good option too and is a bit snappier than Nextcloud. Nextcloud, however, has a few other features which you might find useful.

    I too spent the last couple of hours figuring out what the hell was going on. It was only when I saw the wrong public IP reported in Plex's Remote Access settings that I realised. I did an nslookup on it and it resolved to my VPN server, which made me realise that my haugene/transmission-openvpn container was set to 'host' mode rather than 'bridge' mode and was consequently screwing up all of Plex's routing.
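
    In case anyone else hits this, the fix on my end was just forcing the container back to bridge networking - something along these lines in compose (the image name is the real one, the provider and credentials are placeholders):

    services:
      transmission-openvpn:
        image: haugene/transmission-openvpn
        network_mode: bridge          # not "host" – host mode was hijacking Plex's routing
        cap_add:
          - NET_ADMIN
        ports:
          - "9091:9091"               # Transmission web UI
        environment:
          - OPENVPN_PROVIDER=PIA      # example provider
          - OPENVPN_USERNAME=changeme # placeholder
          - OPENVPN_PASSWORD=changeme # placeholder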

    Hi, I've had my OMV NAS running well for the past few months. I generally run only about 4 Docker containers, for Plex, TVheadend, CrashPlan and Resilio Sync. Typically the server is accessed by only 2 people at a time, for Plex.


    At this point in time I'm not likely to transition to a hypervisor like ESXi or Proxmox, so I'll be sticking with OMV on bare metal. The main reason for asking is that a friend is offering to sell me a 16GB kit (Crucial ECC 2400MHz UDIMM) for a good price, but I won't bother buying it from him if it's going to be a waste.


    I run a ZFS pool with just 2x 4TB drives in a mirror. I also have 6x 4TB drives in a mergerfs+SnapRAID array.


    Based on these memory stats, can I just stick with 8GB of RAM for the foreseeable future? Having 1.3GB of RAM free on average suggests to me that it would probably be fine unless I really wanted to hammer the server with VMs in the future.
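
    For context on how I'm reading 'free' here: with ZFS, most of the 'used' RAM is typically just the ARC, which shrinks under memory pressure, so the raw free figure understates the real headroom. The quick checks I'm going by are just standard tools, nothing OMV-specific:

    # Overall memory picture
    free -h

    # Current ZFS ARC size vs. its configured maximum
    # (arc_summary gives a friendlier breakdown if it's installed)
    grep -E '^(size|c_max)' /proc/spl/kstat/zfs/arcstats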

    Thanks for the guidance on btrfs-tools and btrfs-progs.


    Just a somewhat related question: is there any built-in monitoring tool for BTRFS like there is with ZFS's Event Daemon (ZED)?


    I'm using ZoL with the ZFS plugin for now, since it gives easier monitoring in the OMV GUI, for just 2x 4TB WD Se drives in a mirror, but in the long term I would like to use BTRFS.