Posts by wolffstarr

    Throwing my hat on the pile here. I have no problem with getting error notifications, so long as I can decide they don't matter to me and turn them off. I don't use BTRFS, I'm the only user on the system, I don't use internal SSL certificates for anything, and the pending-config yellow banner is annoying enough that I always apply changes anyway.

    Please provide a way to disable these messages - preferably via the Notifications section under System, rather than having to go into the CLI and add environment variables. I would really, really rather not have to delete cron.daily files every time OMV updates.

    Got it in one, thanks Zoki. That was the share for my Docker config dataset, which used to reside on a pair of SSDs that both failed at roughly the same time. I pulled the dead drives, restored from backup, and then just symlinked the old dataset path to the new backup location and left it at that. That was about six months ago, so clearly I haven't made any Samba changes since then.

    Thanks for the fast response, appreciate it.

    I upgraded the system to OMV 5.6.25-1 and was prompted to apply config changes afterwards. The apply failed, and afterwards all of my SMB shares were unavailable. Clicking Revert made no difference. After a restart, I'm getting red text in the journal entries from systemctl status smbd.service. Reinstalling 5.6.25-1 didn't help either. Additionally, from what I'm seeing in journalctl -e, Samba is looking in /var/lib/samba/usershares for .conf files matching the names of the shares on my system and not finding them. I went there and confirmed that the directory is empty.
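    A quick way to sanity-check that state from the shell (a sketch: /var/lib/samba/usershares is the directory named in the journal output, and testparm is Samba's standard config checker):

    ```shell
    # Count the per-share .conf files Samba expects to find in the
    # usershares directory; an empty directory matches the symptom above.
    USERSHARES=/var/lib/samba/usershares
    count=$(ls "$USERSHARES" 2>/dev/null | wc -l)
    echo "usershare config files found: $count"

    # testparm validates the main smb.conf and prints the effective config;
    # run it separately to rule out a syntax error there:
    # testparm -s
    ```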

    To be clear, the OMV UI still shows all of the SMB shares as present and properly configured. Any change I make that does not involve SMB works fine, and the apply takes. Any change to SMB fails with the error below. I had to go through Pastebin, as the error itself exceeds the permitted word count.

    Any help would be appreciated - it's not critical yet, but that's just because my wife hasn't tried to access it, and while I can get at her files, I'd rather not. If it matters, specs are Ryzen 9 3900X, 128GB RAM, lots of storage in ZFS. Everything except SMB works fine, including KVM, Docker, and NFS.

    Depends on the motherboard, really. There really is an element of "you get what you pay for" in terms of both features and hardware quality. If you have a higher end board, then it may well handle "neglected" conditions better, the driver support for onboard hardware is likely to be a little better, and you're probably less likely to experience crashes.

    The real difference, though, tends to lie in features: what does the board give you? A cheap motherboard will work fine for a basic NAS, but a server-grade motherboard buys you additional functionality, like remote administration/KVM access through IPMI, far more drive connectivity, or support for additional CPUs (dual-socket boards). For consumer boards, if you go with, say, an AMD A320 board over a B450, I believe you generally have fewer PCIe lanes available for supporting drives, network cards, etc.

    Getting an apt error for the buster/main omv-extras repo - "unexpected file size". The error is below. It came up with the automatic apt update, and no changes have been made since Wednesday's apt update (which was for containerd, docker, and the openmediavault package), so I'm assuming it's a corrupt file on the repo end somehow.

    This is doable. I've got a couple of different things set up, but the best one is Traefik. I have two networks configured in Docker, traefik-proxy and outside-services. outside-services is configured as macvlan, and traefik-proxy is a standard bridge network. Anything I want to be accessible only through the proxy goes solely on traefik-proxy. Traefik, on the other hand, goes on both traefik-proxy and outside-services and is assigned an IP address. The docker-compose config looks like this:

        traefik:
            container_name: traefik-1
            expose:
                - 80/tcp
                - 443/tcp
                - 8080/tcp
            networks:
                traefik-proxy:
                outside-services:
                    ipv4_address: ''

    Now, that's my example; I would expect you'd do something similar, with the macvlan network for your ISP connection and your bridge network for all your other containers. Also remember that expose is generally used for opening ports on macvlan networks, not for port mapping on bridge networks, so if you're translating this to a standard docker run, you probably wouldn't want any exposed ports for a download-only client.
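    For completeness, the two networks themselves can be declared in the same compose file. This is a sketch, not my actual config: eth0 and 192.168.1.0/24 are placeholders for your real NIC and LAN subnet.

    ```yaml
    networks:
        traefik-proxy:
            driver: bridge
        outside-services:
            driver: macvlan
            driver_opts:
                parent: eth0               # placeholder: the host NIC on your LAN
            ipam:
                config:
                    - subnet: 192.168.1.0/24   # placeholder: your LAN subnet
    ```

    You can equally create them up front with docker network create -d macvlan and mark them external: true in the compose file instead.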

    If I try to remove sysv-rc, it just installs openrc at the same time, so I'm still stuck :D
    Also, it then starts doing a lot of slightly scary stuff with service runlevels and init.d, including saying:

    *** WARNING: if you are replacing sysv-rc by OpenRC, then you must ***
    *** reboot immediately using the following command:                ***
    for file in /etc/rc0.d/K*; do s=`basename $(readlink "$file")` ; /etc/init.d/$s stop; done

    Um. Okay, I did some digging and thinking on this, and it wasn't sysv-rc or openrc that I was uninstalling - it was insserv. Since systemd-sysv does exactly the same thing as insserv according to a couple of different sources, and since it's recommended they NOT be installed alongside each other, you should be fine once insserv is uninstalled. Do research it yourself, of course, but see this Debian bug report, which mentions the same conflict.
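    Before removing anything, it's worth confirming which of the init packages are actually installed; this sketch only inspects state and changes nothing:

    ```shell
    # List install status for the three packages in question; "ii" lines are
    # the ones actually installed. The "|| true" keeps the pipeline from
    # failing when none of them are present.
    count=$(dpkg -l insserv systemd-sysv sysv-rc 2>/dev/null | grep -c '^ii' || true)
    echo "installed init packages: $count"

    # If insserv is the only one you need to drop, the removal itself would be:
    # apt-get remove insserv
    ```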

    Great program! Thank you. Just a heads-up for anyone wondering, as I was, why their theme isn't changing and stays stuck on the first one they switched to: you may need to clear your browser data to see the change, because the page is cached (if that's the correct term) to save on having to reload it fully. I was using Chrome, but this can likely occur in any browser.

    You can actually force a full reload with Shift+F5, which will pick up the new theme.

    Thanks. Since I am moving from FreeNAS to OMV, I can decrypt my pool and start using OMV while waiting for ZoL 0.8 (with native encryption) to hit the Debian 9 repos. Encryption is not strictly "needed", but after having had it for about five years, I feel naked disabling it.

    I'm currently trying to figure out the right hardware (upgrades), and I saw an announcement about OMV 5 moving to BTRFS, plus a lot of discussion there about ZFS. So I was wondering: should I start with OMV 4 and the ZoL version supplied by Debian 9, or with OMV 5 and the ZoL supplied by Debian 10?

    You really don't want to go to OMV 5 - it's early Beta, most of the plugins (last I checked, a couple of weeks ago) weren't anywhere near ported, and there's still a long way to go.

    That said, the BTRFS change was pushed back to OMV 6 at least, so it won't be much of a worry.

    That was @subzero79 fixing a bunch of stuff. Not sure why we didn't release that. It is not prep for 5.x and I think it should be used. Actually, the only thing that needs to be ported to 5.x is the email notification stuff. The plugin should just work as it is.

    Okay. I have no idea whatsoever what I'm doing, so working off his would be best if it's functional, but I didn't want to go down that path without making sure I wasn't basing on the wrong thing.

    @ryecoaaron, I noticed while poking around that there were a ton of commits made to the plugin back in October 2018 that aren't part of the currently used plugin. I was toying with the idea of trying to rework some stuff (in my fumble-fingered way), but I'm not sure whether that was prep for 5.x, a partly-completed rewrite, or something else. Any idea?

    Really appreciate your confirmation. I assume the same applies to the mdadm RAID?

    No, mdadm is a core part of OMV, so it's handled by Volker as part of the project. Personally, I can't speak to whether you can do a drive replacement via the RAID Management interface, as I've never used it - I started out running JBOD and leaped straight into ZFS when the plugin first came out.

    You are correct; as far as I can tell, there is no way through the UI to replace a failed drive.
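    On the command line, though, a ZFS drive replacement is basically a one-liner. A sketch with placeholder names ("tank", "sda", and "sdb" are stand-ins; read the real pool and device names from zpool status), printed with echo here so nothing touches a live pool by accident:

    ```shell
    # Placeholder pool/device names; substitute what zpool status reports.
    POOL=tank
    FAILED=sda
    NEW=sdb

    echo "zpool status $POOL"                # identify the FAULTED/UNAVAIL device
    echo "zpool replace $POOL $FAILED $NEW"  # resilver onto the replacement disk
    ```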

    Keep in mind that the folks who created the ZFS plugin were not able to actively maintain it and didn't have tons of coding experience to begin with, so they may not have thought of this. This is one reason why the UI has a few odd display options (specifically, snapshot display).

    Yeah, I wasn't sure if cron-apt would output anything on the command line, but I figured worth a try. Output below, email masked to protect the guilty: