Posts by BernH

    Still chugging along and getting more familiar with Linux. I really like OMV now and can even remember where to find things, although I do tend to have a few pages open at a time. Not something you can really do in Windows Server, from what I remember of my brief encounter with it. I've had a brief look at UrBackup and it looks awesome. Probably more than I really need, but it will be an easy way to keep everything tidy.


    All I need now is more time to learn it all :D

    urbackup can be as much or as little as you want really. You can define how often backups run and how many to retain. They can also be full or incremental. Those settings can be defined on the server and essentially pushed out to the clients, so management is easy.


    On my windows systems, I tend to keep 2 to 4 monthly full image backups of the os drive, with weekly incremental images in between. I personally don't worry about the file backups, as anything critical is on the omv server as well as local, and the omv server gets looked after with its own backups.

    Just after my last reply I had a reminder to have a robust backup policy. My son's server, which is currently running Windows, reported a lot of disk errors which I didn't feel happy trying to fix remotely. It turned out to be a good job I went to visit and pick up the server, as an offhand comment of "oh, it's been playing up for a while and I just turn it off" explained everything. No wonder there were disk errors; it's never a good idea to power off a running Windows system. After a little gentle advice not to do that again, as well as a "you should have told me", I did a chkdsk and then a restore from a backup, which needed a boot from a recovery USB, and it was all back up and running sweetly again. I use EaseUS Todo as it's free and works well.


    Now I just need to get back to setting up a good backup regime for OMV, and then I can migrate my son's system from Windows to OMV. I've set up a small system to practice on and will be actively breaking that a few times so that I get to understand the full process.


    I've not had a chance to look at UrBackup yet but from what BernH and macom have said I reckon it will be well worth a good look.

    Glad to hear you are still chugging along. Hope you have been enjoying the learning experience, and as far as I am concerned, the best way to learn is to set it up and break it a few times, so I think you are doing the right thing.


    I have always hated windows server. I just find it so "clunky" to administer. When I started using Linux wherever I could, a lot of that clunkiness and the windows related issues went away, not to mention the fact that you can use a lot less in terms of hardware to get the same job done, but I think from our discussions in a couple of threads, you are realizing that too.

    Yep, you got it pretty much right. I wouldn't worry too much about deleting all the data from an os drive before imaging it. They are small, and if you are keeping all the docker storage off the drive and are not using sharerootfs to actually put user data on it, they remain small.


    Yes, dd can image a drive that is running, although I will confess I have never used it that way. If I need an actual drive image I use clonezilla, since if you are in need of a full image restore you have to reboot anyway, and you don't really need to take the os image often.


    The idea of having one to restore as a base and then doing an fsarchiver restore over it is that it is likely going to be faster to restore an installed image than it would be to download and reinstall from an iso, then do system updates. It is the overwrite with the fsarchiver files that gets you back to the current state, so even the image restore is really only needed if you want to save a bit of time and not have to answer setup questions. You could just as easily install fresh and then do the archive restore too.
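    As a rough sketch of that flow (the device names and paths here are just placeholders, and the dd steps are done from a live/rescue boot):

    Code
    # one time: save a base image of the installed os drive
    dd if=/dev/sda of=/mnt/backup/os-base.img bs=4M status=progress
    # regularly: save the root filesystem with fsarchiver (compressed)
    fsarchiver savefs -z 7 /mnt/backup/root-$(date +%F).fsa /dev/sda1
    # restore: write the base image back, then overwrite with the newest archive
    dd if=/mnt/backup/os-base.img of=/dev/sda bs=4M status=progress
    fsarchiver restfs /mnt/backup/root-2025-01-01.fsa id=0,dest=/dev/sda1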


    I will throw one more option out there for you as well. There is a project called urbackup that is a client/server backup solution. I use this to give a mac timemachine kind of functionality to windows (better than timemachine really), but it also has linux clients, and the server side can run as a docker container. It has the ability to backup/restore over the internet, both drive image and file backups. I have never tried the linux clients though, but it might be an option to explore. File restores can be via a web page, but image restores do require booting into a thumb drive image they make. I don't use this for my omv backups, since omv is hosting the urbackup server for my windows systems, and hence my other option with fsarchiver.


    UrBackup - Client/Server Open Source Network Backup for Windows and Linux

    Ok, so there appears to be no residual configuration from the raid 0, so that is not the problem. Do as jgyprime suggests, as the issue is more likely hardware related, or perhaps something happening with a docker container like jellyfin that was expecting to see the raid.


    If docker containers were expecting to see the raid, they may have created the folder for the mount point when they started, but now with no data there they may be having trouble.


    You could try to take the containers down to see if they are the problem.

    RAID on individual usb disks is not a good idea. Differences in usb r/w speeds can cause lots of problems, and RAID 0 is the worst because all drives have to be able to operate at the same speed for it to work correctly. If you need to use all the drive space, as I would suspect from your desire for RAID 0, you would be better off using mergerfs to pool the drives. With mergerfs, you would only lose the data from the failed drive, and only the drive that holds the data you need to access has to be active, while the other drives can be in an idle state. With a RAID, all drives have to work at the same time because the data is actually distributed across the drives in chunks.
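    To give an idea of the pooling route (OMV has a mergerfs plugin for this, but the underlying mount is something like the below, with the disk paths being placeholders for your own mount points):

    Code
    # pool three disks into one mount, creating new files on the drive with most free space
    mergerfs -o category.create=mfs /srv/disk1:/srv/disk2:/srv/disk3 /srv/pool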


    I doubt very much that upgrading to omv 8 will fix the problem, and the problem may even cause the upgrade to fail. You would be better off fixing the issue before attempting an upgrade.


    That said, how did you "delete" the raid? Did you disassemble it or just unmount the filesystem? GUI or CLI? (You would have had to use the CLI to make it since usb RAID is not allowed in the GUI). Is the system hanging with the disks disconnected or did you re-create a RAID 0 and the hanging started after that?


    There are a few rules of thumb with RAID in general:

    1) Disks ideally should be RAID rated. (NAS or Enterprise drives)

    2) Disks ideally should be the same size, make, and model, and if possible the same firmware.

    3) Disks should be able to operate at the same speed (ie. mixing sata/sas/usb is bad, mixing controllers is bad, such as having some disks on motherboard ports and some disks on an expansion card, USB in general is bad because of the inconsistent chips, speeds, USB "overhead")


    What are the disks? (make/model/size)


    What is the output of cat /proc/mdstat


    What is the output of mdadm --detail --scan


    I don't profess to be an expert with mdadm under the hood, but I have been using both hardware and software raids for almost 30 years. I can try to help, and these things will be seen and/or answered by all the mdadm folks here, so at least the information will be available even if I can't help.

    OK, found 2 things:
    1. docker most likely takes over the source folder with chown - it changes the owner and group of the folder, but this is not solving the problem

    2. when I map the source and destination locations via an alias, e.g. \\raspberryPi, it cannot be copied directly, but when I set the source as IP and the destination as alias (both with the same credentials) it works 8| any ideas?

    Yes, docker containers take ownership of the files they create.


    That's the whole purpose of using containers that support PUID and PGID to specify ownership. Without that ability, the only other workaround is to use the samba share's extra options to force the connection to read/write/behave with the same owner as the docker container, with something like below, changing the stuff on the right of the = sign to the user, group, and permissions you need.


    The docker PUID/PGID is a better option though.


    Code
    create mask = <desired octal>
    directory mask = <desired octal>
    force create mode = <desired octal>
    force directory mode = <desired octal>
    force user = <user>
    force group = <group>

    I prefer to use normal windows. Nevertheless, I checked a few options and it started to work. Not sure what happened exactly, since I've changed from minimum SMB version 2 to 1 and back to 2... maybe it was indeed some ACL issue and it needed an SMB restart? Now it works as supposed to, with inheritance enabled.

    I can do what you say without problem. I do not use ACL


    If you don't need the granular control of ACL, don't use it. Regular linux permissions are enough.


    I doubt pcie speed throttling would cause drives not to mount. It would cause speed issues, particularly with faster drives like an SSD, but likely not much of a difference to a spinning drive.



    For what it's worth, I found the manual for that board. It's pretty useless, so no idea what the BIOS is like. The most recent BIOS update for it appears to be about a year ago.


    Any ideas about what specifically I should look for/change?


    I've already considered whether the motherboard itself might be defective in some way, but that doesn't seem likely to me so far, because the mSATA SSD on the board isn't causing any problems. The operating system itself is also running stably.

    I noticed that the BIOS battery voltage is relatively low. But I don't know how or why that would cause any errors (especially with the hard drives) during operation. (VCC = 3.34 V; VTT = 1.50 V; VBAT = 2.39 V; VTR = 3.34 V; in4 = 1.11 V)

    Not really. I am not familiar with the board or what settings are exposed to you, but the settings would likely be related to PCIe and/or sata issues, as that is where your problems are. Unfortunately a lot of those "off brand" oriental boards have some strange BIOS settings, and the naming and descriptions get a bit "lost in translation".


    I would first check to see if there is a BIOS update though, as that could easily address odd problems.


    As for the battery, it is a bit low, but it shouldn't cause the issues you are seeing. I would plan on a replacement though.

    Using another server to take backups is not unheard of, and the idea of being able to just slide a drive in and have it work because of an unchanged uuid is not a bad thing, but personally I think the approach is a bit more than is needed.


    By that I mean there is no need to have those spare drives installed and live in the 4th system. You could simply have a clonezilla or dd image of the drive saved that you can restore to any drive to create the filesystem, and then restore the data copies. If the dd or clonezilla image was taken after filesystem creation but before data was on the drives, it would end up very small if compression is used, and be very fast to take and restore. The idea is the same, but it does not require a bunch of drives to be kept live when not needed.
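    As a rough illustration of that approach (the device name is a placeholder, and the image should be taken while the filesystem is unmounted):

    Code
    # after creating the filesystem, but before putting data on it
    dd if=/dev/sdb1 bs=4M status=progress | gzip > /mnt/backup/empty-fs.img.gz
    # later, restore to a replacement drive
    gunzip -c /mnt/backup/empty-fs.img.gz | dd of=/dev/sdb1 bs=4M status=progress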


    You would still need to physically get the drive from the server if you are doing a live clone, but by doing the image route, you would not have to as all steps of the restore could be done over the network if required as long as it isn’t physical drive failure, and no reboot into clonezilla if dd is used, although clonezilla is a little more user friendly.


    If the data backups are compressed with something like fsarchiver instead of just file copies, they would also potentially transfer over the network faster.


    just my 2 cents.

    The compose plugin's backup routine will backup the compose files and all mapped volumes, unless you direct it to do something else by using the #BACKUP and/or #SKIP_BACKUP tags on the mounts in the compose yaml files. Have a look at the documentation linked below.


    I am not a lover of this backup routine, as it is just a data copy with no compression, so it can take up a lot of space on my live storage, which is why, as a daily backup, I prefer to use my fsarchive script, which allows for compressed rotating version backups of the docker appdata, while my user files are cloned to an external RAID array. I find this gives me a better mix of roll back/restore protection for the appdata, and the user files are also duplicated.
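    I won't claim this is my exact script, but a compressed rotating fsarchiver backup can be as simple as something like the below (the paths, device label, and retention count are placeholders):

    Code
    #!/bin/sh
    DEST=/srv/backup/appdata
    KEEP=7
    fsarchiver savefs -z 7 "$DEST/appdata-$(date +%F).fsa" /dev/disk/by-label/appdata
    # delete archives beyond the newest $KEEP
    ls -1t "$DEST"/appdata-*.fsa | tail -n +$((KEEP + 1)) | xargs -r rm --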


    omv8:omv8_plugins:docker_compose [omv-extras.org]

    You realize this is an OMV forum, which is based on debian, not a NixOS forum right? You will get a lot more help asking in the NixOS forums I would suspect.

    Docker works fine either way; the plugin is convenient, but installing the engine manually gives more flexibility.

    How? If you leave the Docker storage field blank, the plugin does absolutely nothing with the docker installs meaning the exact same flexibility as a manual install.

    And further to what Aaron said, even if you change the docker storage field, it is only putting all the docker image and container stuff on another drive besides the OS drive, the same as if you manually edited the docker config file, so the containers can survive an os drive failure or re-install.
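    For reference, the manual edit being described is setting data-root in /etc/docker/daemon.json (the path below is just an example):

    Code
    {
      "data-root": "/srv/dev-disk-by-uuid-xxxx/docker"
    }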


    You can still build and operate docker via the CLI if you want, but you lose the full GUI interaction. Even if it is installed, you can ignore the plugin for docker if you feel more comfortable at the CLI. However, the use of compose files and the "cleanliness" of the deployments and data is better with the plugin, as you don't have to think about where you are putting your compose files or worry about manual backups with the scheduled backup routines, and you can even schedule auto updates and auto prunes easily.

    From my memory when I was looking at gluetun, it does not restart when a vpn drops, so in case either of you are not doing it, have a look at autoheal.


    It will auto restart any unhealthy container.


    Here is the compose for it, and a sample healthcheck set to use the wireguard wg0 interface that can be added to the gluetun container. If the healthcheck fails (a simple curl or ping), the container gets an unhealthy flag and autoheal should restart it. The healthcheck can be added to any container with a few modifications for the interface and desired site to curl or ping.

    Code
        healthcheck:
          # Choose one of the following 2 tests and adjust as required based on container requirements
          test: "curl --interface wg0 -sf https://example.com  || exit 1"
          #test: "ping -I wg0 -c 1 8.8.8.8 || exit 1"
          interval: 5m
          timeout: 10s
          retries: 3  
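    And a minimal autoheal compose to go with it (this is a sketch based on the commonly used willfarrell/autoheal image, adjust as required):

    Code
    services:
      autoheal:
        image: willfarrell/autoheal
        container_name: autoheal
        restart: unless-stopped
        environment:
          - AUTOHEAL_CONTAINER_LABEL=all
        volumes:
          - /var/run/docker.sock:/var/run/docker.sock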

    There are plenty of online examples of qbittorrent behind gluetun. Just look at them.

    I looked at doing this when I was looking for a replacement for dyonar, before using that fork you were working on for a little while. At the time I could not get wireguard to work in it; then you mentioned you were using trigus.


    I started using it on your recommendation, and it was good for quite a while. I had a tinyproxy container using its network so I could use the proxy server settings in some things to get them on the vpn, but that broke in one of the updates, and I could not figure out the scripting required to open the ports, though I will confess I did not spend much time at it.


    I run several other hotio containers (sonarr and radarr to name a couple, because I switched from the linuxserver ones when they were having problems with hardlinks at some point), so I gave the hotio qbittorrent a look and liked what I saw, with it having a built in privoxy, easier firewall port manipulation if required, and python already installed, unlike trigus. The only thing missing was an automatic container restart in the event of a vpn drop/disconnect, which I look after by doing a container healthcheck ping or curl through the wireguard connection; if it tags the container unhealthy due to failure, I have an autoheal container running that will restart it and any other container that is tagged unhealthy.

    My VPN is IPVanish so I use their openvpn files. Basically I spin up the container to create the folder structure, spin it down, add the ovpn files/certs, then spin it back up.

    If IPvanish has wireguard configs, you may find that it is faster than openvpn. I use frootvpn, and noticed a 3 to 4 times increase in maximum speed when I switched to their wireguard protocols instead of their openvpn.

    Ok, that's a bit of a different problem. I have used dyonar, trigus, and hotio, and they all work.


    When the containers have a vpn enabled, they will go into a restart loop if the vpn configuration is not right or missing.


    First, disable the vpn and start the container. Do you get to the login screen? You can log in with the temp password, then set a permanent one to avoid having to look in the logs for a temp one every time.


    If you can do that much the container runs.


    Next, what kind of VPN is it? Is it openvpn or wireguard based?


    If it's wireguard, you need to put a wireguard config file in the wireguard directory. Once the config is in place, enable the vpn again and it should work.
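    The config is just the standard wireguard wg0.conf your provider gives you, shaped like this (all values below are placeholders):

    Code
    [Interface]
    PrivateKey = <your-private-key>
    Address = 10.0.0.2/32
    DNS = 10.0.0.1

    [Peer]
    PublicKey = <provider-public-key>
    AllowedIPs = 0.0.0.0/0
    Endpoint = vpn.example.com:51820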


    If it's openvpn, then the config is entered in the compose file's environment variables, as per the container's documentation.


    If it's something else like Nord with its nordlynx stuff, I don't know how to set that up.