Beiträge von Steini

    Another thing I just came across:

    I had a borgbackup user to send backups to my NAS via borg. The setup was done in parallel to OMV. The home folder is directly the backup folder, which contained a .ssh/authorized_keys file. Since OMV is now taking over the SSH config, that needs adjustment. The user was automatically added to the group _ssh; however, the authorized_keys file was not used. I had to add the key to the user via the web interface to enable login again.

    I had read before to make sure all users are added to the _ssh group, but did not know about the key files.


    [edit]

    Just saw that this change automatically added the borgbackup user to the "users" group. Maybe that was the important step? I cannot remove it from the "users" group via the web interface.

    I am using bare metal x86

    Hi,

    I just want to report the challenges I had during the upgrade, in case it helps someone. They are mainly due to custom modifications I made to the system (as always).

    After executing the upgrade it told me to restart, but to keep an eye on the installation of openmediavault-md. That failed because the "openmediavault" package was still at 6.x. That got my attention, and it looks like the problem is the wsdd package:


    Code
    Die folgenden Pakete haben unerfüllte Abhängigkeiten:
     openmediavault : Hängt ab von: wsdd (>= 0.7.0) aber 0.7+gitc87819b soll installiert werden
                      Hängt ab von: systemd-resolved soll aber nicht installiert werden
    E: Probleme können nicht korrigiert werden, Sie haben zurückgehaltene defekte Pakete.

    I then downloaded openmediavault-md and installed it without dependencies, which was stupid and should not be done that way :)

    The solution was to force the installation of wsdd=2:0.7.0-2.1 over wsdd=0.7+gitc87819b (apt install wsdd=2:0.7.0-2.1). I don't know why the upgrade failed, but it might have been caused by my apt preferences setup.
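    To see why apt prefers the wrong version, `apt policy` shows the installed and candidate versions together with the pin priorities of each repo. A sketch (the version string is from my case; the output depends on your repos and preferences files):

```shell
apt policy wsdd                # shows installed/candidate version and pin priorities per repo
apt install wsdd=2:0.7.0-2.1   # then force the specific version, as described above
```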

    Login to the web interface did not work (probably because the omv package failed during the upgrade and the wrong config files were deployed); the error was:

    Code
    2023/12/10 21:45:32 [crit] 6046#6046: *18 connect() to unix:/run/php/php7.4-fpm-openmediavault-webgui.sock failed (2: No such file or directory) while connecting to upstream, client: 192.168.10.57, server: openmediavault-webgui, request: "POST /rpc.php HTTP/2.0", upstream: "fastcgi://unix:/run/php/php7.4-fpm-openmediavault-webgui.sock:", host: "my.domain", referrer: "my.domain/"

    I thought a reboot might fix it; however, the system did not come up again, and I don't know why. It is headless and did respond to ping, but all ports were closed. A hard reset brought it up again, but the web interface still did not work. What probably fixed it was omv-salt deploy run phpfpm, though before that I had also run the deploys for nginx and webgui.


    Now I am wondering if many other salt deploys are missing from the upgrade, because the upgrade of the openmediavault main package failed in the first place. For example, my VMs come up fine, but they are not listed by the kvm plugin:

    Code
    Invalid RPC response. Please check the syslog for more information.
    
    OMV\Rpc\Exception: Invalid RPC response. Please check the syslog for more information. in /usr/share/php/openmediavault/rpc/rpc.inc:187
    Stack trace:
    #0 /usr/share/php/openmediavault/rpc/proxy/json.inc(95): OMV\Rpc\Rpc::call()
    #1 /var/www/openmediavault/rpc.php(45): OMV\Rpc\Proxy\Json->handle()
    #2 {main}
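    To rule out missing deploys after a half-failed upgrade, all salt states can be re-applied in one go. As I understand omv-salt, this re-deploys every service configuration (it takes a while, and a redeploy overwrites manual edits to managed config files):

```shell
omv-salt stage run deploy    # re-applies all OMV service configurations
```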

    For now everything essential works and I will have a look at the remaining stuff in the next days.


    Cheers

    That is a very general question and there are probably as many answers as users.

    I have 4 separate VLANs:

    a) management (network core devices, switches, router, voip)

    b) private (omv, PCs)

    c) IoT (all local IoT stuff, internet access only for explicit devices and ports, no access from other devices)

    d) Guest (basically DMZ, everything I do not trust: Guest Wifi, IoT Devices which need to talk to the cloud, TV,..)

    e) technically there is a fifth vlan just with my upstream manufacturer router and LTE failover

    Routing is done with OpenWRT (for historical reasons)


    Basically, in the private (LAN) VLAN I have only devices I trust: no Windows, no MS Office, no closed-source devices that talk to the internet.

    Unfortunately, the vast majority of network devices still assume that the greatest danger comes "from outside". However, if a computer is infected, the attack continues from the inside: everything reachable from that computer is then at risk. You have to think about which risks you accept and how much protection you want. If it is okay that all computers and the NAS get encrypted by ransomware because my TV manufacturer doesn't ship updates, then you don't need all that.

    Hi,


    I thought I would give the upgrade to OMV7 a try. Very reasonably, the installer asks to make sure all used plugins are available for the new version. Ignorant as I am, I ask myself: what is the best way to check this? Can I search the repository files for 7.x versions of each plugin, or is there a summary or similar (I remember there was such a thing from omvextras for the 6.x upgrade)?


    Thanks!

    You should not remove things from the root directory (/root/) unless you put them there and know what they are.

    You can delete old logfiles in /var/log/

    I want to migrate OMV from the 16gb USB OMV to a larger 64gb but seems like It doesn't gonna work the easy way.

    I don't know what you mean by "easy" way, but you can just mount the hard drive in another PC and copy it to the new drive.

    Assuming /dev/sde is your old drive (16gb) and /dev/sdf is your new drive, use

    dd if=/dev/sde of=/dev/sdf bs=32M to clone it. Afterwards expand your file system on your new drive.
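    The same dd invocation can be tried risk-free on ordinary files first. A small sketch (the device names /dev/sde and /dev/sdf above are only examples, and growpart/resize2fs in the comment are my assumption for an ext4 partition; adjust to your filesystem):

```shell
# Demonstrate the block copy on throwaway files instead of real disks.
src=$(mktemp) && dst=$(mktemp)
head -c 1048576 /dev/urandom > "$src"       # 1 MiB stand-in for the old drive
dd if="$src" of="$dst" bs=32M status=none   # identical syntax works on /dev/sdX
cmp -s "$src" "$dst" && clone_ok=yes        # the clone must be bit-identical
rm -f "$src" "$dst"
# On the real (larger) target you would then grow partition + filesystem, e.g.:
#   growpart /dev/sdf 1 && resize2fs /dev/sdf1   # assumes partition 1, ext4
```

Because dd copies raw blocks, the partition table comes along for free; only the filesystem growth at the end is an extra step.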


    Could it be that root access is just deactivated? Do you have an unprivileged user account for login which can use sudo?

    Hi,

    I apologize that I did not read through all the posts in the thread, so this might have already been mentioned:


    1) You can get a very small VPS for less than 3 €/month from a hoster and run a VPN server on it (like WireGuard). Connect both of your NASes to this server and they are in the same network.


    2) (my approach) Use Syncthing (https://syncthing.net/) to synchronize between the two NASes. You can make a local borg backup (encrypted) and sync that backup folder via Syncthing. Syncthing works as soon as one of your servers can reach the other.

    (I have a small Pi with an external hard drive at my parents' home which only stores encrypted backups via Syncthing.)

    Somehow the system thinks your last update was done at Thu Oct 17 2069 07:43:07 GMT, so it refuses new data that looks like it is decades older than the last update ;)

    If you do not need the old data, the easiest would be to recreate your rrd database.
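    A sketch of the recreation, assuming the default OMV layout where the RRD files live under /var/lib/rrdcached/db (verify that path on your system first; this throws away all old graph history):

```shell
systemctl stop collectd rrdcached   # stop the writers first
rm -rf /var/lib/rrdcached/db/*      # remove the databases with the bogus timestamp
systemctl start rrdcached collectd  # fresh .rrd files are created automatically
```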

    now i see i have the same sources in 2 lists, where should i remove them?

    Remove it in /etc/apt/sources.list. As was written above:

    The update should have removed the security repo from /etc/apt/sources.list and put it in /etc/apt/sources.list.d/openmediavault-os-security.list.


    My (working) files look like this now:

    Code
    # cat /etc/apt/sources.list
    deb http://deb.debian.org/debian buster main contrib non-free
    deb-src http://deb.debian.org/debian buster main contrib non-free
    
    deb http://security.debian.org/ buster/updates main contrib non-free
    deb-src http://security.debian.org/ buster/updates main contrib non-free
    
    deb http://deb.debian.org/debian buster-updates main contrib non-free
    deb-src http://deb.debian.org/debian buster-updates main contrib non-free
    Code
    # cat /etc/apt/sources.list.d/openmediavault-os-security.list
    deb http://security.debian.org/debian-security buster/updates main contrib non-free
    deb-src http://security.debian.org/debian-security buster/updates main contrib non-free

    What other information can I include to make it easier for you to help me?

    What happens when you open "https://your-ip:81"? Do you get an error message? Is the Nextcloud displayed? Do you get a timeout? Are you sure you used httpS and not http? Is the Nextcloud configured with HSTS?

    What is the output of

    Code
    systemctl list-units --failed
    systemctl status nginx.service
    tail /var/log/nginx/error.log

    I am however not able to boot after writing the .gz file onto this SD card.

    dd is so powerful because it does a block-by-block copy, regardless of filesystem type or operating system. It looks like you packed the resulting file with gzip afterwards (hence the .gz ending). To restore, you need to unpack the image again and copy it block-by-block to the SD card, so your restore command depends on how you made the backup.

    You could boot a Linux live CD and use dd like this:

    Code
    gunzip -c /path/to/backup.img.gz | dd of=/dev/yoursdcard


    If you want to use Windows, there are reports that this can be done with Rufus: https://rufus.ie/

    You might need to unpack the gzip first (for example with 7-Zip) and then write the image to the SD card.
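    Whichever tool you use, it is worth testing the archive's integrity before writing it; gzip can do that without unpacking. A sketch on a throwaway file (substitute your real backup.img.gz path):

```shell
img=$(mktemp)
head -c 65536 /dev/urandom > "$img"   # stand-in for the raw dd image
gzip "$img"                           # produces $img.gz, like the backup file
gzip -t "$img.gz" && gz_ok=yes        # -t verifies the archive without unpacking
rm -f "$img.gz"
```

If `gzip -t` reports an error, the backup file itself is damaged and no write tool will produce a bootable card from it.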

    In the past my NAS hard drives were running all the time. However, over time the need for the NAS has decreased: it is now mainly used to back up my workstations once per day and to serve the home cinema in the evening. So I decided to regularly spin down the hard drives for noise and power reduction. (Not sure if that is a good idea or if I should spend the extra € and let them run...) I don't want to shut everything down, because the NAS is also used for other services (database / printer / scanner ...).
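    For the spindown itself I rely on the standard hdparm mechanism (OMV can also set this per disk in the web interface; device name and timeout below are examples):

```shell
hdparm -S 242 /dev/sda   # -S 242 = spin down after 1 h idle (values 241-251 mean (n-240)*30 min)
hdparm -C /dev/sda       # check the current power state (active/idle vs. standby)
```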


    However, now every time a PC mounts a shared folder (NFS or SMB), the hard drives start spinning. In addition, some clients freeze until they can read data from the drives. (For example, I have an NFS folder in my 'favorites'; now every time an open-file dialog appears, it tries to read that folder even if it is not opened.)

    Is there a way to improve this? In principle the drive would not need to start spinning just to list the directory. I was thinking of having the main shared folders on the system SSD and the data in subfolders on the hard drive, which then would not spin up on mounting, but that seems quite complicated in the OMV world. The next idea was to use caching on the client (like fscache for Linux). That worked quite well in the beginning, but I had read errors, so I will investigate it further. For the Windows client this did not work. The Mac client somehow does caching anyway and works quite well out of the box.

    Now I am wondering if I can cache the filesystem metadata on the server, so that the hard drives only start when an actual file is read. I am new to this topic. Does this exist? I have read about bcache, but I am not sure if it would help in this case. Has anyone done this?


    Hope my problem became clear ;)

    Thanks!

    Johannes

    OK, but how can I limit the size of the backup? Time Machine makes incremental backups, and as long as there is space, the backup grows.

    [Edit]
    Found the solution here: https://www.reddit.com/r/homel…chine_backups_on_a_samba/


    To set quotas you need to put a .com.apple.TimeMachine.quota.plist file:


    Code
    cat << _EOF_ > /srv/backup/timemachine/$USERNAME/.com.apple.TimeMachine.quota.plist
    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
    <plist version="1.0">
    <dict>
    <key>GlobalQuota</key>
    <integer>300000000000</integer>
    </dict>
    </plist>
    _EOF_

    The size is in bytes, so 300 GB in this example.

    When I read the topic title with "RPi" and "RAID" it was clear I would need popcorn...
    Many people setting up a RAID do not know how to handle it in case of a problem (at least I didn't when I first used one). Therefore, the support forums are full of people crying for help because their data is lost. So again: most people talking about RAID have heard somewhere that it is cool/safe/fast/etc., but don't really know it. I wanted a RAID because of speed. However, nowadays that is bullshit; if you want speed, just use a big SSD, which can easily saturate Gbit Ethernet and soon 10G Ethernet.
    If you want to sleep better thinking about your NAS, do not forget about Self-Monitoring, Analysis and Reporting Technology (SMART).


    Thankfully, SMART capabilities have become so good that none of my drives which failed in the last years (decades) failed silently; they accumulated more and more errors over time. You could literally watch them aging before dying. At least there was enough time to update the backup and plan the swap of the drive without any downtime. That only works if there is no single event which kills your hard drive (power outage, fire, water leak, theft ...). However, in such an event, chances are high that all drives in the device fail at once, which would also kill every RAID.


    Invest the money you would spend on a hardware RAID into a "real" PC. USB-powered hard drives might not be the best solution. They can work, but the risk is high that there will be voltage drops or spikes from the USB controller, which the drive does not like. You do not want your controller to be the cheapest part of your setup (if the data is important).


    So:
    1) Automatic backup solution (no one manages to make manual backups as regularly as needed)
    2) Regular backups (depends on your data: hourly/daily)
    3) Use good hardware to prevent failures (UPS, proper power supply (USB drives don't like fluctuating power), keep temperatures in the recommended range ...)
    4) Monitor your system/hardware (SMART) to detect aging before a drive fails

    This problem has to be solved by the owners of nic.funet.fi. You could leave them a message.


    You have two options:
    1) Workaround: apt-get -o Acquire::Check-Valid-Until=false update
    2) Switch to a different mirror
    Have a look at https://www.debian.org/mirror/list and choose one of the primary mirrors. You can ping them first to see which one is closest to you. Anyway, every mirror in Europe should be good for you; you probably won't notice a difference even with a US one.
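    If you stay with that mirror for a while, option 1 can be made permanent with a small apt config file instead of typing the -o flag on every update (the file name is my choice; any name in that directory works):

```shell
# /etc/apt/apt.conf.d/99no-check-valid-until
Acquire::Check-Valid-Until "false";
```

Note this disables a safety check (stale-mirror detection), so remove the file once the mirror is fixed.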


    [edit]
    The packages are the same no matter which country the mirror is in, if that was the question.

    Yes, but not via the web interface (there is no plugin). Note that OMV uses nginx, while most web interfaces require Apache.
    I would suggest installing Docker and then using the Docker image of the software you like (SVN or Git server).


    I have already adjusted the svdrphosts.conf. (If that is even necessary)

    Adjusting it should be necessary (although I don't know what the OMV plugin may set).
    There should be something like "192.168.36.0/24" in it, matching your subnet.


    Did you restart the VDR after the change? Is there anything in the logs? What does "vdr --version" say?

    When I enter this, the following message appears:


    Last login: Sun Oct 15 20:31:11 on ttys000
    Didi-iMac-3:~ Didi$ ssh root@192.168.2.109
    ssh: connect to host 192.168.2.109 port 22: Connection refused
    Didi-iMac-3:~ Didi$


    The error message means that no SSH server is listening on port 22 on your NAS, or that root login is not allowed. This can have several causes. Please check:


    - The IP is correct
    - SSH is enabled (in the web GUI: Services -> SSH)
    - Root login is enabled (same page)


    After starting SSH and/or enabling root login, you should be able to log in. You still need the password, but hopefully you know it ;) (maybe try the same one as for the web interface)


    Alternatively:
    As donh wrote: connect a keyboard and monitor and log in there (username "root" and your password. Note: on Linux, nothing (no asterisks) is shown while you type the password).