Posts by Rd65

    Does that make sense? Jessie has been end of life for more than a year.

    I think it makes sense because it fixes issues and gives you time to migrate, if you want to migrate.
    The issue is a wake-up call... but if you don't want to migrate, you can stay on Jessie and live with old software. Whether to migrate is a decision for the root user, not for package maintainers far away.
    But I would also recommend the update.

    It may be a better solution to (re)write a plugin to manage the "NetworkManager" config tool, which is the Linux standard.
    It manages all kinds of network links including WiFi and WiFi AP, bridging, LAN, VPN, WAN, IPv6... and it's easy to configure.
    You can use NetworkManager now by disabling the interface in OMV, but you will not see any statistics about your network device. (Depends on the monit config, I think.)


    NetworkManager configures all links not set up in /etc/network/interfaces, so you need to remove all entries except lo ... (that's the reason to disable network config in OMV too) ...


    Here I run a Gigablue Quad Plus on OpenATV (any other distribution works too), including a disk with NFS and Samba file server services. In OMV I use the omv remote-mount plugin to first mount the receiver's share into the NAS, and then offer it again via OMV as a Samba share. I have mounted the share from the Fritzbox the same way, so both shares are in the NAS and everything is available under one IP/account in OMV; I may still switch that to faster NFS. With the Fritzbox you have to trick a little since it only speaks CIFS/SMB1, but a normal Samba server runs on the receiver, and NFS/FTP/SCP also work with the box. A diskless receiver would also be possible, mounting a share (the movies folder of the NAS or the Fritzbox) and recording onto it. But since I don't want all media services to be down when the NAS goes into standby, I use the approach with a local disk in the receiver and a remote share on the NAS, plus dedicated shares on the receiver and the Fritzbox. I once considered installing OMV on the receiver, but the developer philosophies of OMV and OpenATV unfortunately don't get along particularly well. That's not tragic though: this way each box does its job, everything can be reached from the NAS, and all is well. DLNA runs as a service on all three machines.


    What I unfortunately haven't managed yet is to teach the Plex server on the NAS (not controlled by OMV, i.e. without the Plex plugin/Docker) to use the receiver with OpenATV/Sat as a source for live TV/DVR... somehow a plugin is missing on the OpenATV side for that... but I'll still tinker that together. For the Fritzbox on a cable connection there are also receiver functions via the DVB-C repeater... but I lack the cable connection to test that with Plex. Maybe my NAS will get a sat receiver card instead and the Gigablue gets kicked out. We'll see...

    OK, so in my opinion, the RAID is not part of the problem.
    But you can test more things.
    Install iperf3:
    #apt install iperf3
    on your server, and on a client depending on its OS:
    https://iperf.fr/iperf-download.php#windows
    You can use Android, Mac or Windows too.
    It's a command line executable without a GUI.
    Start in a shell:
    #iperf3 -s
    This will be your server.
    Now connect with
    a shell, cmd, CLI or whatever you use as terminal on your client:
    #iperf3 -c server-ip
    and you will see statistics.
    Try #iperf3 -? or -h; you can change block sizes and more things.
    You can connect locally (to your own IP) or via Ethernet, WLAN or whatever.
    Connecting to your own IP (in a second shell) tests the local network stack; connecting to a server behind a switch tests the switch and the network hardware on both sides.
    You may also swap the roles of client and server... (and open the Windows firewall, or disable it for iperf3, first).
    Now you are able to test your network, including network adapters, without interference from other devices.
    If you identify slow connections or broken hardware, you can replace network hardware or double-check the configs.
    If it is all fine, then the problem is not the network.
    iperf3 is your friend for testing throughput and reliability of networks!
    Use it.


    The upper part shows statistics from the client side; the lower part shows server statistics.
    Lots of retries indicate problems... slow or fluctuating speeds indicate problems... and so on.
    This is my WLAN link.


    $ iperf3 -t 60 -P 100 -c nas


    This will test 100 parallel connections for 60 seconds on a 1 Gbit line,
    and it says:


    [SUM] 0.00-60.00 sec 6.54 GBytes 937 Mbits/sec 0 sender
    [SUM] 0.00-60.00 sec 6.52 GBytes 934 Mbits/sec receiver
    plus one MP3 music stream from Plex :)


    Take a look at your CPU stats too:
    top - 22:16:34 up 2 days, 21:09, 2 users, load average: 0.67, 0.40, 0.28


    The load should not go through the ceiling.
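The load averages top prints in its first line also live in /proc/loadavg, which is handy for scripting a quick check while a transfer runs. A small sketch (field names per the standard /proc layout):

```shell
# /proc/loadavg holds: 1, 5 and 15 minute load averages,
# then running/total tasks and the last PID used.
read load1 load5 load15 rest < /proc/loadavg
echo "load: $load1 (1m) $load5 (5m) $load15 (15m)"
```
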


    An older but common way to do these tests is using the echo service on inetd:
    take a look at https://en.wikipedia.org/wiki/Echo_Protocol
    All *nixes support that, but you need to install some software for it too. iperf3 is the easier way to do it.

    SATA - /dev/sdX
    SSD - /dev/sdX
    SCSI - /dev/sdX
    IDE - /dev/hdX


    Even a USB3-attached 8 TB disk will show up as /dev/sdX.


    Use
    #apt install logwatch
    and, as an example,
    #logwatch --output mail --format html --detail med --range 'between -7 days and -1 days'
    or
    #logwatch --detail low --range 'between 4/23/2019 and 4/30/2019'|less

    Try to decompose your problem by separating net and RAID, and test them separately.
    Use iperf for network traffic tests and use dd locally for big files on the RAID.
    Check your RAID first:
    #dd if=/dev/zero of=/srv/path-to-raid/sample1G.txt bs=1G count=1
    and
    #dd if=/dev/zero of=/srv/path-to-raid/sample10G.txt bs=1G count=10 status=progress
    and
    #dd if=/dev/zero of=/srv/path-to-raid/sample100G.txt bs=1G count=100 status=progress
    This creates 3 files in 3 sizes: 1G, 10G and 100G.
    So you can check whether the RAID is working.
    Then... read about testing with iperf... https://iperf.fr/ or https://www.linode.com/docs/ne…e-network-speed-in-linux/ as examples.
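A scaled-down sketch of the same dd check (1 MiB instead of 1-100 GiB, /tmp standing in for the /srv/path-to-raid placeholder), with a read-back checksum added so you also verify the array returns exactly what was written:

```shell
# Write a small test file (use the big sizes from above on the real RAID path).
dd if=/dev/urandom of=/tmp/sample1M.bin bs=1M count=1 status=none
sum_written=$(sha256sum /tmp/sample1M.bin | cut -d' ' -f1)

# Read it back through dd to exercise the read path too.
sum_read=$(dd if=/tmp/sample1M.bin bs=1M status=none | sha256sum | cut -d' ' -f1)

echo "written: $sum_written"
echo "read   : $sum_read"
```

If the two checksums differ, the problem is on the disk/RAID side, not the network.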

    The easiest and safest way to do backups... no matter which *nix or OMV system is running... is dd... as used in omv-backup.
    This is 40 years of proven backup strategy... and it has worked throughout those 40 years... you can do it on the shell, with any Linux live distro, from CD, USB boot stick or PXE, or with the backup web plugin; Clonezilla and Acronis do the same...



    #dd if=/dev/sda1 | gzip > /tmp/image.gz
    #gunzip -c /tmp/image.gz | dd of=/dev/sda1
    This is all you need for a full, working backup and restore - on every Linux system. The first line backs up, the second restores.
    If you do an init 1 in the SSH shell before you dd your root disk, you are on the safe side against open/changing files.
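A safe way to rehearse that backup/restore cycle before pointing it at a real disk is to run the exact same pipeline against an ordinary file instead of /dev/sda1 (all paths here are placeholders):

```shell
# Stand-in for /dev/sda1: a small file full of random data.
dd if=/dev/urandom of=/tmp/fakedisk.img bs=1M count=2 status=none

# Backup: dd the "device" through gzip into an image file.
dd if=/tmp/fakedisk.img status=none | gzip > /tmp/image.gz

# Restore: unpack the image and dd it back onto the "device".
gunzip -c /tmp/image.gz | dd of=/tmp/restored.img status=none

# The restored copy must be bit-identical to the original.
cmp /tmp/fakedisk.img /tmp/restored.img && echo "restore matches original"
```
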


    You can even backup/mirror between servers...
    #dd if=/dev/sda|gzip| ssh root@target 'dd of=/tmp/image.gz'
    #dd if=/dev/sda|bzip2 -c| ssh username@servername.net "bzip2 -d | dd of=/dev/sdb"


    It is not a good idea to back up only specific software like OMV... you would never try to back up just Office Word + its data on Windows... so why only OMV on Debian? Rethink your software and backup strategy first...


    A second solution with a tiny footprint may be


    #tar cJPf /var/backups/$(date +%F)-$(hostname)-etc.tar.xz /etc/
    #tar cJPf /var/backups/$(date +%F)-$(hostname)-etc-omv.tar.xz /etc/openmediavault/


    as a daily or weekly cronjob. This will save 99.9% of system configs and help with mishaps during configuration via the web interface.
    But it does not save the necessary file system structures, so mounts may fail on recreation.
    Use it as a library to compare changes in config files, spot suspicious configs and so on. You can do this backup with the cron plugin. Use a second disk as the datastore for your backups. Use both methods and you will never lose any config data. And one last thing... never trust yourself: test the backups before the emergency happens! The loop device is your friend!
    You can use rsync locally and over the net for this too...
    My last attempt with fsarchiver went wrong because the fsarchiver on SystemRescueCd can't read encrypted data from the fsarchiver on Debian... don't know why... but one more reason for dd.
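To test the tarball approach without touching the real /etc, the same dated-archive pattern can be rehearsed on a scratch directory. A sketch, using gzip instead of xz so it stays portable (the J variant above works the same way), with uname -n in place of hostname:

```shell
# Scratch directory standing in for /etc.
mkdir -p /tmp/fake-etc
echo "key=value" > /tmp/fake-etc/demo.conf

# Dated archive name, same pattern as the cronjob above.
backup=/tmp/$(date +%F)-$(uname -n)-etc.tar.gz
tar czPf "$backup" /tmp/fake-etc/

# Never trust yourself: list the archive to verify it before you need it.
tar tzf "$backup"
```
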

    OMV will never take care of the whole system; it only cares about what it was installed and configured for.
    nginx, for example, is managed by the plugin omv-nginx. So uninstall omv-nginx and set the web interface to port 81; then port 80 is free and unused. If you do not use HTTPS for the web interface, port 443 will not be used either. OMV then won't override these ports again and you can use them yourself. This also applies to many other OMV functions.
    For example, you can turn off OMV's network management by removing all network cards in OMV.
    However, you then have to configure them yourself (e.g. via NetworkManager) or by hand on the shell. All configs can be changed with this approach in mind.
    But you should think twice before you complain on the forum because something does not work.
    Now you can simply add logical IP addresses by following Debian guides.
    But everything you configure in OMV, OMV will mercilessly enforce, overriding your hand-made configs. It's a very thin line between auto-configuration like OMV's and configuring by hand. You can't have both. But you can even configure a base system in OMV, remove OMV, and in the future do everything by hand. It's not easy, but it's possible.
    However, if you rely only on OMV's knowledge, you have no chance of doing it yourself. I use OMV out of laziness, not because I can't.


    But there is a third way... a very bad and dark way... never complain about this... you are on your own... I can only whisper it... but...
    if you want to deny changes by OMV, freeze the file with the command chattr +i ...
    lalala lalalalala.... :)

    Is the question still open?
    I could help you, but it is questionable whether and what it would gain you.
    Just as you can assign 5 addresses to one NIC, you can assign 5 addresses to 5 NICs... but if they all hang on one network / CSMA-CD segment or on one switch, it gains little. With 2 NICs, bonding sometimes still makes sense...
    Switching from 100 Mbit to Gigabit hardware does more for throughput... but if all that gear hangs on a slow DSL line... or your switch is overloaded, even that helps little.
    So first the questions: what infrastructure do you have there? 100 or 1000 Mbit? What do you need different IPs for, and have you ever set up Docker with bridge, host or macvlan?
    Are you at least familiar with setting up a network card in Debian?
    Can you operate a shell?
    And not unimportant either... what kind of network card is it? 4 real LAN chips, or one chip with 4 switch ports?
    What does the output of cat /proc/net/dev say?
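To see what that reports: /proc/net/dev has two header lines, then one line per interface with RX/TX byte and packet counters; the first colon-separated field is the interface name. A one-liner sketch that pulls out just the names:

```shell
# List interface names from /proc/net/dev (counters follow each name).
ifaces=$(awk -F: 'NR>2 {gsub(/ /,"",$1); print $1}' /proc/net/dev)
echo "$ifaces"
```

Four names like eth0-eth3 here would suggest four real interfaces; a single name would point at the one-chip-with-switch-ports case.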

    Code
    # mdadm --details --scan
    mdadm: unrecognized option '--details'
    Usage: mdadm --help
      for help

    does nothing...
    Do you mean


    Code
    # mdadm --detail --scan
    ARRAY /dev/md/raid metadata=1.2 name=nas:raid UUID=264e1924:4cba2353:1a44a0f8:6fd1dda6

    or

    Code
    # mdadm --examine --scan
    ARRAY /dev/md/raid  metadata=1.2 UUID=264e1924:4cba2353:1a44a0f8:6fd1dda6 name=nas:raid

    or

    I'm confused now. You keep mentioning array names or mdadm labels (the --name argument of the mdadm command, I assume you are referring to?), and the link you posted refers to a filesystem label. Big difference. I have used filesystem labels from the beginning, and if you create a filesystem on an array using the OMV web interface, the filesystem is given the label that you type into the Label field. Even in the future, I see no reason to give an array a name. The array is rebuilt from signatures on the drives, not the name, and the filesystem on the array is mounted by its label or UUID.

    Hmm...
    maybe more research is necessary.
    I'm sure I renamed the filesystem label with # e2label in the last few days.
    And I think this changed the output of # mdadm --detail --scan >> /etc/mdadm/mdadm.conf, and this results in the error of the omv-mdadm script: "/usr/share/openmediavault/mkconf/mdadm: 99: [: ARRAY: unexpected operator"
    This is my current working thesis, completely described above.
    User geaves told me in the past to think about renaming /dev/md/md0 to /dev/md/raid... and I did the renaming... with e2label.
    I know it's not a good idea to change the so-called disklabel... which holds partition information, RAID information... but that one is written by mdadm and fdisk... and I am sure I did not change that.
    Maybe you can take a look at the first posts and tell me what's wrong?