Posts by Rd65

    use
    #apt install logwatch
    and
    #logwatch --output mail --format html --detail med --range 'between -7 days and -1 days'
    for example
    or
    #logwatch --detail low --range 'between 4/23/2019 and 4/30/2019'|less
    ..

    Try to decompose your problem by separating network and RAID and testing them separately.
    Use iperf for network traffic tests and use dd locally for big files on the RAID.
    Check your RAID first:
    #dd if=/dev/zero of=/srv/path-to-raid/sample1G.txt bs=1G count=1
    and
    #dd if=/dev/zero of=/srv/path-to-raid/sample10G.txt bs=1G count=10 status=progress
    and
    #dd if=/dev/zero of=/srv/path-to-raid/sample100G.txt bs=1G count=100 status=progress
    This creates 3 files in 3 sizes: 1G, 10G and 100G,
    so you can check if the RAID is working.
    Then read about testing with iperf... https://iperf.fr/ or https://www.linode.com/docs/ne…e-network-speed-in-linux/ for example.
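
    A minimal iperf sketch (host name and run time are assumptions): start a server on the NAS, then run a client against it from another machine and compare the reported throughput with your link speed.

    Code
    # on the NAS (server side)
    iperf -s
    # on a client, pointing at the NAS
    iperf -c nas.local -t 30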

    The easiest and safest way to do backups... no matter which *nix or omv system is running... is dd... as used in omv-backup.
    This is 40 years of proven backup strategy... and it has simply worked for 40 years... you can do it on the shell, from any Linux live distro, from CD, USB boot stick or PXE, or with the backup web plugin; Clonezilla and Acronis do the same...



    #dd if=/dev/sda1 | gzip > /tmp/image.gz
    #gunzip -c /tmp/image.gz | dd of=/dev/sda1
    This is all you need for a full working backup and restore - on every Linux system. The first line backs up, the second restores.
    If you do an init 1 in the SSH shell before you dd your root disk, you are on the safe side against open/changing files.


    You can even backup/mirror between servers...
    #dd if=/dev/sda|gzip| ssh root@target 'dd of=/tmp/image.gz'
    #dd if=/dev/sda|bzip2 -c| ssh username@servername.net "bzip2 -d | dd of=/dev/sdb"
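
    And the way back, pulling the image from the remote host and writing it to a local disk (typically done from a live system; host, image path and target device are assumptions):
    #ssh root@target 'cat /tmp/image.gz' | gunzip | dd of=/dev/sda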


    It is not a good idea to back up only specific software like omv... you would never try to back up only Office Word + data on Windows... so why only omv on debian? Rethink your software and backup strategy first...


    A second solution with a tiny footprint may be


    #tar cJPf /var/backups/$(date +%F)-$(hostname)-etc.tar.z /etc/
    #tar cJPf /var/backups/$(date +%F)-$(hostname)-etc-omv.tar.z /etc/openmediavault/


    as a daily or weekly cronjob. This will save 99.9% of the system configs and help with mishaps during configuration via the web interface.
    But this does not save the necessary file system structures, so mounts may fail on recreation.
    Use it as a library to compare changes in config files, spot suspicious configs and so on. You can do this backup with the cron plugin. Use a second disk as datastore for your backups. Use both methods and you will never lose any config data. And one last thing... never trust yourself - test the backups before an emergency happens! The loop device is your friend!
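
    A minimal sketch of both ideas, the tar job as a root cron entry and a loop-mount check of a dd partition image; schedule, file names and mount point are assumptions:

    Code
    # /etc/cron.d/config-backup: weekly tar of /etc at 03:00 on Sundays (note the escaped %)
    0 3 * * 0  root  tar cJPf /var/backups/$(date +\%F)-$(hostname)-etc.tar.z /etc/
    # verify a dd partition image via a loop device before you need it
    gunzip -c /tmp/image.gz > /tmp/image
    mount -o loop,ro /tmp/image /mnt    # inspect /mnt, then: umount /mnt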
    You can use rsync locally and via net for this too....
    My last attempt with fsarchiver went wrong because the fsarchiver on the SystemRescue disk can't read encrypted data from fsarchiver on debian... don't know why... but one more reason for dd.
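
    A hedged rsync sketch for the same config backup, locally to a second disk and over ssh (paths and host are assumptions):

    Code
    rsync -a --delete /etc/ /srv/backupdisk/etc/
    rsync -a --delete -e ssh /etc/ root@backuphost:/srv/backups/etc/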

    omv will never take care of the whole system, it only cares about what it was installed and configured for.
    nginx, for example, is managed by the plugin omv-nginx. So uninstall omv-nginx and set the web interface to port 81, then port 80 is free and not used. If you do not use https for the web interface, port 443 will not be used either. So omv won't override these ports again and you can use them yourself. This also applies to many other omv functions.
    You can, for example, turn off the network management of omv by removing all network cards in omv.
    However, then you have to configure them yourself (e.g. via Network Manager) or by hand on the shell. All configs can be changed with a well-thought-out approach.
    But you should think twice before you complain in the forum because something does not work.
    Now you can simply set up logical IP addresses by reading debian guides.
    But everything you leave to omv, omv will do mercilessly and override these configs. It's a very thin line between auto-config like omv and configuring by hand. You can't have both sides. But you can even configure a base system in omv, remove omv, and in the future do everything by hand. It's not easy, but it's possible.
    However, if you rely on the knowledge of omv, you have no chance to do it yourself. I use omv out of laziness, not because I can't.
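
    As an illustration of doing it by hand, a minimal static setup in Debian's /etc/network/interfaces might look like this (interface name and addresses are assumptions; dns-nameservers needs the resolvconf package):

    Code
    # /etc/network/interfaces - manual setup, no longer managed by omv
    auto enp5s0
    iface enp5s0 inet static
        address 192.168.1.10
        netmask 255.255.255.0
        gateway 192.168.1.1
        dns-nameservers 192.168.1.1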


    But there is a third way... a very bad and dark way... never complain about this... you are on your own... I can only whisper it... but...
    if you want to deny changes by omv, freeze the file with the command chattr +i ...
    lalala lalalalala.... :)
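
    A tiny sketch of that dark path (the file name is an assumption); lift the flag again before any legitimate change:

    Code
    chattr +i /etc/network/interfaces   # immutable - even root can't change it until the flag is removed
    lsattr /etc/network/interfaces      # the 'i' flag should now be visible
    chattr -i /etc/network/interfaces   # undo it when you want to edit again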

    Is the question still current?
    I could help you, but it is questionable whether and what it would gain you.
    Just as you can assign 5 addresses to one NIC, you can also assign 5 addresses to 5 NICs... but if they all hang on one network / CSMA-CD segment or on one switch, that gains little. With 2 NICs, bonding sometimes still makes sense...
    Switching from 100Mbit to gigabit technology does more for throughput... but if all that gear hangs on a slow DSL line... or your switch is overwhelmed, even that helps little.
    So first the question... what infrastructure do you have there? 100 or 1000Mbit, what do you need different IPs for, and have you ever set up Docker with bridge, host or macvlan?
    Are you at least familiar with setting up a network card in debian?
    Can you use a shell?
    And also not unimportant... what kind of network card is it? 4 real LAN chips? Or one chip with 4 switch ports?
    What does the output of cat /proc/net/dev say?

    Code
    # mdadm --details --scan
    mdadm: unrecognized option '--details'
    Usage: mdadm --help
      for help

    does nothing...
    Do you mean


    Code
    # mdadm --detail --scan
    ARRAY /dev/md/raid metadata=1.2 name=nas:raid UUID=264e1924:4cba2353:1a44a0f8:6fd1dda6

    or

    Code
    # mdadm --examine --scan
    ARRAY /dev/md/raid  metadata=1.2 UUID=264e1924:4cba2353:1a44a0f8:6fd1dda6 name=nas:raid

    or

    I'm confused now. You keep mentioning array names or mdadm labels (the --name argument in the mdadm command, I assume you are referring to?) and the link you posted is referring to a filesystem label. Big difference. I have used filesystem labels since the beginning, and if you create a filesystem on an array using the OMV web interface, the filesystem is given the label that you type in the Label field. Even in the future, I see no reason to give an array a name. The array is rebuilt from the signatures on the drives, not the name, and the filesystem on the array is mounted by label or uuid.
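
    For anyone following along, these are two different labels; a quick way to see both on a running system (the device name is an assumption):

    Code
    e2label /dev/md127                     # prints the ext4 filesystem label
    mdadm --detail /dev/md127 | grep Name  # prints the array name stored in the md superblock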

    hmm...
    Maybe there is more research necessary.
    I'm sure I renamed the filesystem label with # e2label in the last days.
    And I think this changed the output of # mdadm --detail --scan >> /etc/mdadm/mdadm.conf, and this results in the error of the omv-mdadm script: "/usr/share/openmediavault/mkconf/mdadm: 99: [: ARRAY: unexpected operator"
    This is my current working thesis. Completely described above.
    User geaves told me to think about renaming /dev/md/md0 to /dev/md/raid in the past... and I did the renaming... with e2label.
    I know it's not a good idea to change the so-called disklabel... which holds partition information, raid information... but that is written by mdadm and fdisk... and I am sure I didn't change that.
    Maybe you can take a look at the first posts and tell me what's wrong?

    Naaa... I'm interested in opinions on this and I want to know if other people have the same problems. Filing an issue or a pull request is step 2. We are on step 1.

    Correct but OMV doesn't use raid names. So, why would it matter? I have used mdadm for over 15 years and never cared if the array had a name.

    That's the point.
    On old systems without udev we had fixed, unchanging, well-working names for devices.
    Since udev we need UUIDs, volume names and so on.
    15 years ago nobody got the idea to change volume names...
    But today and in the future, it becomes more and more important.
    As an example:
    https://support.clustrix.com/h…ume-persists-after-reboot
    So not using mdadm labels, or denying the use of mdadm labels, is a layer 8 error by design!

    I disagree. OMV relies on many things being built from the web interface. OMV's intended target audience (home users) has little need to create the array from the command line.

    I disagree. The intended audience is only a "marketing decision". That's ok, but that's no excuse to break standard Linux handling.
    Ok, it is my "fault" to rename raids as described in Linux tutorials... and it is your right to declare everything not considered by the omv developers as "nonstandard" and not desirable.


    A very limited view.
    I found the reason for this behavior of omv, and your reaction is... eyes shut and "we make no mistakes".
    Ok, believe it... but you are wrong!
    OMV supports naming of data volumes... but not of raids. That's the correct description!
    If this is really a "design decision" by Volker... ok, well...

    Sorry, but omv is not an operating system nor a system driver, but only a web interface with a few scripts.
    So the right way to build and move arrays depends on mdadm and NOT on omv.


    But I have probably found the reason for this problem in my bash history.
    I renamed the raid in the last days with # e2label /dev/md0 RAID
    truly a legal operation!
    And this may have changed the mdadm.conf accordingly.
    The whole system accepts that change, including initramfs, grub, mounts and so on... and can work with it...


    Exception: omv-mkconf mdadm


    And you tell me this is not an omv bug? Haha...
    Try it out...
    But thank you for helping me find this...

    Ok, we're just going around in circles as you give me more information. Have you simply tried renaming the mdadm.conf file and then running omv-mkconf mdadm?

    No. Not knowingly. I moved the array from an old installation and hardware (debian, webmin) to new hardware, and after that renewed the omv base installation at least 2 times. To get the right parameters for the array in mdadm.conf I always use # mdadm --detail --scan >> /etc/mdadm/mdadm.conf - because mdadm is able to reconstruct its config from the partition data, in contrast to hardware raids. The array never made any problems with that, so I had no need to manipulate the mdadm configs. omv-mkconf mdadm fails... this is no circle. This is plainly a bug in the script.
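
    A hedged sketch of regenerating the ARRAY lines instead of appending them again and again, so mdadm.conf does not collect duplicate or stale entries (standard Debian paths; the backup file name is an assumption):

    Code
    cp /etc/mdadm/mdadm.conf /etc/mdadm/mdadm.conf.bak
    grep -v '^ARRAY' /etc/mdadm/mdadm.conf.bak > /etc/mdadm/mdadm.conf   # drop the old ARRAY lines
    mdadm --detail --scan >> /etc/mdadm/mdadm.conf                       # append the current ones
    update-initramfs -u                                                  # let the initramfs pick them up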

    The array is shown as /dev/md127 on the omv Filesystems web page and by ls /dev/md*
    but the array is controlled via /dev/md/raid - so these all work:


    mdadm --stop /dev/md/raid
    mdadm --detail /dev/md/raid
    and so on...


    The raid is mounted as:
    /dev/md127 on /srv/dev-disk-by-id-md-name-nas-raid type ext4 (rw,noexec,relatime,stripe=384,data=ordered,jqfmt=vfsv0,usrjquota=aquota.user,grpjquota=aquota.group)


    As I said... the array works well... only omv-mkconf mdadm, which is called by omv-initsystem and necessary after all hardware changes, complains about my raid config. And omv-initsystem is used after installation changes and after new installations to update the omv config.xml from system values. This is common.

    The output of mdstat was added to the last post as an addition...

    Code
    # mdadm --detail --scan --verbose
    ARRAY /dev/md/raid level=raid5 num-devices=4 metadata=1.2 name=nas:raid UUID=264e1924:4cba2353:1a44a0f8:6fd1dda6
       devices=/dev/sdd,/dev/sde,/dev/sdf,/dev/sdg

    The array parts are listed as md127, but the array itself is named "raid"... not md0... as shown above too.
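
    For readers: the two names normally point at the same device; /dev/md/<name> is a udev-managed symlink to the kernel node, which you can check with:

    Code
    ls -l /dev/md/raid        # typically a symlink pointing at ../md127
    cat /proc/mdstat          # shows the kernel name (md127) and the member disks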


    Again the question: how do I rename "raid" to "md0"?
    To my knowledge these are udev problems... well known from renamed network cards eth0 > enp5s0 and disk names like /dev/sdX to whatever...
    I want to change the name of /dev/md/raid to /dev/md/md0 or whatever fits best... BUT HOW?
    I never played around in these udev configs... and I want my md0 back! Or someone who cares about the omv-mdadm script!
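
    One possible route, an untested sketch based on man mdadm (not an omv procedure; unmount the filesystem first, have a backup, and double-check the flags before running): stop the array, reassemble it with --update=name, then rewrite mdadm.conf and the initramfs. The member disks are taken from the scan output above.

    Code
    mdadm --stop /dev/md/raid
    mdadm --assemble /dev/md0 --name=md0 --homehost=nas --update=name /dev/sdd /dev/sde /dev/sdf /dev/sdg
    mdadm --detail --scan     # check the new name, then replace the old ARRAY line in /etc/mdadm/mdadm.conf
    update-initramfs -u       # so the initramfs learns the new name as well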

    I cannot help you from afar, but job #1 would be to bring order into your network - duplicate IPs do not work at all.

    Why are you running this anyway, as the rest of your post gives no indication there is a problem?

    Because each array has a definition of the form /dev/md?, with the ? being a number. Was the raid created using OMV? What does Raid Management show you in the GUI?

    omv-mkconf mdadm is part of omv-initsystem. I changed network cards, so I must reconfigure things.


    Code
    /usr/share/openmediavault/mkconf/mdadm: 99: [: ARRAY: unexpected operator


    This is the error message as written. lsblk shows:


    and

    Code
    # ls /dev/md/
    raid

    In the beginning this was md0, but I don't know why... maybe on the kernel change to pve... it changed to "raid".
    It works as expected... but omv-mkconf mdadm struggles...
    Maybe this depends on strange/broken udev rules, but there is no rule or reason NOT to name a raid "raid".


    If this is a problem for omv-mkconf mdadm, please tell me a way to rename /dev/md/raid back to /dev/md/md0.


    Thank you


    Addition for clarification: