mdadm on omv 4.1.27-1

  • Hi,
    I get an error from omv-mkconf. This is my RAID on OMV 4.1.27-1.

    I use

    Code
    # mdadm --detail --scan >> /etc/mdadm/mdadm.conf

    to update the config, as described in detail at https://wiki.archlinux.org/ind…Update_configuration_file and elsewhere
    (on Debian the path is /etc/mdadm/mdadm.conf, not /etc/mdadm.conf).
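    One caveat with `>>`: every run appends another copy of the ARRAY lines, so the file slowly accumulates duplicates. A minimal sketch of refreshing the ARRAY definitions without duplicating them; it uses a temp file and a hard-coded stand-in for the scan output (taken from this thread), so nothing here touches a real config:

```shell
#!/bin/sh
# Sketch: refresh ARRAY lines in an mdadm.conf-style file without duplicates.
# The scan string is a stand-in; on a real system it would come from
# `mdadm --detail --scan`, and $conf would be /etc/mdadm/mdadm.conf.
conf=$(mktemp)
cat > "$conf" <<'EOF'
DEVICE partitions
ARRAY /dev/md/old metadata=1.2 UUID=00000000:00000000:00000000:00000000
EOF
scan='ARRAY /dev/md/raid metadata=1.2 name=nas:raid UUID=264e1924:4cba2353:1a44a0f8:6fd1dda6'

grep -v '^ARRAY' "$conf" > "$conf.new"   # keep everything except stale ARRAY lines
printf '%s\n' "$scan" >> "$conf.new"     # append the current definitions
mv "$conf.new" "$conf"

grep -c '^ARRAY' "$conf"                 # exactly one ARRAY line, however often this runs
rm -f "$conf"
```

    Plain `>>` works once on a fresh file; the filter step only matters when the command is rerun after hardware or name changes, as in this thread.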


    In the next step I run this and get:

    Code
    # omv-mkconf mdadm
    
    
    /usr/share/openmediavault/mkconf/mdadm: 99: [: ARRAY: unexpected operator
    update-initramfs: Generating /boot/initrd.img-4.15.18-21-pve

    But the file /etc/mdadm/mdadm.conf contains:

    Code
    # grep -o '^[^#]*' /etc/mdadm/mdadm.conf 
    DEVICE partitions
    CREATE owner=root group=disk mode=0660 auto=yes
    HOMEHOST <system>
    ARRAY /dev/md/raid metadata=1.2 name=nas:raid UUID=264e1924:4cba2353:1a44a0f8:6fd1dda6
    MAILADDR mailaddy@host
    MAILFROM root

    So why does "omv-mkconf mdadm" complain about an operator error at the ARRAY definition? And at which operator? Why is something wrong with this?

    Equipment: a few computers, lots of waste heat, little time, and a pile of work.


    When solving problems, dig at the root instead of hacking at the leaves.

    • Official post

    i get an Error on omv-mkconf.

    Why are you running this anyway? The rest of your post gives no indication there is a problem.


    So why does "omv-mkconf mdadm" complain about an operator error at the ARRAY definition? And at which operator?

    Because each array has a definition of the form /dev/md?, with ? being a number. Was the RAID created using OMV? What does Raid Management show you in the GUI?

  • Why are you running this anyway? The rest of your post gives no indication there is a problem.

    Because each array has a definition of the form /dev/md?, with ? being a number. Was the RAID created using OMV? What does Raid Management show you in the GUI?

    omv-mkconf mdadm is part of omv-initsystem. I changed network cards, so I must reconfigure things.


    Code
    /usr/share/openmediavault/mkconf/mdadm: 99: [: ARRAY: unexpected operator


    This is the error message as written. lsblk shows:


    and

    Code
    # ls /dev/md/
    raid

    At first this was md0, but I don't know why... maybe with the kernel change to pve... it changed to "raid".
    It works as expected, but omv-mkconf mdadm struggles...
    Maybe this depends on strange/broken udev rules, but there is no rule or reason NOT to name a RAID "raid".


    If this is a problem for omv-mkconf mdadm, please tell me a way to rename /dev/md/raid back to /dev/md/md0.


    Thank you


    Addition for clarification:




    • Official post

    I changed network cards, so I must reconfigure things.

    No, all you need to do is reconfigure the network card!


    I have moved a working OMV install (with software RAID) from a big old server box to a smaller one: I transferred all the drives but did not connect them, started the new server, ran omv-firstaid to configure the NIC, rebooted to confirm, shut down, connected all the drives and restarted. That was it, no other configuration.


    Look at the output of your lsblk: your RAID is being referenced as md127. What's the output of cat /proc/mdstat and mdadm --detail --scan --verbose?

  • The output of mdstat was added to the previous post as an addition...

    Code
    # mdadm --detail --scan --verbose
    ARRAY /dev/md/raid level=raid5 num-devices=4 metadata=1.2 name=nas:raid UUID=264e1924:4cba2353:1a44a0f8:6fd1dda6
       devices=/dev/sdd,/dev/sde,/dev/sdf,/dev/sdg

    The array members are listed under md127, but the array itself is named "raid", not md0, as shown above too.


    Again the question: how do I rename "raid" to "md0"?
    To my knowledge these are udev problems, well known from renamed network cards (eth0 > enp5s0) and disk names like /dev/sdX changing to whatever.
    I want to change the name of /dev/md/raid to /dev/md/md0 or whatever fits best, BUT HOW?
    I have never played around in these udev configs, and I want my md0 back! Or someone who cares about the omv-mkconf mdadm script!
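    For what it's worth, the usual mdadm way to rename an array, independent of OMV or udev, is to stop it and reassemble with --update=name. The sketch below is a dry run: the run() wrapper only prints each command instead of executing it, because the member devices are copied from this thread's scan output and must match your system. Run the printed commands by hand, as root, after unmounting.

```shell
#!/bin/sh
# Dry-run sketch: rename /dev/md/raid back to md0 with mdadm itself.
# run() only prints each command; drop the echo to execute for real.
# Member devices (/dev/sdd../dev/sdg) are taken from this thread's
# `mdadm --detail --scan --verbose` output - adjust them to your system.
run() { echo "$@"; }

run umount /dev/md/raid                  # filesystem must be unmounted first
run mdadm --stop /dev/md/raid            # stop the running array
# --update=name rewrites the name stored in the v1.2 superblock so the
# array assembles as /dev/md0 from now on (see man mdadm for details)
run mdadm --assemble /dev/md0 --name=0 --update=name \
    /dev/sdd /dev/sde /dev/sdf /dev/sdg
run update-initramfs -u                  # refresh initramfs after the change
```

    Between the assemble and initramfs steps, the ARRAY line in /etc/mdadm/mdadm.conf also needs refreshing from the new mdadm --detail --scan output.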


  • The array is shown as /dev/md127 on the OMV Filesystems page and by ls /dev/md*,
    but the array is controlled via /dev/md/raid, so all of these work:


    mdadm --stop /dev/md/raid
    mdadm --detail /dev/md/raid
    and so on...


    The RAID is mounted as:
    /dev/md127 on /srv/dev-disk-by-id-md-name-nas-raid type ext4 (rw,noexec,relatime,stripe=384,data=ordered,jqfmt=vfsv0,usrjquota=aquota.user,grpjquota=aquota.group)


    As I said, the array works well. Only omv-mkconf mdadm, which is called by omv-initsystem and needed after all hardware changes, complains about my RAID config. And omv-initsystem is used after installation changes and after new installations to update the OMV config.xml from system values. This is common.


    • Official post

    OK, we're just going around in circles as you give me more information. Have you simply tried renaming the mdadm.conf file and then running omv-mkconf mdadm?


    And omv-initsystem is used after installation changes and after new installations to update the OMV config.xml from system values.

    I know this, but simply changing a NIC does not require this sort of change. As I stated in post 4, I've done this myself; to ensure it went without a hitch I did not connect any data drives, just OMV's boot drive.

  • OK, we're just going around in circles as you give me more information. Have you simply tried renaming the mdadm.conf file and then running omv-mkconf mdadm?

    No, not knowingly. I moved the array from an old installation and hardware (Debian, Webmin) to new hardware, and after that renewed the OMV base installation at least twice. To get the right parameters for the array into mdadm.conf I always use # mdadm --detail --scan >> /etc/mdadm/mdadm.conf, because mdadm is able to reconstruct its config from the partition data, in contrast to hardware RAIDs. The array never made any problems with that, so I had no need to manipulate mdadm configs. omv-mkconf mdadm fails... this is no circle. This is straight-up a bug in the script.


    • Official post

    This is straight-up a bug in the script.

    Kind of. The output of mdadm --detail --scan should be in double quotes on line 99 - https://github.com/openmediava…diavault/mkconf/mdadm#L99 - but I'm not sure why this hasn't been an issue on other systems. What is the output of mdadm --detail --scan on your system, without sending the output to the conf file? Also, all of this code has been replaced with Salt in OMV 5.x, so this bug is effectively dead.

    omv 7.0.4-2 sandworm | 64 bit | 6.5 proxmox kernel

    plugins :: omvextrasorg 7.0 | kvm 7.0.10 | compose 7.1.2 | k8s 7.0-6 | cputemp 7.0 | mergerfs 7.0.3


    omv-extras.org plugins source code and issue tracker - github


    Please try ctrl-shift-R and read this before posting a question.

    Please put your OMV system details in your signature.
    Please don't PM for support... Too many PMs!

    • Official post

    This is straight-up a bug in the script.

    There's no bug in the script! It has been used a number of times with users I have helped on the forum.


    To get the right parameters for the array into mdadm.conf I always use # mdadm --detail --scan >> /etc/mdadm/mdadm.conf

    This is not the way to create an mdadm conf in OMV.


    I moved the array from an old installation and hardware (Debian, Webmin) to new hardware, and after that renewed the OMV base installation at least twice.

    There is no need to 'renew' anything. Starting OMV on new hardware is simply connecting the boot device with no drives attached, configuring the network, and plugging the drives in; the whole system continues to work.


    As I said previously, the only option I can think of to correct this is to rename the mdadm conf and then run omv-mkconf mdadm.

  • Sorry, but OMV is not an operating system nor a system driver, only a web interface with a few scripts.
    So the right way to build and move arrays depends on mdadm and NOT on OMV.


    But I have probably found the reason for this problem in my bash history.
    I renamed the RAID in the last days with # e2label /dev/md0 RAID,
    truly a legal operation!
    And this may have changed the mdadm.conf accordingly.
    The whole system accepts that change, including initramfs, grub, mounts and so on, and can work with it...


    Exception: omv-mkconf mdadm


    And you tell me this is not an OMV bug? Haha...
    Try it out...
    But thank you for helping me find this...


    • Official post

    So the right way to build and move arrays depends on mdadm and NOT on OMV.

    I disagree. OMV relies on many things being built from the web interface. OMV's intended target audience (home users) has little need to create arrays from the command line.


    But I have probably found the reason for this problem in my bash history.
    I renamed the RAID in the last days with # e2label /dev/md0 RAID,
    truly a legal operation!

    Legal on Linux, yes, but OMV uses the filesystem label for many things. If you change the label, it needs to be updated in quite a few places. Overall, I would tell people never to change a filesystem label once you have started using that filesystem in the OMV web interface.


    And you tell me this is not an OMV bug? Haha...
    Try it out...

    From OMV's perspective, it is not a bug, since you are doing something it doesn't support. When Volker made the decision to have OMV completely control some aspects of the system, it did break the ability to do a few things from the command line. This is a design decision, not a bug.


  • I disagree. OMV relies on many things being built from the web interface. OMV's intended target audience (home users) has little need to create arrays from the command line.

    I disagree. The intended audience is only a "marketing decision". That's OK, but that's no excuse to break standard Linux handling.
    OK, it is my "fault" to rename RAIDs as described in Linux tutorials... and it is your right to declare everything not considered by the OMV developers as "nonstandard" and undesirable.


    A very limited view.
    I found the reason for this behavior of OMV, and your reaction is... eyes shut and "we make no mistakes".
    OK, believe it... but you are wrong!
    OMV supports naming of data volumes... but not of RAIDs. That's the correct description!
    If this is really a "design decision" by Volker... OK, well...


    • Official post

    it is your right to declare everything not considered by the OMV developers as "nonstandard" and undesirable.

    There is only one OMV developer (not me) and I was just stating my opinion with respect to OMV (not Linux). If you don't like it, file an issue or a pull request - https://github.com/OpenMediaVault/openmediavault


    I found the reason for this behavior of OMV, and your reaction is... eyes shut and "we make no mistakes".

    Uh, no. I said ONE thing was not a bug and is a design decision. I know there are lots of bugs, including in code I have written for the plugins. Nowhere did I state what you are saying I did. So, don't put words in my mouth.


    OK, believe it... but you are wrong!

    I didn't know my opinion could be wrong???


    OMV supports naming of data volumes... but not of RAIDs. That's the correct description!

    Correct, but OMV doesn't use RAID names. So why would it matter? I have used mdadm for over 15 years and never cared whether an array had a name.


    If this is really a "design decision" by Volker... OK, well...

    This is nothing new. OMV has been this way since the beginning.


  • Correct, but OMV doesn't use RAID names. So why would it matter? I have used mdadm for over 15 years and never cared whether an array had a name.

    That's the point.
    On old systems without udev we had fixed, unchanging, well-working names for devices.
    Since udev we need UUIDs, volume names and so on.
    15 years ago nobody got the idea to change volume names...
    But today and in the future, it becomes more and more important.
    As an example:
    https://support.clustrix.com/h…ume-persists-after-reboot
    So not using mdadm labels, or refusing to use mdadm labels, is a layer 8 error by design!


  • Naaa... I'm interested in opinions on this, and I want to know whether other people have the same problems. Filing an issue or a pull request is step 2. We are on step 1.


    • Official post

    I'm confused now. You keep mentioning array names or mdadm labels (the --name argument of the mdadm command, I assume?) and the link you posted is referring to a filesystem label. Big difference. I have used filesystem labels since the beginning, and if you create a filesystem on an array using the OMV web interface, the filesystem is given the label that you type in the Label field. Even in the future, I see no reason to give an array a name. The array is rebuilt from signatures on the drives, not the name, and the filesystem on the array is mounted by label or UUID.


    Naaa... I'm interested in opinions on this, and I want to know whether other people have the same problems. Filing an issue or a pull request is step 2. We are on step 1.

    I have created many mdadm raid arrays on test systems in the OMV web interface and they survive reboots. Most OMV users have the same experience (unless they do dumb things like use USB drives). If this statement was wrong, you would probably see hundreds of threads about arrays not surviving reboots.


  • I'm confused now. You keep mentioning array names or mdadm labels (the --name argument of the mdadm command, I assume?) and the link you posted is referring to a filesystem label. Big difference. I have used filesystem labels since the beginning, and if you create a filesystem on an array using the OMV web interface, the filesystem is given the label that you type in the Label field. Even in the future, I see no reason to give an array a name. The array is rebuilt from signatures on the drives, not the name, and the filesystem on the array is mounted by label or UUID.

    Hmm...
    Maybe more research is necessary.
    I'm sure I renamed the filesystem label with # e2label in the last days.
    And I think this changed the output of # mdadm --detail --scan >> /etc/mdadm/mdadm.conf, and this results in the error of the omv-mkconf mdadm script: "/usr/share/openmediavault/mkconf/mdadm: 99: [: ARRAY: unexpected operator"
    This is my current working thesis, completely described above.
    User geaves told me in the past to think about renaming /dev/md/md0 to /dev/md/raid... and I did the renaming... with e2label.
    I know it's not a good idea to change the so-called disk label, which holds partition information and RAID information... but that one is written by mdadm and fdisk, and I am sure I didn't change it.
    Maybe you can take a look at the first posts and tell me what's wrong?


    • Official post

    renaming /dev/md/md0 to /dev/md/raid in the past... and I did the renaming... with e2label.

    e2label will not rename an mdadm array. It will only change the filesystem label of an ext2/3/4 filesystem. If your array had an xfs filesystem on it, that command would've failed/done nothing.


    And I think this changed the output of # mdadm --detail --scan >> /etc/mdadm/mdadm.conf, and this results in the error of the omv-mkconf mdadm script: "/usr/share/openmediavault/mkconf/mdadm: 99: [: ARRAY: unexpected operator"

    The only way that unexpected operator message happens is if mdadm --detail --scan returns nothing but you have an array. So, it should return something. If the line were if [ -z "$(mdadm --detail --scan)" ]; then, it would work even if the command returned nothing. As I said earlier, this is kind of a bug, but the command should return something unless your array is not configured correctly.
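    The word-splitting failure behind the missing quotes is easy to reproduce in plain sh, using a hard-coded stand-in for the scan output (no mdadm needed):

```shell
#!/bin/sh
# Stand-in for the multi-word output of `mdadm --detail --scan`
scan='ARRAY /dev/md/raid metadata=1.2 name=nas:raid UUID=264e1924:4cba2353:1a44a0f8:6fd1dda6'

# Unquoted, $scan is split into several words, so [ sees extra operands
# and aborts with an error (dash: "[: ARRAY: unexpected operator"-style,
# bash: "too many arguments") - exit status 2, not a clean test result.
[ -z $scan ] 2>/dev/null
echo "unquoted exit status: $?"    # prints 2

# Quoted, the whole output is a single operand: the test cleanly returns
# false (status 1) because the string is non-empty.
[ -z "$scan" ]
echo "quoted exit status: $?"      # prints 1
```

    Exit status 2 is what the test utility returns on a usage error, which is why the script blows up only when the scan output happens to contain more than one word.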

    I know it's not a good idea to change the so-called disk label, which holds partition information and RAID information... but that one is written by mdadm and fdisk, and I am sure I didn't change it.
    Maybe you can take a look at the first posts and tell me what's wrong?

    I tried. You didn't post the output of mdadm --detail --scan. If you changed the filesystem label, the easiest way to fix that is to remove the shared folders associated with the filesystem and then unmount it in the Filesystems tab. Otherwise, it will require editing /etc/openmediavault/config.xml and running a few omv-mkconf commands.


