mdadm on omv 4.1.27-1

    • OMV 4.x
    • Resolved
    • mdadm on omv 4.1.27-1

      Hi,
      I get an error from omv-mkconf. This is my RAID on OMV 4.1.27-1.

      Source Code

      # mdadm --detail /dev/md/raid
      /dev/md/raid:
              Version : 1.2
        Creation Time : Thu Sep 12 20:34:42 2019
           Raid Level : raid5
           Array Size : 5860150272 (5588.67 GiB 6000.79 GB)
        Used Dev Size : 1953383424 (1862.89 GiB 2000.26 GB)
         Raid Devices : 4
        Total Devices : 4
          Persistence : Superblock is persistent
        Intent Bitmap : Internal
          Update Time : Sun Nov 10 06:00:55 2019
                State : clean
       Active Devices : 4
      Working Devices : 4
       Failed Devices : 0
        Spare Devices : 0
               Layout : left-symmetric
           Chunk Size : 512K
                 Name : nas:raid (local to host nas)
                 UUID : 264e1924:4cba2353:1a44a0f8:6fd1dda6
               Events : 3765

          Number   Major   Minor   RaidDevice State
             0       8       48        0      active sync   /dev/sdd
             1       8       64        1      active sync   /dev/sde
             2       8       80        2      active sync   /dev/sdf
             3       8       96        3      active sync   /dev/sdg
      I use

      Source Code

      # mdadm --detail --scan >> /etc/mdadm/mdadm.conf
      to update the config, as described in detail at wiki.archlinux.org/index.php/RAID#Update_configuration_file and elsewhere
      (on Debian the path is /etc/mdadm/mdadm.conf, not /etc/mdadm.conf).
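      One caveat with that append step: run twice, >> leaves duplicate ARRAY lines in mdadm.conf. A minimal sketch of a safer refresh; the temp file and the hard-coded scan line below are placeholders standing in for the real mdadm --detail --scan output:

      ```shell
      # Sketch: refresh the ARRAY lines instead of appending blindly with >>.
      # The conf file and scan line here are stand-ins for illustration; on a
      # real system the scan line would come from `mdadm --detail --scan`.
      conf=$(mktemp)
      printf '%s\n' \
        'DEVICE partitions' \
        'ARRAY /dev/md/stale metadata=1.2 UUID=00000000:00000000:00000000:00000000' \
        'MAILADDR root' > "$conf"

      scan='ARRAY /dev/md/raid metadata=1.2 name=nas:raid UUID=264e1924:4cba2353:1a44a0f8:6fd1dda6'

      grep -v '^ARRAY' "$conf" > "$conf.new"   # keep everything except old ARRAY lines
      printf '%s\n' "$scan" >> "$conf.new"     # add the freshly scanned definition
      mv "$conf.new" "$conf"

      grep -c '^ARRAY' "$conf"                 # exactly one ARRAY line remains
      ```

      On a real box this would be followed by update-initramfs -u, so the initramfs sees the same configuration.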

      In the next step I run this and get:

      Source Code

      # omv-mkconf mdadm
      /usr/share/openmediavault/mkconf/mdadm: 99: [: ARRAY: unexpected operator
      update-initramfs: Generating /boot/initrd.img-4.15.18-21-pve
      But the file /etc/mdadm/mdadm.conf contains:

      Source Code

      # grep -o '^[^#]*' /etc/mdadm/mdadm.conf
      DEVICE partitions
      CREATE owner=root group=disk mode=0660 auto=yes
      HOMEHOST <system>
      ARRAY /dev/md/raid metadata=1.2 name=nas:raid UUID=264e1924:4cba2353:1a44a0f8:6fd1dda6
      MAILADDR mailaddy@host
      MAILFROM root
      So why is "omv-mkconf mdadm" complaining about an operator error at the ARRAY definition? And about which operator? What is wrong with this?
      Equipment: a few computers, lots of waste heat, little time and a pile of work.

      When solving problems, dig at the root instead of hacking at the leaves.
    • Rd65 wrote:

      I get an error from omv-mkconf.
      Why are you running this anyway? The rest of your post gives no indication that there is a problem.

      Rd65 wrote:

      So why is "omv-mkconf mdadm" complaining about an operator error at the ARRAY definition? And about which operator?
      Because each array has a definition of the form /dev/md?, with the ? being a number. Was the RAID created using OMV? What does Raid Management show you in the GUI?
      Raid is not a backup! Would you go skydiving without a parachute?
    • geaves wrote:

      Rd65 wrote:

      I get an error from omv-mkconf.
      Why are you running this anyway? The rest of your post gives no indication that there is a problem.

      Rd65 wrote:

      So why is "omv-mkconf mdadm" complaining about an operator error at the ARRAY definition? And about which operator?
      Because each array has a definition of the form /dev/md?, with the ? being a number. Was the RAID created using OMV? What does Raid Management show you in the GUI?
      omv-mkconf mdadm is part of omv-initsystem. I changed network cards, so I have to reconfigure things.

      Source Code

      1. /usr/share/openmediavault/mkconf/mdadm: 99: [: ARRAY: unexpected operator


      This is the error message, verbatim. lsblk shows:

      Source Code

      # lsblk
      NAME      MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
      loop0       7:0    0     5G  1 loop  /export/pxeboot/win10/Win10_1903_V2_German_x64
      loop1       7:1    0   3.6G  1 loop  /export/pxeboot/win10/Win10_1903_V2_German_x32
      sda         8:0    0 111.8G  0 disk
      ├─sda1      8:1    0  97.7G  0 part  /
      ├─sda2      8:2    0     1K  0 part
      └─sda5      8:5    0  14.1G  0 part  [SWAP]
      sdb         8:16   0 465.8G  0 disk
      └─sdb1      8:17   0 465.8G  0 part  /srv/dev-disk-by-label-BackupLW
      sdc         8:32   0   1.8T  0 disk
      └─sdc1      8:33   0   1.8T  0 part  /srv/dev-disk-by-id-ata-ST2000DM001-1CH164_Z3407382-part1
      sdd         8:48   0   1.8T  0 disk
      └─md127     9:127  0   5.5T  0 raid5 /srv/dev-disk-by-id-md-name-nas-raid
      sde         8:64   0   1.8T  0 disk
      └─md127     9:127  0   5.5T  0 raid5 /srv/dev-disk-by-id-md-name-nas-raid
      sdf         8:80   0   1.8T  0 disk
      └─md127     9:127  0   5.5T  0 raid5 /srv/dev-disk-by-id-md-name-nas-raid
      sdg         8:96   0   1.8T  0 disk
      └─md127     9:127  0   5.5T  0 raid5 /srv/dev-disk-by-id-md-name-nas-raid
      sr0        11:0    1  1024M  0 rom
      and

      Source Code

      # ls /dev/md/
      raid
      At first this was md0, but I don't know why... maybe with the kernel change to pve it changed to "raid".
      It works as expected... but omv-mkconf mdadm struggles...
      Maybe this depends on strange/broken udev rules, but there is no rule or reason NOT to name a RAID "raid".

      If this is a problem for omv-mkconf mdadm, please tell me a way to rename /dev/md/raid back to /dev/md/md0.

      Thank you

      Addition for clarification:

      Source Code

      # ls /dev/md*
      /dev/md127

      /dev/md:
      raid

      # cat /proc/mdstat
      Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10]
      md127 : active raid5 sdg[3] sdf[2] sde[1] sdd[0]
            5860150272 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]
            bitmap: 0/15 pages [0KB], 65536KB chunk

      unused devices: <none>

    • Rd65 wrote:

      I changed network cards, so I have to reconfigure things.
      No, all you need to do is to reconfigure the network card!

      I have moved a working OMV (with software RAID) from a big old server box to a smaller server box: I transferred all the drives but did not connect them, started the new server, ran omv-firstaid to configure the NIC, rebooted to confirm, shut down, connected all the drives and restarted. That was it, no other configuration.

      Look at the output of your lsblk: your RAID is being referenced as md127. What's the output of cat /proc/mdstat and mdadm --detail --scan --verbose?
      The output of mdstat was added to the last post as an addition...

      Source Code

      # mdadm --detail --scan --verbose
      ARRAY /dev/md/raid level=raid5 num-devices=4 metadata=1.2 name=nas:raid UUID=264e1924:4cba2353:1a44a0f8:6fd1dda6
         devices=/dev/sdd,/dev/sde,/dev/sdf,/dev/sdg
      The array members are listed under md127, but the array itself is named "raid", not md0, as shown above too.

      Again the question: how do I rename "raid" to "md0"?
      To my knowledge these are udev problems, well known from renamed network cards (eth0 > enp5s0) and disk names like /dev/sdX changing to whatever...
      I want to change the name of /dev/md/raid to /dev/md/md0 or whatever fits best. BUT HOW?
      I have never played around in these udev configs... and I want my md0 back! Or someone who cares about the omv mdadm script!
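      For the record, mdadm itself can rewrite the name stored in the array's superblock during assembly; udev is not involved. A hedged sketch, not from this thread: the target name "nas:0" and the device list are assumptions taken from the output above. Check man mdadm for --update=name, run as root, and have backups before trying anything like this.

      ```shell
      # Sketch only: stop everything using the array, then re-assemble it
      # while rewriting the name stored in the md superblock.
      umount /srv/dev-disk-by-id-md-name-nas-raid   # nothing may use the array
      mdadm --stop /dev/md/raid

      # "nas:0" should make the array appear as /dev/md0 on the host "nas".
      mdadm --assemble /dev/md0 --name=nas:0 --update=name /dev/sd[d-g]

      # Make the change persistent across reboots.
      mdadm --detail --scan    # inspect, then update /etc/mdadm/mdadm.conf accordingly
      update-initramfs -u
      ```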
      The array is shown as /dev/md127 on the OMV Filesystems web page and by ls /dev/md*,
      but the array is controlled via /dev/md/raid - so these all work:

      mdadm --stop /dev/md/raid
      mdadm --detail /dev/md/raid
      and so on...

      the RAID is mounted as:
      /dev/md127 on /srv/dev-disk-by-id-md-name-nas-raid type ext4 (rw,noexec,relatime,stripe=384,data=ordered,jqfmt=vfsv0,usrjquota=aquota.user,grpjquota=aquota.group)

      As I said... the array works well. Only omv-mkconf mdadm, which is called by omv-initsystem and is necessary after all hardware changes, complains about my RAID config. And omv-initsystem is used after installation changes and after new installations to update the OMV config.xml from system values. This is common.
      Ok, we're just going around in circles as you give me more information. Have you simply tried renaming the mdadm.conf file and then running omv-mkconf mdadm?

      Rd65 wrote:

      And omv-initsystem is used after installation changes and after new installations to update the OMV config.xml from system values.
      I know this, but simply changing a NIC does not require this sort of change. As I stated in my post 4, I've done this myself, but to do it and ensure that it went without a hitch I did not connect any data drives, just OMV's boot drive.
    • geaves wrote:

      Ok, we're just going around in circles as you give me more information. Have you simply tried renaming the mdadm.conf file and then running omv-mkconf mdadm?
      No. Not knowingly. I moved the array from an old installation and hardware (Debian, Webmin) to new hardware, and after that renewed the OMV base installation at least 2 times. To get the right parameters for the array into mdadm.conf I always use # mdadm --detail --scan >> /etc/mdadm/mdadm.conf - because mdadm is able to reconstruct its configuration from the partition data, in contrast to hardware RAIDs. The array never caused any problems with that, so I had no need to manipulate mdadm configs. omv-mkconf mdadm fails... this is no circle. This is straight up a bug in the script.
    • Rd65 wrote:

      This is straight up a bug in the script.
      Kind of. The output of mdadm --detail --scan should be in double quotes on line 99 - github.com/openmediavault/open…diavault/mkconf/mdadm#L99 - but I'm not sure why this hasn't been an issue on other systems. What is the output on your system of mdadm --detail --scan, without sending the output to the conf file? Also, all of this code has been replaced with salt in OMV 5.x. So, this bug is effectively dead.
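      The failure class is easy to reproduce in plain sh, without mdadm at all; the string below merely mimics a scan line and is not the script's actual code:

      ```shell
      # An unquoted command substitution inside [ ] word-splits, so the
      # shell's test builtin sees stray operands and aborts.
      scan_output='ARRAY /dev/md/raid metadata=1.2 name=nas:raid'

      # Unquoted (the pattern of line 99): splits into four arguments;
      # dash reports "unexpected operator", bash "too many arguments".
      if [ -z $scan_output ] 2>/dev/null; then
          echo "scan output looks empty"
      else
          echo "unquoted test errored or saw a non-empty string"
      fi

      # Quoted: a single argument, so -z tests the whole string as intended.
      if [ -z "$scan_output" ]; then
          echo "scan output is empty"
      else
          echo "quoted test: non-empty, as expected"
      fi
      ```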
      omv 5.1.0 usul | 64 bit | 5.3 proxmox kernel | omvextrasorg 5.1.5
      omv-extras.org plugins source code and issue tracker - github

      Please read this before posting a question and this and this for docker questions.
      Please don't PM for support... Too many PMs!
    • Rd65 wrote:

      This is straight up a bug in the script.
      There's no bug in the script! This has been used a number of times with users I have helped on the forum.

      Rd65 wrote:

      To get the right parameters for the array into mdadm.conf I always use # mdadm --detail --scan >> /etc/mdadm/mdadm.conf
      This is not the way to create an mdadm conf in OMV.

      Rd65 wrote:

      I moved the array from an old installation and hardware (Debian, Webmin) to new hardware, and after that renewed the OMV base installation at least 2 times.
      There is no need to 'renew' anything. Starting OMV on new hardware is simply a matter of connecting the boot device with no drives attached and configuring the network; then plug the drives in and the whole system continues to work.

      As I said previously, the only option I can think of to correct this is to rename the mdadm conf and then run omv-mkconf mdadm.
      Sorry, but OMV is not an operating system nor a system driver, only a web interface with a few scripts.
      So the right way to build and move arrays depends on mdadm and NOT on OMV.

      But I have probably found the reason for this problem in my bash history.
      I renamed the RAID in the last few days with # e2label /dev/md0 RAID
      - truly a legal operation!
      And this may have changed the mdadm.conf accordingly.
      The whole system accepts that change, including initramfs, grub, mounts and so on, and can work with it...

      Exception: omv-mkconf mdadm

      And you tell me this is not an OMV bug? Haha..
      Try it out...
      But thank you for helping me to find this...
    • Rd65 wrote:

      So the right way to build and move arrays depends on mdadm and NOT on OMV.
      I disagree. OMV relies on many things to be built from the web interface. OMV's intended target audience (home users) has little need to create the array from the command line.

      Rd65 wrote:

      But I have probably found the reason for this problem in my bash history.
      I renamed the RAID in the last few days with # e2label /dev/md0 RAID
      - truly a legal operation!
      Legal on Linux, yes but OMV uses the filesystem label for many things. If you change the label, it needs to be updated in quite a few places. Overall, I would tell people to never change a filesystem label once you have started using that filesystem in the OMV web interface.

      Rd65 wrote:

      And you tell me this is not an OMV bug? Haha..
      Try it out...
      From OMV's perspective, it is not a bug since you are doing something it doesn't support. When Volker made the decision to have OMV completely control some aspects of the system, it does break the ability to do a few things from the command line. This is a design decision not a bug.
      omv 5.1.0 usul | 64 bit | 5.3 proxmox kernel | omvextrasorg 5.1.5
      omv-extras.org plugins source code and issue tracker - github

      Please read this before posting a question and this and this for docker questions.
      Please don't PM for support... Too many PMs!
    • ryecoaaron wrote:

      Rd65 wrote:

      So the right way to build and move arrays depends on mdadm and NOT on OMV.
      I disagree. OMV relies on many things to be built from the web interface. OMV's intended target audience (home users) has little need to create the array from the command line.
      I disagree. The intended audience is only a "marketing decision". That's OK, but it's no excuse to break standard Linux handling.
      OK, it is my "fault" to rename RAIDs as described in Linux tutorials... and it is your right to declare all things not minded by the OMV developers as "nonstandard" and not desirable.

      A very limited view.
      I found the reason for this behaviour of OMV, and your reaction is... eyes shut and "we have no mistakes".
      OK, believe it... but you are wrong!
      OMV supports naming of data volumes, but not of RAIDs. That's the correct description!
      If this is really a "design decision" by Volker... OK, well...
    • Rd65 wrote:

      it is your right to declare all things not minded by the OMV developers as "nonstandard" and not desirable.
      There is only one OMV developer (not me) and I was just stating my opinion with respect to OMV (not Linux). If you don't like it, file an issue or a pull request - github.com/OpenMediaVault/openmediavault

      Rd65 wrote:

      I found the reason for this behaviour of OMV, and your reaction is... eyes shut and "we have no mistakes"
      Uh, no. I said ONE thing was not a bug and is a design decision. I know there are lots of bugs including my code that I have written for the plugins. Nowhere did I state what you are saying I did. So, don't put words in my mouth.

      Rd65 wrote:

      Ok believe it.. but you are wrong!
      I didn't know my opinion could be wrong???

      Rd65 wrote:

      OMV supports naming of data volumes, but not of RAIDs. That's the correct description!
      Correct, but OMV doesn't use RAID names. So, why would it matter? I have used mdadm for over 15 years and never cared if the array had a name.

      Rd65 wrote:

      If this is really a "design decision" by Volker... OK, well...
      This is nothing new. OMV has been this way since the beginning.
    • ryecoaaron wrote:



      Rd65 wrote:

      OMV supports naming of data volumes, but not of RAIDs. That's the correct description!
      Correct, but OMV doesn't use RAID names. So, why would it matter? I have used mdadm for over 15 years and never cared if the array had a name.
      That's the point.
      On old systems without udev we had fixed, unchanging, well-working names for devices.
      Since udev we need UUIDs, volume names and so on.
      15 years ago nobody got the idea to change volume names...
      But today and in the future it becomes more and more important.
      As an example:
      support.clustrix.com/hc/en-us/…ume-persists-after-reboot
      So not using mdadm labels, or denying the use of mdadm labels, is a layer 8 error by design!
      Naaa... I'm interested in opinions on this, and I want to know if other people have the same problems. Filing an issue or a pull request is step 2. We are on step 1.
    • Rd65 wrote:

      That's the point.
      On old systems without udev we had fixed, unchanging, well-working names for devices.
      Since udev we need UUIDs, volume names and so on.
      15 years ago nobody got the idea to change volume names...
      But today and in the future it becomes more and more important.
      As an example:
      support.clustrix.com/hc/en-us/…ume-persists-after-reboot
      So not using mdadm labels, or denying the use of mdadm labels, is a layer 8 error by design!
      I'm confused now. You keep mentioning array names or mdadm labels (the --name argument of the mdadm command, I assume?), but the link you posted refers to a filesystem label. Big difference. I have used filesystem labels since the beginning, and if you create a filesystem on an array using the OMV web interface, the filesystem is given the label that you type into the Label field. Even in the future, I see no reason to give an array a name. The array is rebuilt from the signatures on the drives, not the name, and the filesystem on the array is mounted by its label or UUID.

      Rd65 wrote:

      Naaa... I'm interested in opinions on this, and I want to know if other people have the same problems. Filing an issue or a pull request is step 2. We are on step 1.
      I have created many mdadm RAID arrays on test systems in the OMV web interface and they survive reboots. Most OMV users have the same experience (unless they do dumb things like use USB drives). If this statement were wrong, you would probably see hundreds of threads about arrays not surviving reboots.
    • ryecoaaron wrote:

      I'm confused now. You keep mentioning array names or mdadm labels (the --name argument of the mdadm command, I assume?), but the link you posted refers to a filesystem label. Big difference. I have used filesystem labels since the beginning, and if you create a filesystem on an array using the OMV web interface, the filesystem is given the label that you type into the Label field. Even in the future, I see no reason to give an array a name. The array is rebuilt from the signatures on the drives, not the name, and the filesystem on the array is mounted by its label or UUID.
      Hmm...
      maybe more research is necessary.
      I am sure I renamed the filesystem label with # e2label in the last few days.
      And I think this changed the output of # mdadm --detail --scan >> /etc/mdadm/mdadm.conf, and this results in the error of the omv mdadm script: "/usr/share/openmediavault/mkconf/mdadm: 99: [: ARRAY: unexpected operator"
      This is my current working thesis, completely described above.
      User geaves told me to think about renaming /dev/md/md0 to /dev/md/raid in the past... and I did the renaming... with e2label.
      I know it's not a good idea to change the so-called disklabel, which holds partition information, RAID information... but that is written by mdadm and fdisk, and I am sure I didn't change it.
      Maybe you can take a look at the first posts and tell me what's wrong?
    • Rd65 wrote:

      renaming /dev/md/md0 to /dev/md/raid in the past... and I did the renaming... with e2label.
      e2label will not rename an mdadm array. It will only change the filesystem label of an ext2/3/4 filesystem. If your array had an xfs filesystem on it, that command would've failed/done nothing.
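      The distinction is easy to demonstrate on a throwaway file-backed ext2 filesystem, with no root access or real array needed. A sketch; it assumes e2fsprogs (mke2fs, e2label) is installed:

      ```shell
      # e2label reads/writes the label field in the ext superblock only;
      # the mdadm superblock of an array is a different structure entirely.
      img=$(mktemp)
      truncate -s 8M "$img"          # sparse 8 MiB file standing in for a device
      mke2fs -q -F -L oldlabel "$img"
      e2label "$img"                 # prints: oldlabel
      e2label "$img" RAID            # rewrites only the ext superblock label
      e2label "$img"                 # prints: RAID
      rm -f "$img"
      ```

      Run against /dev/md0, this changes what appears in blkid and in label-based mounts, but mdadm --detail would still report the same array Name.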

      Rd65 wrote:

      And I think this changed the output of # mdadm --detail --scan >> /etc/mdadm/mdadm.conf, and this results in the error of the omv mdadm script: "/usr/share/openmediavault/mkconf/mdadm: 99: [: ARRAY: unexpected operator"
      The only way that unexpected operator message happens is if mdadm --detail --scan returns nothing but you have an array. So, it should return something. If the line were if [ -z "$(mdadm --detail --scan)" ]; then, it would work even if the command returned nothing. As I said earlier, this is kind of a bug, but the command should return something unless your array is not configured correctly.

      Rd65 wrote:

      I know it's not a good idea to change the so-called disklabel, which holds partition information, RAID information... but that is written by mdadm and fdisk, and I am sure I didn't change it.
      Maybe you can take a look at the first posts and tell me what's wrong?
      I tried. You didn't post the output of mdadm --detail --scan. If you changed the filesystem label, the easiest way to fix it is to remove the shared folders associated with the filesystem and then unmount it in the Filesystems tab. Otherwise it will require editing /etc/openmediavault/config.xml and running a few omv-mkconf commands.