"Missing" RAID filesystem

    • "Missing" RAID filesystem

      Hi,

      I've upgraded my OMV 3 to the newest 4.0.14-1 Arrakis. Afterwards I noticed that the filesystem of my RAID 1 array is missing from the "Filesystem" tab.

      The RAID 1 array labeled Matrix is mounted and in a clean state.

      I've also checked over SSH that I can see the files on the Matrix array, and /proc/mdstat below reports [2/2] [UU], i.e. both mirror members are active.
      Basic commands to troubleshoot the problem:

      cat /proc/mdstat

      Source Code

      Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
      md0 : active raid1 sde[1] sdd[0]
            1953383488 blocks super 1.2 [2/2] [UU]
            bitmap: 0/15 pages [0KB], 65536KB chunk

      unused devices: <none>


      blkid

      Source Code

      root@secure:/# blkid
      /dev/sda1: LABEL="CACHE" UUID="a08721ac-0b40-4d34-8508-7550b76a803d" TYPE="ext4" PARTUUID="893c809b-01"
      /dev/sdb2: UUID="d7446739-3636-4258-acef-4e44c1a374e2" TYPE="ext4" PARTUUID="64a507c7-361e-4c04-b4e2-aed8becdcb4c"
      /dev/sdc1: LABEL="DataSSD" UUID="c26c316d-7469-45f0-b509-58eb516654bd" TYPE="ext4" PARTUUID="bff87c46-2a18-401f-a560-9d4e24208c4d"
      /dev/sdd: UUID="df704e6d-b0ec-d791-5d09-09cdb0c6a6c3" UUID_SUB="17fdb3c3-ae58-f912-c38f-a3b06d0f7a24" LABEL="nas:Matrix" TYPE="linux_raid_member"
      /dev/sde: UUID="df704e6d-b0ec-d791-5d09-09cdb0c6a6c3" UUID_SUB="a28b7c92-e3ed-1bbb-0467-55016d663085" LABEL="nas:Matrix" TYPE="linux_raid_member"
      /dev/sdb1: PARTUUID="0cb69419-e832-4e3f-8f2d-7fd377871fc4"


      Note that blkid shows /dev/sdd and /dev/sde as linux_raid_member, but there is no TYPE entry for /dev/md0 itself.

      fdisk -l | grep "Disk "

      Source Code

      root@secure:/# fdisk -l | grep "Disk "
      Disk /dev/sda: 477 GiB, 512110190592 bytes, 1000215216 sectors
      Disk identifier: 0x893c809b
      Disk /dev/sdb: 238.5 GiB, 256060514304 bytes, 500118192 sectors
      Disk identifier: 0EBBA207-0B19-4F27-8BEC-70088765E75B
      Disk /dev/sdc: 238.5 GiB, 256060514304 bytes, 500118192 sectors
      Disk identifier: 28024B9D-9A47-4130-9349-83B733347DCF
      Disk /dev/sdd: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
      Disk /dev/md0: 1.8 TiB, 2000264691712 bytes, 3906766976 sectors
      Disk /dev/sde: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors

      cat /etc/mdadm/mdadm.conf

      Source Code

      root@secure:/# cat /etc/mdadm/mdadm.conf
      # mdadm.conf
      #
      # Please refer to mdadm.conf(5) for information about this file.
      #
      # by default, scan all partitions (/proc/partitions) for MD superblocks.
      # alternatively, specify devices to scan, using wildcards if desired.
      # Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
      # To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
      # used if no RAID devices are configured.
      DEVICE partitions
      # auto-create devices with Debian standard permissions
      CREATE owner=root group=disk mode=0660 auto=yes
      # automatically tag new arrays as belonging to the local system
      HOMEHOST <system>
      # definitions of existing MD arrays
      ARRAY /dev/md0 metadata=1.2 name=nas:Matrix UUID=df704e6d:b0ecd791:5d0909cd:b0c6a6c3
      # instruct the monitoring daemon where to send mail alerts
      MAILADDR monitoring.backup.cloud@gmail.com
      MAILFROM root

      mdadm --detail --scan --verbose

      Source Code

      root@secure:/# mdadm --detail --scan --verbose
      ARRAY /dev/md0 level=raid1 num-devices=2 metadata=1.2 name=nas:Matrix UUID=df704e6d:b0ecd791:5d0909cd:b0c6a6c3
         devices=/dev/sdd,/dev/sde

      uname -a

      Source Code

      root@secure:/# uname -a
      Linux secure 4.13.0-0.bpo.1-amd64 #1 SMP Debian 4.13.13-1~bpo9+1 (2017-11-22) x86_64 GNU/Linux

      omv-sysinfo

      Source Code

      = OS/Debian information
      ================================================================================
      No LSB modules are available.
      Distributor ID: Debian
      Description:    Debian GNU/Linux 9.2 (stretch)
      Release:        9.2
      Codename:       stretch
      ================================================================================
      = openmediavault information
      ================================================================================
      Release:  4.0.14-1
      Codename: Arrakis


      Please help me to resolve this issue; thank you in advance for every reply!

      Mateusz


    • tkaiser wrote:

      Which issue exactly? You installing OMV releases that are not released yet but still in the testing/development stage?
      Isn't that the idea of alpha/beta releases, to get users to test them and report back their findings?

      So let's rephrase: "Is this a bug in OMV4, or what should I do to get my RAID visible in the OMV GUI?"
      Odroid HC2 - armbian - Seagate ST4000DM004 - OMV4.x | Asrock Q1900DC-ITX - 16GB - 2x Seagate ST3000VN000 - Intenso SSD 120GB - OMV4.x
      :!: Backup - Solutions to common problems in OMV - OMV setup videos - OMV4 Documentation :!:
    • What is the output of

      Shell-Script

      # udevadm info --query=property --name=/dev/md0
      Absolutely no support through PM!

      I must not fear.
      Fear is the mind-killer.
      Fear is the little-death that brings total obliteration.
      I will face my fear.
      I will permit it to pass over me and through me.
      And when it has gone past I will turn the inner eye to see its path.
      Where the fear has gone there will be nothing.
      Only I will remain.

      Litany against fear by Bene Gesserit
    • votdev wrote:

      What is the output of

      Shell-Script

      # udevadm info --query=property --name=/dev/md0
      Please find the output below:


      Source Code

      # udevadm info --query=property --name=/dev/md0
      DEVLINKS=/dev/disk/by-id/md-name-nas:Matrix /dev/disk/by-id/md-uuid-df704e6d:b0ecd791:5d0909cd:b0c6a6c3
      DEVNAME=/dev/md0
      DEVPATH=/devices/virtual/block/md0
      DEVTYPE=disk
      MAJOR=9
      MD_DEVICES=2
      MD_DEVICE_sdc_DEV=/dev/sdc
      MD_DEVICE_sdc_ROLE=0
      MD_DEVICE_sdd_DEV=/dev/sdd
      MD_DEVICE_sdd_ROLE=1
      MD_LEVEL=raid1
      MD_METADATA=1.2
      MD_NAME=nas:Matrix
      MD_UUID=df704e6d:b0ecd791:5d0909cd:b0c6a6c3
      MINOR=0
      SUBSYSTEM=block
      SYSTEMD_WANTS=mdmonitor.service
      TAGS=:systemd:
      USEC_INITIALIZED=3549891
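
      One thing I notice (as far as I understand udev): there are no ID_FS_TYPE or ID_FS_UUID properties at all. udev fills those in via the same blkid probing, so their absence fits the blkid failure. For comparison, a device whose filesystem is detected should show something like this (hypothetical output, modelled on the DataSSD disk above):

      Source Code

      # udevadm info --query=property --name=/dev/sdc1 | grep ID_FS
      ID_FS_LABEL=DataSSD
      ID_FS_TYPE=ext4
      ID_FS_USAGE=filesystem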
    • I've found something else related to this issue. When I tried to add a shared folder I got the following error:

      Source Code

      Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C; blkid -p -o full '/dev/md0' 2>&1' with exit code '8': /dev/md0: ambivalent result (probably more filesystems on the device, use wipefs(8) to see more details)
      Error #0:
      OMV\ExecException: Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C; blkid -p -o full '/dev/md0' 2>&1' with exit code '8': /dev/md0: ambivalent result (probably more filesystems on the device, use wipefs(8) to see more details) in /usr/share/php/openmediavault/system/process.inc:175
      Stack trace:
      #0 /usr/share/php/openmediavault/system/filesystem/filesystem.inc(115): OMV\System\Process->execute(Array)
      #1 /usr/share/php/openmediavault/system/filesystem/filesystem.inc(757): OMV\System\Filesystem\Filesystem->getData()
      #2 /usr/share/openmediavault/engined/rpc/sharemgmt.inc(97): OMV\System\Filesystem\Filesystem->getDescription()
      #3 [internal function]: OMVRpcServiceShareMgmt->getCandidates(Array, Array)
      #4 /usr/share/php/openmediavault/rpc/serviceabstract.inc(123): call_user_func_array(Array, Array)
      #5 /usr/share/php/openmediavault/rpc/rpc.inc(86): OMV\Rpc\ServiceAbstract->callMethod('getCandidates', Array, Array)
      #6 /usr/sbin/omv-engined(536): OMV\Rpc\Rpc::call('ShareMgmt', 'getCandidates', Array, Array, 1)
      #7 {main}


      I can't choose a "Device" when adding a "Shared folder" because of this error. I'm totally confused: how is it possible to have two filesystems on the array? I don't think that is true.
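
      The error message itself points to wipefs(8); blkid exits with code 8 when its low-level prober finds more than one filesystem signature on a device. A minimal sketch of how to inspect this (wipefs without options only lists signatures and does not erase anything):

      Source Code

      # list every filesystem/RAID signature visible on the array
      wipefs /dev/md0

      # if a stale second signature shows up at some offset, it could be
      # removed with something like the following (hypothetical offset;
      # --backup keeps a copy under $HOME, double-check before erasing):
      # wipefs --backup -o 0xXXXX /dev/md0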

      fdisk -l

      Source Code

      # fdisk -l
      Disk /dev/sdb: 238.5 GiB, 256060514304 bytes, 500118192 sectors
      Units: sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disklabel type: gpt
      Disk identifier: 28024B9D-9A47-4130-9349-83B733347DCF

      Device     Start       End   Sectors   Size Type
      /dev/sdb1   2048 500118158 500116111 238.5G Linux filesystem

      Disk /dev/sda: 238.5 GiB, 256060514304 bytes, 500118192 sectors
      Units: sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disklabel type: gpt
      Disk identifier: 0EBBA207-0B19-4F27-8BEC-70088765E75B

      Device         Start       End   Sectors   Size Type
      /dev/sda1       2048      4095      2048     1M BIOS boot
      /dev/sda2       4096 479803391 479799296 228.8G Linux filesystem
      /dev/sda3  479803392 500117503  20314112   9.7G Linux swap

      Disk /dev/sdc: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
      Units: sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes

      Disk /dev/md0: 1.8 TiB, 2000264691712 bytes, 3906766976 sectors
      Units: sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes

      Disk /dev/sdd: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
      Units: sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes

      Disk /dev/sde: 477 GiB, 512110190592 bytes, 1000215216 sectors
      Units: sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 4096 bytes
      I/O size (minimum/optimal): 4096 bytes / 33553920 bytes
      Disklabel type: dos
      Disk identifier: 0x893c809b

      Device     Boot Start        End    Sectors  Size Id Type
      /dev/sde1        2048 1000215215 1000213168  477G 83 Linux
    • I am also confused that blkid does not display the file system. There seems to be no problem with the MD device itself. So the problem is not OMV: if a file system is not shown by blkid, then OMV cannot detect it.
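
      For reference, the exact probe OMV runs (the command is visible in the stack trace above) can be repeated by hand to check the exit code; a small sketch:

      Shell-Script

      # reproduce OMV's filesystem probe manually
      blkid -p -o full /dev/md0
      echo "exit code: $?"   # 8 = ambivalent low-level probing result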
    • Does lsblk show the md?

      From the blkid manpage:

      Source Code

      It is recommended to use lsblk(8) command to get information about block devices rather than blkid. lsblk(8) provides more information, better control on output formatting and it does not require root permissions to get actual information.
      Also df -kh seems to show file system info well.
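
      One caveat: the default lsblk output has no filesystem column at all, so to compare it with blkid you have to ask for one explicitly, something along these lines:

      Source Code

      # ask lsblk explicitly for a filesystem type column
      lsblk -o NAME,FSTYPE,LABEL,SIZE,MOUNTPOINT /dev/md0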
      If you make it idiot proof, somebody will build a better idiot.


    • The lsblk command does display the md0 RAID array; look at rows 12 and 14:


      Source Code

      1. # lsblk
      2. NAME      MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
      3. sda         8:0    0   477G  0 disk
      4. `-sda1     8:1    0   477G  0 part  /srv/dev-disk-by-id-usb-JMicron_Tech_DB12345681BC-0-0-part1
      5. sdb         8:16   0 238.5G  0 disk
      6. |-sdb1     8:17   0     1M  0 part
      7. |-sdb2     8:18   0 228.8G  0 part  /
      8. `-sdb3     8:19   0   9.7G  0 part  [SWAP]
      9. sdc         8:32   0 238.5G  0 disk
      10. `-sdc1     8:33   0 238.5G  0 part  /srv/dev-disk-by-label-DataSSD
      11. sdd         8:48   0   1.8T  0 disk
      12. `-md0      9:0    0   1.8T  0 raid1 /srv/dev-disk-by-id-md-name-nas-Matrix
      13. sde         8:64   0   1.8T  0 disk
      14. `-md0      9:0    0   1.8T  0 raid1 /srv/dev-disk-by-id-md-name-nas-Matrix

      I don't think there is a problem with /dev/md0, but I don't have proof of that yet; the same goes for OMV 4 having broken something during the upgrade. Interesting issue.

      df -kh also shows /dev/md0 correctly:

      Source Code

      # df -kh
      Filesystem      Size  Used Avail Use% Mounted on
      udev            7.8G     0  7.8G   0% /dev
      tmpfs           1.6G  149M  1.5G  10% /run
      /dev/sdb2       226G  7.9G  206G   4% /
      tmpfs           7.8G     0  7.8G   0% /dev/shm
      tmpfs           5.0M     0  5.0M   0% /run/lock
      tmpfs           7.8G     0  7.8G   0% /sys/fs/cgroup
      tmpfs            50G  3.9M   50G   1% /tmp
      /dev/sda1       469G   47G  399G  11% /srv/dev-disk-by-id-usb-JMicron_Tech_DB12345681BC-0-0-part1
      /dev/md0        1.8T  269G  1.6T  15% /srv/dev-disk-by-id-md-name-nas-Matrix
      /dev/sdc1       234G   16G  218G   7% /srv/dev-disk-by-label-DataSSD
      overlay         226G  7.9G  206G   4% /var/lib/docker/overlay2/8d82d021a383e5eee56aa9e7b6abf22e6549d9e25048aadcf83222ba26e0fa66/merged
      shm              64M     0   64M   0% /var/lib/docker/containers/1c34752954b20f230bdf18296ef6490d1ad1fa23a8bfab79a5a026150b159823/shm
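
      By the way, if I understand it correctly, df and the lsblk MOUNTPOINT column come from the kernel's mount table, while blkid -p probes the on-disk superblocks directly, so the two don't have to agree. The kernel's view can be checked with:

      Source Code

      # the mount table comes straight from the kernel and involves
      # no on-disk signature probing at all
      grep md0 /proc/mounts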
    • mateuxp wrote:

      Anyone any ideas? Maybe I should file a bug report somewhere?
      This is useless because I can only repeat myself: if the file system is not detected by blkid, then OMV also does not know about it.
    • votdev wrote:

      mateuxp wrote:

      Anyone any ideas? Maybe I should file a bug report somewhere?
      This is useless because I can only repeat myself: if the file system is not detected by blkid, then OMV also does not know about it.
      So the question should be: why does it not show up in blkid, and why does it show up in other tools? Is that why the man page of blkid recommends using lsblk instead?
    • Maybe a kernel issue or something else, but not an OMV issue. Sorry.
    • Is it listed in

      Shell-Script

      # cat /proc/partitions
    • donh wrote:

      Is that why the man page of blkid recommends using lsblk instead?
      Please read the man page more carefully. There is nothing in it saying that blkid does not work as expected. The output of blkid is exactly what is needed: it contains exactly the required information, and it can be parsed much more easily than lsblk's.
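
      To illustrate the parsing point: blkid can emit plain KEY=VALUE pairs that are trivial to consume from a script; a small sketch (using a device whose filesystem is detected):

      Shell-Script

      # machine-readable output, one KEY=VALUE pair per line
      blkid -o export /dev/sdc1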

      By the way, lsblk also does not show a filesystem on your /dev/md0 device.


      mateuxp wrote:

      The lsblk command does display the md0 RAID array; look at rows 12 and 14:


      Source Code

      1. # lsblk
      2. NAME      MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
      3. sda         8:0    0   477G  0 disk
      4. `-sda1     8:1    0   477G  0 part  /srv/dev-disk-by-id-usb-JMicron_Tech_DB12345681BC-0-0-part1
      5. sdb         8:16   0 238.5G  0 disk
      6. |-sdb1     8:17   0     1M  0 part
      7. |-sdb2     8:18   0 228.8G  0 part  /
      8. `-sdb3     8:19   0   9.7G  0 part  [SWAP]
      9. sdc         8:32   0 238.5G  0 disk
      10. `-sdc1     8:33   0 238.5G  0 part  /srv/dev-disk-by-label-DataSSD
      11. sdd         8:48   0   1.8T  0 disk
      12. `-md0      9:0    0   1.8T  0 raid1 /srv/dev-disk-by-id-md-name-nas-Matrix
      13. sde         8:64   0   1.8T  0 disk
      14. `-md0      9:0    0   1.8T  0 raid1 /srv/dev-disk-by-id-md-name-nas-Matrix

    • votdev wrote:

      Is it listed in

      Shell-Script

      # cat /proc/partitions
      It is; see the md0 line (major 9, minor 0):


      Source Code

      cat /proc/partitions
      major minor  #blocks  name

         8        0  500107608 sda
         8        1  500106584 sda1
         8       16  250059096 sdb
         8       17       1024 sdb1
         8       18  239899648 sdb2
         8       19   10157056 sdb3
         8       32  250059096 sdc
         8       33  250058055 sdc1
         8       48 1953514584 sdd
         9        0 1953383488 md0
         8       64 1953514584 sde