"Missing" RAID filesystem

  • Hi,


    I've upgraded my OMV 3 to the newest 4.0.14-1 (Arrakis). After that, I noticed that the filesystem of my RAID 1 array is missing from the "Filesystem" tab:

    The RAID 1 array labeled Matrix is mounted and in a clean state:

    I've also checked via SSH that I can see the files on the Matrix array.
    Basic commands used to troubleshoot the problem:


    cat /proc/mdstat

    Code
    Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
    md0 : active raid1 sde[1] sdd[0]
          1953383488 blocks super 1.2 [2/2] [UU]
          bitmap: 0/15 pages [0KB], 65536KB chunk

    unused devices: <none>


    blkid


    Code
    root@secure:/# blkid
    /dev/sda1: LABEL="CACHE" UUID="a08721ac-0b40-4d34-8508-7550b76a803d" TYPE="ext4" PARTUUID="893c809b-01"
    /dev/sdb2: UUID="d7446739-3636-4258-acef-4e44c1a374e2" TYPE="ext4" PARTUUID="64a507c7-361e-4c04-b4e2-aed8becdcb4c"
    /dev/sdc1: LABEL="DataSSD" UUID="c26c316d-7469-45f0-b509-58eb516654bd" TYPE="ext4" PARTUUID="bff87c46-2a18-401f-a560-9d4e24208c4d"
    /dev/sdd: UUID="df704e6d-b0ec-d791-5d09-09cdb0c6a6c3" UUID_SUB="17fdb3c3-ae58-f912-c38f-a3b06d0f7a24" LABEL="nas:Matrix" TYPE="linux_raid_member"
    /dev/sde: UUID="df704e6d-b0ec-d791-5d09-09cdb0c6a6c3" UUID_SUB="a28b7c92-e3ed-1bbb-0467-55016d663085" LABEL="nas:Matrix" TYPE="linux_raid_member"
    /dev/sdb1: PARTUUID="0cb69419-e832-4e3f-8f2d-7fd377871fc4"


    fdisk -l | grep "Disk "

    Code
    root@secure:/# fdisk -l | grep "Disk "
    Disk /dev/sda: 477 GiB, 512110190592 bytes, 1000215216 sectors
    Disk identifier: 0x893c809b
    Disk /dev/sdb: 238.5 GiB, 256060514304 bytes, 500118192 sectors
    Disk identifier: 0EBBA207-0B19-4F27-8BEC-70088765E75B
    Disk /dev/sdc: 238.5 GiB, 256060514304 bytes, 500118192 sectors
    Disk identifier: 28024B9D-9A47-4130-9349-83B733347DCF
    Disk /dev/sdd: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
    Disk /dev/md0: 1.8 TiB, 2000264691712 bytes, 3906766976 sectors
    Disk /dev/sde: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors


    cat /etc/mdadm/mdadm.conf


    mdadm --detail --scan --verbose

    Code
    root@secure:/# mdadm --detail --scan --verbose
    ARRAY /dev/md0 level=raid1 num-devices=2 metadata=1.2 name=nas:Matrix UUID=df704e6d:b0ecd791:5d0909cd:b0c6a6c3
    devices=/dev/sdd,/dev/sde
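
    If the ARRAY line for md0 turns out to be missing from /etc/mdadm/mdadm.conf (its contents aren't shown above), the usual Debian-style fix is to append the scan output and rebuild the initramfs. This is only a sketch: it must run as root, and the config file should be backed up first.

```shell
# Back up the current config before touching it (run as root):
cp /etc/mdadm/mdadm.conf /etc/mdadm/mdadm.conf.bak

# Append the array definition the running kernel reports:
mdadm --detail --scan >> /etc/mdadm/mdadm.conf

# Rebuild the initramfs so the array is assembled at boot:
update-initramfs -u
```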


    uname -a

    Code
    root@secure:/# uname -a
    Linux secure 4.13.0-0.bpo.1-amd64 #1 SMP Debian 4.13.13-1~bpo9+1 (2017-11-22) x86_64 GNU/Linux


    omv-sysinfo



    Please help me to resolve this issue,
    thank you in advance for every reply!


    Mateusz

  • Which issue exactly? That you're installing OMV releases that aren't released yet but are still in the testing/development stage?

    Isn't that the whole idea of alpha/beta releases: to get users to test them and report back their findings?


    So let's rephrase: "Is this a bug in OMV 4, or what should I do to get my RAID visible in the OMV GUI?"

  • What is the output of


    Bash
    # udevadm info --query=property --name=/dev/md0

    please find output below:



  • I've found something else related to this issue. When I tried to add a shared folder, I got the following error:



    I can't choose a "Device" for the "Shared folder" because of this error. I'm totally confused: how is it possible to have two filesystems on one array? I don't think that is actually the case.


    fdisk -l


  • I am also confused that blkid does not display the file system. There seems to be no problem with the MD device itself. So the problem is not OMV: if a file system is not shown by 'blkid', then OMV cannot detect it.
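
    One thing worth ruling out here (my suggestion, not something tried in this thread) is a stale blkid cache: blkid normally answers from its cache file rather than re-probing the disk. Forcing a cache-less lookup and a low-level probe would show whether the superblock itself is detectable.

```shell
# Ask blkid to ignore its cache file and query the device directly:
blkid -c /dev/null /dev/md0

# Low-level superblock probing (bypasses the cache entirely; needs root):
blkid -p /dev/md0
```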

  • Does lsblk show the md?


    From blkid manpage.


    Code
    It is recommended to use lsblk(8) command to get information about block devices rather than blkid. lsblk(8) provides more information, better control on output formatting and it does not require root permissions to get actual information.

    Also, df -kh seems to show the file system info correctly.
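
    For completeness, lsblk can be asked for the filesystem columns explicitly (a sketch using standard util-linux options), which makes it easy to compare against what blkid reports:

```shell
# Print the device tree together with filesystem type, label and UUID:
lsblk -o NAME,FSTYPE,LABEL,UUID,SIZE,MOUNTPOINT
```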

    If you make it idiot proof, somebody will build a better idiot.


  • The lsblk command does display the md0 RAID array; look at rows 12 and 14:




    I don't think there is a problem with /dev/md0, but I can't prove that yet, nor that OMV 4 broke something during the upgrade. Interesting issue.


    df -kh also shows /dev/md0 correctly.


  • This is useless because I can only repeat myself: if the file system is not detected by 'blkid', then OMV does not know about it either.

    So the question should be why it does not show up in blkid while it does show up in other tools. Is that why the man page of blkid recommends using lsblk instead?


  • Is that why the man page of blkid recommends using lsblk instead?

    Please read the man page more carefully. It does not say anywhere that blkid is not working as expected. The output of blkid is exactly what is needed: it contains exactly the required information, and it can be parsed much more easily than lsblk's output.
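
    The parseability point is easy to illustrate: blkid's `-o export` mode prints one KEY=VALUE pair per line, which splits trivially on `=`. The snippet below works on a hand-written sample in that format, not on live output from the system in this thread (the UUID is a placeholder):

```shell
# Sample text in the style of `blkid -o export /dev/md0`
# (placeholder values, not captured from the machine discussed here):
sample='DEVNAME=/dev/md0
LABEL=Matrix
UUID=0f1a2b3c-0000-0000-0000-000000000000
TYPE=ext4'

# Extract a single field with awk:
printf '%s\n' "$sample" | awk -F= '$1 == "TYPE" { print $2 }'   # -> ext4
```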


    By the way, lsblk also does not list your /dev/md0 device.



    The lsblk command does display the md0 RAID array; look at rows 12 and 14:



  • Is it listed in


    Bash
    # cat /proc/partitions

    It is:


