File System missing for omv 2.x created RAID 1 after clean install of omv 4.1.22

  • I could use some help, please...

    I just did a clean install of OMV 4.1.22 (upgrading from 2.x), and the file system on my RAID 1 array is no longer visible.

    • I did the install with the RAID drives disconnected, shut down server, connected the drives and booted the system.
    • The disks are visible in the UI Storage -> Disks
    • The /dev/md127 device is visible in the UI Storage -> Raid Management and is in a Clean state
    • The device is NOT visible in the UI Storage -> File System
    • Using the SSH command line, I can manually mount /dev/md127 and see the folders and files.

    How do I add the File System without wiping the drives?

    Thank you,
    Michael
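For reference, a minimal sketch of the manual check described in the post. The array device `/dev/md127` comes from the thread; the mount point is an assumption, and the mount is done read-only so nothing on the array changes:

```shell
# Manually mount the RAID array read-only and inspect its contents.
# /dev/md127 is the array from the thread; /mnt/raidcheck is an assumption.
ARRAY=/dev/md127
MNT=/mnt/raidcheck

if [ -b "$ARRAY" ]; then
    mkdir -p "$MNT"
    mount -o ro "$ARRAY" "$MNT"   # read-only: nothing on the array changes
    ls "$MNT"                     # the folders and files should appear here
    umount "$MNT"
else
    echo "array $ARRAY not present on this machine"
fi
```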

  • cat /proc/mdstat

    Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
    md127 : active raid1 sdb[1] sda[0]
    2930135360 blocks super 1.2 [2/2] [UU]
    unused devices: <none>
  • blkid

    /dev/sdc1: UUID="dfcfd4d0-f90e-4f0b-8b02-2f27a2cf3933" TYPE="ext4" PARTUUID="547183c6-01"
    /dev/sda: UUID="013a05fd-f0ec-2693-7783-97f521f7112b" UUID_SUB="3fc8f6b3-a18f-2a1d-e2a0-902bb7eab1a9" LABEL="nas:Mirror02" TYPE="linux_raid_member"
    /dev/sdb: UUID="013a05fd-f0ec-2693-7783-97f521f7112b" UUID_SUB="0941be5c-5b58-ec7d-c4c0-cf9c9853bb27" LABEL="nas:Mirror02" TYPE="linux_raid_member"
  • fdisk -l | grep "Disk "

    Disk /dev/sdc: 111.8 GiB, 120034123776 bytes, 234441648 sectors
    Disk identifier: 0x547183c6
    Disk /dev/sda: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
    Disk /dev/sdb: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
    Disk /dev/md127: 2.7 TiB, 3000458608640 bytes, 5860270720 sectors
  • cat /etc/mdadm/mdadm.conf

  • This is sadly a known bug in blkid. It seems that it is sometimes not possible to identify the file systems on a RAID array that was created with a previous Debian version. You're not the first with this problem. Sadly, there is no way to work around it, because OMV uses blkid to identify file systems. Most users backed up their data and recreated the RAID.
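One way to narrow this down is to probe the array device directly with blkid's low-level scanner, which bypasses the blkid cache that the UI relies on; `wipefs -n` is an independent second opinion that lists on-disk signatures without erasing anything. The device name is the one from this thread:

```shell
# Probe /dev/md127 directly, bypassing the blkid cache.
ARRAY=/dev/md127
if [ -b "$ARRAY" ]; then
    blkid -p "$ARRAY"     # -p: low-level probe of the device, ignore the cache
    wipefs -n "$ARRAY"    # -n: no-act, just list on-disk signatures
else
    echo "array $ARRAY not present on this machine"
fi
```

If the low-level probe reports `TYPE="ext4"` but the cached lookup does not, the data is intact and only the detection path is at fault.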

  • Hi Geaves,

    I ran omv-mkconf mdadm, and mdadm.conf now shows the new array definition. However, the UI still does not show a matching file system. Any other ideas?
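The step above, with two follow-up checks added as a sketch: `omv-mkconf` is OMV 4.x's config generator, and regenerating the initramfs afterwards is standard Debian practice so the array is assembled early at boot (the grep just shows what a correct entry looks like):

```shell
# Regenerate mdadm.conf from the live arrays and verify the result.
CONF=/etc/mdadm/mdadm.conf
if command -v omv-mkconf >/dev/null 2>&1; then
    omv-mkconf mdadm          # rewrite /etc/mdadm/mdadm.conf
    grep '^ARRAY' "$CONF"     # a correct entry names md127 and its UUID
    update-initramfs -u       # so the array is assembled early at boot
else
    echo "omv-mkconf not installed on this machine"
fi
```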

  • Hi votdev,

    I'm hoping I can get it working. In the meantime, I have copied the data from my manual mount to another drive as a backup.
    Fingers crossed I can get it working without formatting the drives.

    Thank you,

  • I caught the following two screens this morning; my monitors do not wake up fast enough for me to see them when the system first boots.

    What is the ACPI BIOS Error?
    What is the tpm_try_transmit: send(): error -62?

    These errors appear after the blue Debian boot/selection screen.

    Thank you,

  • I gave the kernel a try, but it did not solve my ACPI errors or my missing file system.

    If you have the data backed up (you should, since RAID is not backup :) ), you could try upgrading the superblock version to see if it fixes the problem -…tween_superblock_versions
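Before attempting any superblock change, it is worth confirming what metadata version the array actually uses; note that the `/proc/mdstat` output earlier in the thread already reports `super 1.2`. A non-destructive check, using the device names from this thread:

```shell
# Inspect the metadata version at the array level and on one member disk.
MD=/dev/md127
if [ -b "$MD" ]; then
    mdadm --detail "$MD"     | grep -i version   # array-level metadata
    mdadm --examine /dev/sda | grep -i version   # per-member superblock
else
    echo "$MD not assembled on this machine"
fi
```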


  • I give up.

    How do I delete all of my RAID configurations and start again? When I try to delete them using the UI, nothing happens when I click the OK button after selecting the two drives in the RAID 1 array.

    Thank you,

  • How do I delete all of my RAID configurations and start again?

    From memory, you can only remove one drive at a time before you can delete the array. And my apologies: if I read your output correctly, TYPE="ext4" is missing from the blkid output for /dev/md127, hence @votdev's reference.

    Another option, instead of RAID, is to use one drive for data and run the rsnapshot plugin to copy it to the second.
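If the UI refuses to delete the array, the same teardown can be done from the command line. This is destructive and should only be run once the backup is verified; the device names are the ones from this thread:

```shell
# Destructive: stop the array and wipe the RAID metadata from its members.
MD=/dev/md127
if [ -b "$MD" ]; then
    umount "$MD" 2>/dev/null || true            # in case it is still mounted
    mdadm --stop "$MD"                          # deactivate the array
    mdadm --zero-superblock /dev/sda /dev/sdb   # erase the RAID superblocks
else
    echo "$MD not present; nothing to tear down"
fi
```

After this, the drives appear as plain disks again and can be re-used, whether for a new array or for the one-drive-plus-rsnapshot layout suggested above.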
