File System missing for omv 2.x created RAID 1 after clean install of omv 4.1.22

    • OMV 4.x
    • Resolved

      I could use some help, please...

      I just did a clean install of OMV 4.1.22 (coming from 2.x), and my RAID 1 file system is no longer visible.


      • I did the install with the RAID drives disconnected, shut down the server, connected the drives, and booted the system.
      • The disks are visible in the UI under Storage -> Disks.
      • The /dev/md127 device is visible in the UI under Storage -> RAID Management and is in a clean state.
      • The device is NOT visible in the UI under Storage -> File Systems.
      • Using the SSH command line, I can manually mount /dev/md127 and see the folders and files (see the sketch at the end of this post).
      How do I add the File System without wiping the drives?

      Thank you,
      Michael
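
      For reference, the manual check looked roughly like this (the mount point /mnt/check is just an example, not a path OMV itself would use):

      Source Code

      # the array is already assembled by the kernel; mount read-only to inspect it
      mkdir -p /mnt/check
      mount -o ro /dev/md127 /mnt/check
      ls /mnt/check
      umount /mnt/check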
    • blkid

      Source Code

      /dev/sdc1: UUID="dfcfd4d0-f90e-4f0b-8b02-2f27a2cf3933" TYPE="ext4" PARTUUID="547183c6-01"
      /dev/sda: UUID="013a05fd-f0ec-2693-7783-97f521f7112b" UUID_SUB="3fc8f6b3-a18f-2a1d-e2a0-902bb7eab1a9" LABEL="nas:Mirror02" TYPE="linux_raid_member"
      /dev/sdb: UUID="013a05fd-f0ec-2693-7783-97f521f7112b" UUID_SUB="0941be5c-5b58-ec7d-c4c0-cf9c9853bb27" LABEL="nas:Mirror02" TYPE="linux_raid_member"
    • fdisk -l | grep "Disk "

      Source Code

      Disk /dev/sdc: 111.8 GiB, 120034123776 bytes, 234441648 sectors
      Disk identifier: 0x547183c6
      Disk /dev/sda: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
      Disk /dev/sdb: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
      Disk /dev/md127: 2.7 TiB, 3000458608640 bytes, 5860270720 sectors
    • cat /etc/mdadm/mdadm.conf

      Source Code

      # mdadm.conf
      #
      # Please refer to mdadm.conf(5) for information about this file.
      #
      # by default, scan all partitions (/proc/partitions) for MD superblocks.
      # alternatively, specify devices to scan, using wildcards if desired.
      # Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
      # To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
      # used if no RAID devices are configured.
      DEVICE partitions
      # auto-create devices with Debian standard permissions
      CREATE owner=root group=disk mode=0660 auto=yes
      # automatically tag new arrays as belonging to the local system
      HOMEHOST <system>
      # definitions of existing MD arrays
    • This is sadly a known bug in blkid: it is sometimes unable to identify the filesystem on a RAID array that was created with a previous Debian version. You're not the first with this problem. Unfortunately there is no way to work around it, because OMV uses blkid to identify filesystems. Most users backed up their data and recreated the RAID.
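
      If you want to see exactly what blkid can and cannot detect, a low-level probe that bypasses the cache is a quick check - a sketch, using /dev/md127 from the output above:

      Source Code

      blkid /dev/md127      # cached lookup, what OMV effectively relies on
      blkid -p /dev/md127   # low-level probe of the superblocks, no cache
      file -s /dev/md127    # independent look at the filesystem signature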
      Absolutely no support through PM!

      I must not fear.
      Fear is the mind-killer.
      Fear is the little-death that brings total obliteration.
      I will face my fear.
      I will permit it to pass over me and through me.
      And when it has gone past I will turn the inner eye to see its path.
      Where the fear has gone there will be nothing.
      Only I will remain.

      Litany against fear by Bene Gesserit
    • Hi Geaves,


      I ran omv-mkconf mdadm and now mdadm.conf is showing the new definition. However, the UI is not showing a matching file system. Any other ideas?


      Source Code

      # mdadm.conf
      #
      # Please refer to mdadm.conf(5) for information about this file.
      #
      # by default, scan all partitions (/proc/partitions) for MD superblocks.
      # alternatively, specify devices to scan, using wildcards if desired.
      # Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
      # To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
      # used if no RAID devices are configured.
      DEVICE partitions
      # auto-create devices with Debian standard permissions
      CREATE owner=root group=disk mode=0660 auto=yes
      # automatically tag new arrays as belonging to the local system
      HOMEHOST <system>
      # definitions of existing MD arrays
      ARRAY /dev/md/nas:Mirror02 metadata=1.2 name=nas:Mirror02 UUID=013a05fd:f0ec2693:778397f5:21f7112b
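
      A follow-up step sometimes suggested after regenerating mdadm.conf is to rebuild the initramfs and re-probe the array - a sketch of the usual sequence:

      Source Code

      omv-mkconf mdadm      # regenerate mdadm.conf from the detected arrays
      update-initramfs -u   # rebuild the initramfs so it picks up the new config
      blkid -p /dev/md127   # re-probe the array for a filesystem signature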
    • I caught the following two screens this morning. My monitors do not wake up fast enough to see these when the system first boots.

      What is the ACPI BIOS Error?
      What is the tpm_try_transmit: send(): error -62?

      These errors appear after the blue Debian boot/selection screen.

      Thank you,
      Michael
      Images
      • IMG_6353.JPG
      • IMG_6355.JPG
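
      In case it helps, those messages can usually be pulled from the logs after boot rather than caught on screen - for example:

      Source Code

      dmesg | grep -iE 'acpi|tpm'              # kernel ring buffer for this boot
      journalctl -k -b | grep -iE 'acpi|tpm'   # the same via the systemd journal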
    • Some users have had luck switching to the Proxmox kernel, which you can install with omv-extras.
      omv 4.1.23 arrakis | 64 bit | 4.15 proxmox kernel | omvextrasorg 4.1.15
      omv-extras.org plugins source code and issue tracker - github

      Please read this before posting a question and this and this for docker questions.
      Please don't PM for support... Too many PMs!
    • mlucas wrote:

      I gave the kernel a try, but it did not solve my ACPI errors or my missing file system.
      If you have the data backed up (you should, since RAID is not backup :) ), you could try upgrading the superblock version to see if it fixes the problem - raid.wiki.kernel.org/index.php…tween_superblock_versions
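
      A rough sketch of what that wiki page describes (check the current version first; the --metadata=1.0 target is just an example, see the wiki for which conversions preserve data; only attempt the recreate with verified backups, and the level and device order must match the original array exactly):

      Source Code

      mdadm --examine /dev/sda | grep Version   # which superblock version the array uses now
      # DANGER: recreate-in-place per the wiki - data survives only if the
      # layout matches and --assume-clean skips the resync
      mdadm --stop /dev/md127
      mdadm --create /dev/md127 --metadata=1.0 --level=1 --raid-devices=2 \
            --assume-clean /dev/sda /dev/sdb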
    • mlucas wrote:

      How do I delete all of my RAID configurations and start again?
      From memory, you can only remove one drive at a time before you can delete the array. My apologies, but if I've read your output correctly, TYPE="ext4" is missing from the blkid output for the array, hence @votdev's reference.

      Another option, instead of RAID, is to use one drive for data and run the rsnapshot plugin to back it up to the second.
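
      If you do decide to start over, the usual command-line sequence (a sketch, using the device names from your earlier output - this destroys the array metadata) is:

      Source Code

      umount /dev/md127                  # if it is mounted anywhere
      mdadm --stop /dev/md127            # stop the array
      mdadm --zero-superblock /dev/sda   # wipe the RAID superblock on each member
      mdadm --zero-superblock /dev/sdb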
      Raid is not a backup! Would you go skydiving without a parachute?