Missing RAID 0 after reboot

    • OMV 3.x
    • Resolved

      Hi Guys,

      Celeron J3455
      ASROCK J3455 Mainboard
      OMV: 3.0.80

      /dev/sdc USB Flash Lexar 32GB
      /dev/sda Model: ST2000LM015-2E81 SATA 2TB
      /dev/sdb Model: ST2000LM015-2E81 SATA 2TB


I need your help on this one. I know the OMV UI doesn't support creating RAID arrays on partitions; this is being discussed here:
      bugtracker.openmediavault.org/print_bug_page.php?bug_id=1575

Nevertheless, the UI does see RAID arrays that were created on partitions via the command line:

      Shell-Script

mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sda1 /dev/sdb1
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2


This works fine, and the OMV UI sees the two RAID arrays. Unfortunately, after a reboot md0 (the RAID0) is gone; only md1 is still visible. This is what I have after the reboot:

      1.) /proc/mdstat

      Shell-Script

Personalities : [raid1]
md1 : active raid1 sda2[0] sdb2[1]
      1900953664 blocks super 1.2 [2/2] [UU]
      bitmap: 0/15 pages [0KB], 65536KB chunk

unused devices: <none>


      2.) blkid


      Shell-Script

/dev/sda2: UUID="cb980f56-6bd6-5fa8-479c-f3ccf7261796" UUID_SUB="c369cf7e-3776-0ec7-8a84-8ee8c15a0bbe" LABEL="omv:1" TYPE="linux_raid_member" PARTUUID="571814d0-1885-4559-ac14-15cac345587f"
/dev/sdb1: UUID="c23e181c-2314-1848-0493-a6608eb85018" UUID_SUB="8dcd5687-75b0-cc90-6eb2-5cfd6f62a436" LABEL="omv:0" TYPE="linux_raid_member" PARTUUID="f27aa250-9bfa-4369-b149-e41c0b14e37c"
/dev/sdb2: UUID="cb980f56-6bd6-5fa8-479c-f3ccf7261796" UUID_SUB="81780ab5-44d8-855b-6f79-e2adc5b8d374" LABEL="omv:1" TYPE="linux_raid_member" PARTUUID="d89b6866-0f75-44da-81ae-a3719e9ce7a2"
/dev/sdc1: UUID="9905c597-514d-4c09-a29d-eac23dd30ce6" TYPE="ext4" PARTUUID="95ae9ef9-01"
/dev/sdc5: UUID="a99c20ef-bd47-4932-8dda-fda82c0b0ef2" TYPE="swap" PARTUUID="95ae9ef9-05"
/dev/sda1: PTUUID="fa6140dc-ca57-4b76-916c-534e8b8565a7" PTTYPE="gpt" PARTUUID="001fc063-dcf9-4c32-80c6-43466e2be75b"
/dev/md1: LABEL="STORAGE" UUID="bfef535a-4ad5-4e11-8994-d46bbfa89117" TYPE="ext4"
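
Note the anomaly in this output: /dev/sda1 is reported with PTTYPE="gpt" (a partition-table signature) instead of TYPE="linux_raid_member" like the other array members. A non-destructive way to list every signature present on the partition (a suggested check, not part of the original post) is wipefs:

Shell-Script

# List all signatures (partition table, RAID superblock, filesystem)
# found on /dev/sda1; read-only, nothing is erased without the -a flag
wipefs /dev/sda1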


      3.) fdisk -l | grep "Disk "


      Shell-Script

Disk /dev/sda: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Disk identifier: 5BFB653E-CD3F-4C83-8555-2C1C82783CCC
Disk /dev/sdb: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Disk identifier: 7560E4AF-718F-4431-B1DD-A10A47351A62
Disk /dev/sdc: 29.8 GiB, 32008830976 bytes, 62517248 sectors
Disk identifier: 0x95ae9ef9
Disk /dev/md1: 1.8 TiB, 1946576551936 bytes, 3801907328 sectors

      4.) cat /etc/mdadm/mdadm.conf



      Shell-Script

# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#
# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
# Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
# To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
# used if no RAID devices are configured.
DEVICE partitions
# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes
# automatically tag new arrays as belonging to the local system
HOMEHOST <system>
# definitions of existing MD arrays
MAILADDR me@gmail.com
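Note that there are no ARRAY lines after the "definitions of existing MD arrays" comment, so mdadm.conf records neither md0 nor md1. That alone does not explain the failure (md1 assembles anyway), but persisting the definitions is standard practice; a minimal sketch, essentially what the fix at the end of this thread does:

Shell-Script

# Append ARRAY definitions for the arrays found on disk
mdadm --examine --scan >> /etc/mdadm/mdadm.conf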

5.) mdadm --detail --scan --verbose



      Source Code

ARRAY /dev/md/1 level=raid1 num-devices=2 metadata=1.2 name=omv:1 UUID=cb980f56:6bd65fa8:479cf3cc:f7261796
   devices=/dev/sda2,/dev/sdb2


6.) fdisk /dev/sda -l

      Shell-Script

root@omv:/tmp# fdisk /dev/sda -l
Disk /dev/sda: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 5BFB653E-CD3F-4C83-8555-2C1C82783CCC

Device         Start        End    Sectors  Size Type
/dev/sda1       2048  104859647  104857600   50G Linux RAID
/dev/sda2  104859648 3907029134 3802169487  1.8T Linux RAID


      Grateful for any suggestions.

      Thanks


• Finally got it working. This thread helped:
      unix.stackexchange.com/questio…y-disappears-after-reboot

mdadm --stop /dev/md<number>
mdadm --zero-superblock /dev/sda1 (do this for all your RAID member partitions, i.e. /dev/sdb1 as well)

dd if=/dev/zero of=/dev/sda1 bs=1M count=10

This will zero out the first 10 megabytes of /dev/sda1. Repeat for /dev/sdb1. Recreate the array after that, and then /dev/md0 should come up on boot.
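
A gentler alternative to dd (my suggestion, not part of the original fix) is wipefs, which removes only the recognized signatures instead of overwriting the first megabytes wholesale:

Shell-Script

# Erase every detected signature (stale GPT header, old RAID superblock, ...)
# from the member partitions
wipefs -a /dev/sda1
wipefs -a /dev/sdb1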

      - recreated Array md0
      - mdadm --examine --scan >> /etc/mdadm/mdadm.conf
      - omv-mkconf mdadm
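
On a Debian-based system such as OMV, one more step may be worth adding (an assumption on my part; the original fix did not include it): early-boot assembly uses the copy of mdadm.conf embedded in the initramfs, so rebuilding it ensures the new ARRAY definition is visible at boot:

Shell-Script

# Rebuild the initramfs so early boot sees the updated /etc/mdadm/mdadm.conf
update-initramfs -u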

Something was wrong with the partition signature, I guess: as the blkid output above shows, /dev/sda1 carried a stale PTTYPE="gpt" signature instead of being recognized as a linux_raid_member.
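
A quick way to verify the repair after the next reboot (a suggested check, not from the original post):

Shell-Script

# Both arrays should be listed as active
cat /proc/mdstat
# Both member partitions should now report TYPE="linux_raid_member"
blkid /dev/sda1 /dev/sdb1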