Missing RAID 0 after reboot

  • Hi Guys,


    Celeron J3455
    ASROCK J3455 Mainboard
    OMV: 3.0.80


    /dev/sdc USB Flash Lexar 32GB
    /dev/sda Model: ST2000LM015-2E81 SATA 2TB
    /dev/sdb Model: ST2000LM015-2E81 SATA 2TB



    I need your help on this one. I know the OMV UI doesn't support creating RAID on partitions; this is being discussed here:
    http://bugtracker.openmediavau…_bug_page.php?bug_id=1575


    Nevertheless, OMV does see RAID arrays created on partitions via the command line.


    Bash
    mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sda1 /dev/sdb1
    mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2


    This works fine, and the OMV UI sees the two RAID arrays. Unfortunately, when I reboot, md0 (RAID 0) is gone; only md1 is still visible. After the reboot this is what I have:
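    A common reason an array vanishes after a reboot is that it was never recorded in /etc/mdadm/mdadm.conf (and the initramfs), so it is not assembled at boot. A quick check, sketched below (run as root; device and array names match this thread):

```shell
# Arrays the kernel has assembled right now:
mdadm --detail --scan

# Arrays recorded for assembly at boot:
grep '^ARRAY' /etc/mdadm/mdadm.conf

# If md0 is missing from the conf, append it and rebuild the initramfs
# so the array is assembled early during boot:
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u
```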


    1.) /proc/mdstat


    Bash
    Personalities : [raid1]
    md1 : active raid1 sda2[0] sdb2[1]
          1900953664 blocks super 1.2 [2/2] [UU]
          bitmap: 0/15 pages [0KB], 65536KB chunk
    
    
    unused devices: <none>


    2.) blkid



    Bash
    /dev/sda2: UUID="cb980f56-6bd6-5fa8-479c-f3ccf7261796" UUID_SUB="c369cf7e-3776-0ec7-8a84-8ee8c15a0bbe" LABEL="omv:1" TYPE="linux_raid_member" PARTUUID="571814d0-1885-4559-ac14-15cac345587f"
    /dev/sdb1: UUID="c23e181c-2314-1848-0493-a6608eb85018" UUID_SUB="8dcd5687-75b0-cc90-6eb2-5cfd6f62a436" LABEL="omv:0" TYPE="linux_raid_member" PARTUUID="f27aa250-9bfa-4369-b149-e41c0b14e37c"
    /dev/sdb2: UUID="cb980f56-6bd6-5fa8-479c-f3ccf7261796" UUID_SUB="81780ab5-44d8-855b-6f79-e2adc5b8d374" LABEL="omv:1" TYPE="linux_raid_member" PARTUUID="d89b6866-0f75-44da-81ae-a3719e9ce7a2"
    /dev/sdc1: UUID="9905c597-514d-4c09-a29d-eac23dd30ce6" TYPE="ext4" PARTUUID="95ae9ef9-01"
    /dev/sdc5: UUID="a99c20ef-bd47-4932-8dda-fda82c0b0ef2" TYPE="swap" PARTUUID="95ae9ef9-05"
    /dev/sda1: PTUUID="fa6140dc-ca57-4b76-916c-534e8b8565a7" PTTYPE="gpt" PARTUUID="001fc063-dcf9-4c32-80c6-43466e2be75b"
    /dev/md1: LABEL="STORAGE" UUID="bfef535a-4ad5-4e11-8994-d46bbfa89117" TYPE="ext4"
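    The blkid output above already contains the clue: /dev/sdb1, /dev/sda2 and /dev/sdb2 all report TYPE="linux_raid_member", but /dev/sda1 does not (it only shows a stray PTTYPE="gpt"), so mdadm has no superblock on it to assemble md0 from at boot. A small sketch that picks out the odd partition, using the quoted output (abridged to the fields that matter) as sample data:

```shell
#!/bin/sh
# Sample data: the blkid lines quoted above, reduced to device and TYPE.
blkid_out='/dev/sda2: TYPE="linux_raid_member"
/dev/sdb1: TYPE="linux_raid_member"
/dev/sdb2: TYPE="linux_raid_member"
/dev/sdc1: TYPE="ext4"
/dev/sdc5: TYPE="swap"
/dev/sda1: PTTYPE="gpt"
/dev/md1: TYPE="ext4"'

# Partitions on the two RAID disks that lack an md superblock signature:
missing=$(printf '%s\n' "$blkid_out" \
  | grep '^/dev/sd[ab]' \
  | grep -v 'linux_raid_member' \
  | cut -d: -f1)
echo "$missing"
```

Here only /dev/sda1 is printed, which matches md0 (sda1 + sdb1) being the array that fails to assemble.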


    3.) fdisk -l | grep "Disk "



    Bash
    Disk /dev/sda: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
    Disk identifier: 5BFB653E-CD3F-4C83-8555-2C1C82783CCC
    Disk /dev/sdb: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
    Disk identifier: 7560E4AF-718F-4431-B1DD-A10A47351A62
    Disk /dev/sdc: 29.8 GiB, 32008830976 bytes, 62517248 sectors
    Disk identifier: 0x95ae9ef9
    Disk /dev/md1: 1.8 TiB, 1946576551936 bytes, 3801907328 sectors


    4.) cat /etc/mdadm/mdadm.conf





    5.) mdadm --detail --scan --verbose




    Code
    ARRAY /dev/md/1 level=raid1 num-devices=2 metadata=1.2 name=omv:1 UUID=cb980f56:6bd65fa8:479cf3cc:f7261796
       devices=/dev/sda2,/dev/sdb2

    6.) fdisk /dev/sda -l



    Grateful for any suggestions.


    Thanks

  • Finally got it working. This thread helped:
    https://unix.stackexchange.com…y-disappears-after-reboot


    mdadm --stop /dev/md<number>
    mdadm --zero-superblock /dev/sda1
    mdadm --zero-superblock /dev/sdb1
    (do this for all your RAID partitions)


    dd if=/dev/zero of=/dev/sda1 bs=1M count=10
    This zeroes out the first 10 megabytes of /dev/sda1. Repeat for /dev/sdb1. Recreate the array after that, and /dev/md0 should come up on boot.


    - recreated Array md0
    - mdadm --examine --scan >> /etc/mdadm/mdadm.conf
    - omv-mkconf mdadm
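    For anyone following along, the steps above can be sketched as one sequence. This is destructive (it wipes the listed partitions), so double-check the device names against your own system; omv-mkconf is the OMV 3.x helper used in this thread:

```shell
# 1. Stop the broken array and wipe the stale md superblocks (DESTRUCTIVE).
mdadm --stop /dev/md0
mdadm --zero-superblock /dev/sda1
mdadm --zero-superblock /dev/sdb1

# 2. Clear any leftover signatures at the start of each partition.
dd if=/dev/zero of=/dev/sda1 bs=1M count=10
dd if=/dev/zero of=/dev/sdb1 bs=1M count=10

# 3. Recreate the RAID 0 array.
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sda1 /dev/sdb1

# 4. Persist the array so it assembles at boot.
mdadm --examine --scan >> /etc/mdadm/mdadm.conf
omv-mkconf mdadm
```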


    Something was wrong with the partition signature, I guess.
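    If you want to confirm it really was a missing or stale signature, two read-only checks are handy (a sketch; wipefs without -a only lists signatures, it does not erase anything):

```shell
# Show any md superblock metadata present on the partition:
mdadm --examine /dev/sda1

# List all filesystem/RAID signatures visible on the partition:
wipefs /dev/sda1
```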
