RAID 5 missing

  • Hi guys


    Today I opened the web GUI of my OMV server and saw that one of my three 6TB hard drives is broken. So I tried to find out which one no longer works and replace it (by disconnecting and reconnecting the hard drives). After I restarted the system, my whole RAID 5 was missing. How can I reconnect the RAID 5 without losing any data?


    Thanks for your help


    AGESTREAM
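For what it's worth, a failed member can usually be located from /proc/mdstat without pulling drives by trial: the bracketed status string shows an underscore in the dead slot. A small illustrative helper (the function name and sample strings are hypothetical, not from this thread):

```shell
# Report the zero-based slot of a failed member from an mdstat status
# string such as "[UUU]" or "[UU_]" (underscore = failed/missing disk).
failed_slot() {
    local s=${1#\[}
    s=${s%\]}
    local i=0
    while [ "$i" -lt "${#s}" ]; do
        if [ "${s:i:1}" = "_" ]; then echo "$i"; return; fi
        i=$((i + 1))
    done
    echo "none"
}

# On a live system the status string can be pulled with e.g.:
#   grep -o '\[[U_]*\]' /proc/mdstat | head -n 1
failed_slot "[UU_]"   # prints: 2  (third slot is the dead one)
```

Knowing the slot, mdadm --detail on the array maps it to a kernel device name, and the drive's serial number ties that to the physical disk.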

  • Same problem here.
    I redesigned my complete NAS. I zeroed the superblocks of my old RAID5 (4*2TB HDDs), installed the current OMV version, and connected my HDDs (now 5*2TB).
    I created the RAID5 following Intel's best practice for Intel RST:
    http://www.intel.com/content/d…apers/rst-linux-paper.pdf
    On page 9 they describe how to create the RAID 5. Everything seemed to be fine. Creating the filesystem (EXT4) was surprisingly fast, but everything looked good and there were no entries in the syslog.
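For context, the container-then-volume creation described in that guide looks roughly like the sketch below. It is guarded by a dry-run flag; the -n/-l values match this 5-disk setup, and the device names are taken from the later output in this thread, so verify everything against the guide before running it for real.

```shell
# Sketch of the Intel RST creation steps (destructive when run for real,
# hence the dry-run guard). An IMSM container is created first, then the
# RAID5 volume inside it.
DRY_RUN=1
run() { [ "$DRY_RUN" = 1 ] && echo "would run: $*" || "$@"; }

run mdadm -C /dev/md/imsm /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde -n 5 -e imsm
run mdadm -C /dev/md/vol0 /dev/md/imsm -n 5 -l 5
```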


    So I wanted to copy my backup to the new RAID5, but the RAID is missing, just like in the screenshot from AGESTREAM.
    The Intel RST RAID container is still present, but marked as inactive. The actual RAID5 is completely gone, although it's still listed in mdadm.conf.


    One remark, even though the problem doesn't seem to be an OMV issue: I'm using OMV 2.1.


    Here's the output of the requested commands:


    root@DRDnas:/var/log# cat /proc/mdstat
    Personalities :
    md127 : inactive sde[4](S) sda[3](S) sdb[2](S) sdd[1](S) sdc[0](S)
    15765 blocks super external:imsm
    unused devices: <none>
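In that output every member shows up as a spare, (S), inside an inactive container, and no RAID personality is loaded at all. A tiny sketch of how such a state can be spotted programmatically; it runs here against a copy of the line above, while on a live system you would pipe /proc/mdstat in:

```shell
# Sketch: report every md array whose state is "inactive". The sample
# line is copied from the mdstat output above; on a live system replace
# the printf with: cat /proc/mdstat
mdstat='md127 : inactive sde[4](S) sda[3](S) sdb[2](S) sdd[1](S) sdc[0](S)'

inactive_arrays() {
    # fields are: <name> : <state> <members...>
    awk '$3 == "inactive" { print $1 }'
}

printf '%s\n' "$mdstat" | inactive_arrays   # prints: md127
```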


    root@DRDnas:/var/log# blkid
    /dev/sdf1: UUID="0b62b4c0-43fe-4c15-bbb1-5caefb519a24" TYPE="ext4"
    /dev/sdf5: UUID="dc23d36d-b752-45d0-bc96-bb5d6f25d2c8" TYPE="swap"
    /dev/sdb: TYPE="isw_raid_member"
    /dev/sdc: TYPE="isw_raid_member"
    /dev/sda: TYPE="isw_raid_member"
    /dev/sdd: TYPE="isw_raid_member"
    /dev/sde: TYPE="isw_raid_member"


    root@DRDnas:/var/log# fdisk -l


    Disk /dev/sda: 2000.4 GB, 2000398934016 bytes
    255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 4096 bytes
    I/O size (minimum/optimal): 4096 bytes / 4096 bytes
    Disk identifier: 0x00000000


    Disk /dev/sda doesn't contain a valid partition table


    Disk /dev/sdb: 2000.4 GB, 2000398934016 bytes
    255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 4096 bytes
    I/O size (minimum/optimal): 4096 bytes / 4096 bytes
    Disk identifier: 0x00000000


    Disk /dev/sdb doesn't contain a valid partition table


    Disk /dev/sdd: 2000.4 GB, 2000398934016 bytes
    255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 4096 bytes
    I/O size (minimum/optimal): 4096 bytes / 4096 bytes
    Disk identifier: 0x00000000


    Disk /dev/sdd doesn't contain a valid partition table


    Disk /dev/sdc: 2000.4 GB, 2000398934016 bytes
    255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 4096 bytes
    I/O size (minimum/optimal): 4096 bytes / 4096 bytes
    Disk identifier: 0x00000000


    Disk /dev/sdc doesn't contain a valid partition table


    Disk /dev/sdf: 32.0 GB, 32017047552 bytes
    255 heads, 63 sectors/track, 3892 cylinders, total 62533296 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x00021611


    Device Boot Start End Blocks Id System
    /dev/sdf1 * 2048 59895807 29946880 83 Linux
    /dev/sdf2 59897854 62531583 1316865 5 Extended
    /dev/sdf5 59897856 62531583 1316864 82 Linux swap / Solaris


    Disk /dev/sde: 2000.4 GB, 2000398934016 bytes
    255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 4096 bytes
    I/O size (minimum/optimal): 4096 bytes / 4096 bytes
    Disk identifier: 0x00000000


    Disk /dev/sde doesn't contain a valid partition table




    Any hints on how to resolve this issue and recover my RAID?

    OMV 3.0.59 - Erasmus | 64 bit | 4.8 backport kernel | omvextrasorg 3.4.14
    i7 4790T | 32GB | 5*2TB RAID5 | 3ware 9650SE-12M


  • Hi all,


    just an update on my previous post:
    After manually re-assembling the second RAID (the RAID5 'inside' the IMSM container), a resync ran and everything looked fine:
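The post doesn't show the exact re-assembly command. For an IMSM setup like this one, it is typically done in two steps: assemble the container from its member disks, then start the member array inside it. A hypothetical reconstruction, guarded by a dry-run flag so nothing destructive runs as-is:

```shell
# Sketch only -- the real invocation is not in the post.
DRY_RUN=1
run() { [ "$DRY_RUN" = 1 ] && echo "would run: $*" || "$@"; }

# 1) assemble the IMSM container from its member disks
run mdadm --assemble /dev/md127 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde
# 2) incrementally start the member arrays (here: the RAID5 volume, md126)
run mdadm -I /dev/md127
```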


    root@DRDnas:~# cat /proc/mdstat
    Personalities : [raid6] [raid5] [raid4]
    md126 : active raid5 sdd[4] sdc[3] sdb[2] sda[1] sde[0]
    7814045696 blocks super external:/md127/0 level 5, 128k chunk, algorithm 0 [5/5] [UUUUU]


    md127 : inactive sda[4](S) sde[3](S) sdd[2](S) sdc[1](S) sdb[0](S)
    15765 blocks super external:imsm


    unused devices: <none>


    root@DRDnas:~# mdadm --detail --scan
    ARRAY /dev/md/imsm metadata=imsm UUID=22416989:8dfd5b6d:6e2859e6:44fd36c6
    ARRAY /dev/md/vol0 container=/dev/md/imsm member=0 UUID=ea1916f6:fd4160d6:c822f1b3:c35b9627


    root@DRDnas:~# mdadm --detail /dev/md127
    /dev/md127:
    Version : imsm
    Raid Level : container
    Total Devices : 5


    Working Devices : 5


    UUID : 22416989:8dfd5b6d:6e2859e6:44fd36c6
    Member Arrays : /dev/md/vol0


    Number Major Minor RaidDevice


    0 8 16 - /dev/sdb
    1 8 32 - /dev/sdc
    2 8 48 - /dev/sdd
    3 8 64 - /dev/sde
    4 8 0 - /dev/sda


    root@DRDnas:~# mdadm --detail /dev/md126
    /dev/md126:
    Container : /dev/md/imsm, member 0
    Raid Level : raid5
    Array Size : 7814045696 (7452.05 GiB 8001.58 GB)
    Used Dev Size : 1953511424 (1863.01 GiB 2000.40 GB)
    Raid Devices : 5
    Total Devices : 5


    State : clean
    Active Devices : 5
    Working Devices : 5
    Failed Devices : 0
    Spare Devices : 0


    Layout : left-asymmetric
    Chunk Size : 128K


    UUID : ea1916f6:fd4160d6:c822f1b3:c35b9627
    Number Major Minor RaidDevice State
    4 8 48 0 active sync /dev/sdd
    3 8 32 1 active sync /dev/sdc
    2 8 16 2 active sync /dev/sdb
    1 8 0 3 active sync /dev/sda
    0 8 64 4 active sync /dev/sde


    Unfortunately, after a reboot the RAID5 was gone again and only the container RAID was present and started.
    This time I decided to repeat the same steps the kernel performs during boot, with the following result:
    root@DRDnas:~# mdadm --assemble --scan --verbose --auto=yes --symlink=no
    mdadm: looking for devices for /dev/md/imsm
    mdadm: no RAID superblock on /dev/sdf5
    mdadm: no RAID superblock on /dev/sdf2
    mdadm: no RAID superblock on /dev/sdf1
    mdadm: no RAID superblock on /dev/sdf
    mdadm: /dev/sdc is identified as a member of /dev/md/imsm, slot -1.
    mdadm: /dev/sde is identified as a member of /dev/md/imsm, slot -1.
    mdadm: /dev/sdd is identified as a member of /dev/md/imsm, slot -1.
    mdadm: /dev/sdb is identified as a member of /dev/md/imsm, slot -1.
    mdadm: /dev/sda is identified as a member of /dev/md/imsm, slot -1.
    mdadm: added /dev/sde to /dev/md/imsm as -1
    mdadm: added /dev/sdd to /dev/md/imsm as -1
    mdadm: added /dev/sdb to /dev/md/imsm as -1
    mdadm: added /dev/sda to /dev/md/imsm as -1
    mdadm: added /dev/sdc to /dev/md/imsm as -1
    mdadm: Container /dev/md/imsm has been assembled with 5 drives
    mdadm: looking for devices for /dev/md/vol0
    mdadm: no recogniseable superblock on /dev/sdf5
    mdadm: Cannot assemble mbr metadata on /dev/sdf2
    mdadm: no recogniseable superblock on /dev/sdf1
    mdadm: Cannot assemble mbr metadata on /dev/sdf
    mdadm: /dev/sdc has wrong uuid.
    mdadm: /dev/sde has wrong uuid.
    mdadm: /dev/sdd has wrong uuid.
    mdadm: /dev/sdb has wrong uuid.
    mdadm: /dev/sda has wrong uuid.
    mdadm: looking for devices for /dev/md/vol0
    mdadm: no recogniseable superblock on /dev/sdf5
    mdadm: Cannot assemble mbr metadata on /dev/sdf2
    mdadm: no recogniseable superblock on /dev/sdf1
    mdadm: Cannot assemble mbr metadata on /dev/sdf
    mdadm: /dev/sdc has wrong uuid.
    mdadm: /dev/sde has wrong uuid.
    mdadm: /dev/sdd has wrong uuid.
    mdadm: /dev/sdb has wrong uuid.
    mdadm: /dev/sda has wrong uuid.


    As you can see, the RAID5 can't be started because mdadm claims the member devices have the wrong UUID, even though I haven't changed them (at least not intentionally)!
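One way to confirm such a mismatch is to compare the UUID that the on-disk metadata reports (mdadm --examine --scan) with the one recorded in /etc/mdadm/mdadm.conf. A sketch of the extraction step, run here over a sample ARRAY line from earlier in this thread rather than a live system:

```shell
# Sketch: pull the UUID= field out of a config-style ARRAY line, so the
# on-disk value can be diffed against the one in mdadm.conf.
uuid_of() {
    # $1: array name to match; stdin: ARRAY lines
    awk -v name="$1" '$0 ~ name {
        for (i = 1; i <= NF; i++)
            if ($i ~ /^UUID=/) { sub(/^UUID=/, "", $i); print $i }
    }'
}

conf_uuid=$(uuid_of vol0 <<'EOF'
ARRAY /dev/md/vol0 container=/dev/md/imsm member=0 UUID=ea1916f6:fd4160d6:c822f1b3:c35b9627
EOF
)
echo "$conf_uuid"   # prints: ea1916f6:fd4160d6:c822f1b3:c35b9627
```

If the value from mdadm --examine --scan differs from the one in mdadm.conf, that would suggest the config file still describes an earlier incarnation of the array.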


    Any hints on how this could happen?


  • OK, here's a final update.


    After struggling with the error messages about the wrong UUIDs, I decided to start from scratch, i.e. remove the current RAID and re-initialize everything, and that finally worked.
    Unfortunately I didn't find any hints as to where these wrong UUIDs came from, but it seems they were either assigned unintentionally or were left over from the old RAID from before the fifth hard disk was attached.


    So if anyone is interested, here are the steps I took:
    mdadm --stop /dev/md127
    mdadm --zero-superblock /dev/sda
    mdadm --zero-superblock /dev/sdb
    mdadm --zero-superblock /dev/sdc
    mdadm --zero-superblock /dev/sdd
    mdadm --zero-superblock /dev/sde
    dd if=/dev/zero of=/dev/sda bs=1K count=1024
    dd if=/dev/zero of=/dev/sdb bs=1K count=1024
    dd if=/dev/zero of=/dev/sdc bs=1K count=1024
    dd if=/dev/zero of=/dev/sdd bs=1K count=1024
    dd if=/dev/zero of=/dev/sde bs=1K count=1024
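The per-disk sequence above can be written as a loop. A sketch with a dry-run guard, since both commands irreversibly destroy the RAID metadata and the start of each disk:

```shell
# Loop form of the wipe sequence (dry-run guarded; remove the guard
# only deliberately -- this destroys data).
DRY_RUN=1
run() { [ "$DRY_RUN" = 1 ] && echo "would run: $*" || "$@"; }

for dev in /dev/sd{a,b,c,d,e}; do
    run mdadm --zero-superblock "$dev"
    run dd if=/dev/zero of="$dev" bs=1K count=1024
done
```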


    After all this erasing I followed the steps from the Intel RST guide mentioned a few posts above, and now the container RAID as well as the RAID5 is properly initialized and started during boot:
    [ 1.359004] md: md127 stopped.
    [ 1.360943] md: bind<sdb>
    [ 1.361006] md: bind<sde>
    [ 1.361067] md: bind<sdd>
    [ 1.361132] md: bind<sda>
    [ 1.361202] md: bind<sdc>
    [ 1.580008] md: md126 stopped.
    [ 1.580105] md: bind<sde>
    [ 1.580162] md: bind<sda>
    [ 1.580201] md: bind<sdb>
    [ 1.580242] md: bind<sdc>
    [ 1.580291] md: bind<sdd>
    [ 1.606420] tsc: Refined TSC clocksource calibration: 2693.762 MHz
    [ 1.646411] raid6: sse2x1 11895 MB/s
    [ 1.714381] raid6: sse2x2 15558 MB/s
    [ 1.782358] raid6: sse2x4 17912 MB/s
    [ 1.850322] raid6: avx2x1 24047 MB/s
    [ 1.918308] raid6: avx2x2 27833 MB/s
    [ 1.986271] raid6: avx2x4 31794 MB/s
    [ 1.986272] raid6: using algorithm avx2x4 (31794 MB/s)
    [ 1.986282] raid6: using avx2x2 recovery algorithm
    [ 1.986595] xor: automatically using best checksumming function:
    [ 2.026258] avx : 32975.000 MB/sec
    [ 2.026460] async_tx: api initialized (async)
    [ 2.027390] md: raid6 personality registered for level 6
    [ 2.027392] md: raid5 personality registered for level 5
    [ 2.027393] md: raid4 personality registered for level 4
    [ 2.027556] md/raid:md126: device sdd operational as raid disk 0
    [ 2.027557] md/raid:md126: device sdc operational as raid disk 1
    [ 2.027558] md/raid:md126: device sdb operational as raid disk 2
    [ 2.027559] md/raid:md126: device sda operational as raid disk 3
    [ 2.027559] md/raid:md126: device sde operational as raid disk 4
    [ 2.027790] md/raid:md126: allocated 0kB
    [ 2.027808] md/raid:md126: raid level 5 active with 5 out of 5 devices, algorithm 0
    [ 2.027809] RAID conf printout:
    [ 2.027810] --- level:5 rd:5 wd:5
    [ 2.027810] disk 0, o:1, dev:sdd
    [ 2.027811] disk 1, o:1, dev:sdc
    [ 2.027812] disk 2, o:1, dev:sdb
    [ 2.027812] disk 3, o:1, dev:sda
    [ 2.027813] disk 4, o:1, dev:sde



    root@DRDnas:~# cat /proc/mdstat
    Personalities : [raid6] [raid5] [raid4]
    md126 : active raid5 sde[4] sda[3] sdb[2] sdc[1] sdd[0]
    7814045696 blocks super external:/md127/0 level 5, 128k chunk, algorithm 0 [5/5] [UUUUU]


    md127 : inactive sde[4](S) sdd[3](S) sdc[2](S) sdb[1](S) sda[0](S)
    5525 blocks super external:imsm


    unused devices: <none>


    root@DRDnas:~# mdadm --detail /dev/md126
    /dev/md126:
    Container : /dev/md/imsm, member 0
    Raid Level : raid5
    Array Size : 7814045696 (7452.05 GiB 8001.58 GB)
    Used Dev Size : 1953511424 (1863.01 GiB 2000.40 GB)
    Raid Devices : 5
    Total Devices : 5


    State : clean
    Active Devices : 5
    Working Devices : 5
    Failed Devices : 0
    Spare Devices : 0


    Layout : left-asymmetric
    Chunk Size : 128K


    UUID : dc817f82:c362fad6:8f95e2d5:6ad21d2d
    Number Major Minor RaidDevice State
    0 8 48 0 active sync /dev/sdd
    1 8 32 1 active sync /dev/sdc
    2 8 16 2 active sync /dev/sdb
    3 8 0 3 active sync /dev/sda
    4 8 64 4 active sync /dev/sde

