Missing RAID array, LVM, FS after hard reset.

  • Need some help.
    My system has a RAID 5 array made up of 4x 750 GB SATA disks; on top of that I am using LVM with 3 LVs.
    While building a second RAID 10 array for some other disks, my server froze to the point that it was completely unresponsive over the network and on the console, and I had to do a power-button reset. Afterwards my primary RAID 5 array was missing. I have managed to manually re-assemble it with mdadm, and then re-scan LVM and get it to see the LVM volumes, but I haven't yet gotten it to recognize the file systems on there and re-mount them. After a reboot it goes back to the way it was.


    mdadm gives me the error: ARRAY line /dev/md/750Array has no identity information
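
    (Editor's note: this message usually means the ARRAY entry for the array in /etc/mdadm/mdadm.conf names the device but carries no identifying fields such as a UUID, so mdadm cannot match it against the member disks at boot. A well-formed entry, using the Array UUID reported by --examine below, would look something like the following sketch; verify every field against your own --examine output before using it:

    ARRAY /dev/md/750Array metadata=1.2 UUID=5476786c:af745935:5e3b3acb:7a5f449a name=fileyx64:750Array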


    Output of mdadm --examine:
    root@fileyx64:~# mdadm --examine /dev/sde
    mdadm: ARRAY line /dev/md/750Array has no identity information.
    /dev/sde:
    Magic : a92b4efc
    Version : 1.2
    Feature Map : 0x0
    Array UUID : 5476786c:af745935:5e3b3acb:7a5f449a
    Name : fileyx64:750Array (local to host fileyx64)
    Creation Time : Fri Aug 23 23:40:27 2013
    Raid Level : raid5
    Raid Devices : 4


    Avail Dev Size : 1465147120 (698.64 GiB 750.16 GB)
    Array Size : 4395439104 (2095.91 GiB 2250.46 GB)
    Used Dev Size : 1465146368 (698.64 GiB 750.15 GB)
    Data Offset : 2048 sectors
    Super Offset : 8 sectors
    State : clean
    Device UUID : dcb1ac32:c03e527e:cb52ef97:cda1ae68


    Update Time : Sun Dec 22 10:52:29 2013
    Checksum : e0df60fa - correct
    Events : 136


    Layout : left-symmetric
    Chunk Size : 512K


    Device Role : Active device 0
    Array State : AAAA ('A' == active, '.' == missing)
    root@fileyx64:~# mdadm --examine /dev/sdf
    mdadm: ARRAY line /dev/md/750Array has no identity information.
    /dev/sdf:
    Magic : a92b4efc
    Version : 1.2
    Feature Map : 0x0
    Array UUID : 5476786c:af745935:5e3b3acb:7a5f449a
    Name : fileyx64:750Array (local to host fileyx64)
    Creation Time : Fri Aug 23 23:40:27 2013
    Raid Level : raid5
    Raid Devices : 4


    Avail Dev Size : 1465147120 (698.64 GiB 750.16 GB)
    Array Size : 4395439104 (2095.91 GiB 2250.46 GB)
    Used Dev Size : 1465146368 (698.64 GiB 750.15 GB)
    Data Offset : 2048 sectors
    Super Offset : 8 sectors
    State : clean
    Device UUID : 2df1fa19:f5dbc8d9:ca87bd0e:8a20caf2


    Update Time : Sun Dec 22 10:52:29 2013
    Checksum : 248df13e - correct
    Events : 136


    Layout : left-symmetric
    Chunk Size : 512K


    Device Role : Active device 1
    Array State : AAAA ('A' == active, '.' == missing)
    root@fileyx64:~# mdadm --examine /dev/sdh
    mdadm: ARRAY line /dev/md/750Array has no identity information.
    /dev/sdh:
    Magic : a92b4efc
    Version : 1.2
    Feature Map : 0x0
    Array UUID : 5476786c:af745935:5e3b3acb:7a5f449a
    Name : fileyx64:750Array (local to host fileyx64)
    Creation Time : Fri Aug 23 23:40:27 2013
    Raid Level : raid5
    Raid Devices : 4


    Avail Dev Size : 1465147120 (698.64 GiB 750.16 GB)
    Array Size : 4395439104 (2095.91 GiB 2250.46 GB)
    Used Dev Size : 1465146368 (698.64 GiB 750.15 GB)
    Data Offset : 2048 sectors
    Super Offset : 8 sectors
    State : clean
    Device UUID : ae5fcd60:22936c95:fb2c4be7:b4b1f56e


    Update Time : Sun Dec 22 10:52:29 2013
    Checksum : 7bbd4d48 - correct
    Events : 136


    Layout : left-symmetric
    Chunk Size : 512K


    Device Role : Active device 2
    Array State : AAAA ('A' == active, '.' == missing)
    root@fileyx64:~# mdadm --examine /dev/sdi
    mdadm: ARRAY line /dev/md/750Array has no identity information.
    /dev/sdi:
    Magic : a92b4efc
    Version : 1.2
    Feature Map : 0x0
    Array UUID : 5476786c:af745935:5e3b3acb:7a5f449a
    Name : fileyx64:750Array (local to host fileyx64)
    Creation Time : Fri Aug 23 23:40:27 2013
    Raid Level : raid5
    Raid Devices : 4


    Avail Dev Size : 1465147120 (698.64 GiB 750.16 GB)
    Array Size : 4395439104 (2095.91 GiB 2250.46 GB)
    Used Dev Size : 1465146368 (698.64 GiB 750.15 GB)
    Data Offset : 2048 sectors
    Super Offset : 8 sectors
    State : clean
    Device UUID : e2c2b9f6:d944a4b5:e3d733f0:a43ca6ed


    Update Time : Sun Dec 22 10:52:29 2013
    Checksum : b97a980d - correct
    Events : 136


    Layout : left-symmetric
    Chunk Size : 512K


    Device Role : Active device 3
    Array State : AAAA ('A' == active, '.' == missing)

  • I am able to force the array to assemble and it completes successfully; however, it disappears again on reboot.


    root@fileyx64:~# mdadm --verbose --assemble --force /dev/md/750Array /dev/sde /dev/sdf /dev/sdh /dev/sdi
    mdadm: ARRAY line /dev/md/750Array has no identity information.
    mdadm: looking for devices for /dev/md/750Array
    mdadm: /dev/sde is identified as a member of /dev/md/750Array, slot 0.
    mdadm: /dev/sdf is identified as a member of /dev/md/750Array, slot 1.
    mdadm: /dev/sdh is identified as a member of /dev/md/750Array, slot 2.
    mdadm: /dev/sdi is identified as a member of /dev/md/750Array, slot 3.
    mdadm: added /dev/sdf to /dev/md/750Array as 1
    mdadm: added /dev/sdh to /dev/md/750Array as 2
    mdadm: added /dev/sdi to /dev/md/750Array as 3
    mdadm: added /dev/sde to /dev/md/750Array as 0
    mdadm: /dev/md/750Array has been started with 4 drives.
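
    (Editor's note: after a forced assembly like this, it is worth confirming the kernel's view of the array before touching LVM. A read-only diagnostic sketch, using the array name from the output above:

    root@fileyx64:~# cat /proc/mdstat
    root@fileyx64:~# mdadm --detail /dev/md/750Array

    The --detail output also shows which /dev/mdX number the named array actually landed on, which explains the /dev/md127 seen in the pvscan output below.)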


    After running the above, the array shows up in the OMV RAID Management page.
    root@fileyx64:~# pvscan
    PV /dev/md127 VG vg1 lvm2 [2.05 TiB / 556.15 GiB free]
    PV /dev/md8 VG hitachiVG lvm2 [745.22 GiB / 0 free]
    Total: 2 [2.77 TiB] / in use: 2 [2.77 TiB] / in no VG: 0 [0 ]
    root@fileyx64:~# vgscan
    Reading all physical volumes. This may take a while...
    Found volume group "vg1" using metadata type lvm2
    Found volume group "hitachiVG" using metadata type lvm2
    root@fileyx64:~# lvscan
    inactive '/dev/vg1/nas' [1.01 TiB] inherit
    inactive '/dev/vg1/own' [503.55 GiB] inherit
    inactive '/dev/vg1/test' [4.25 GiB] inherit
    ACTIVE '/dev/hitachiVG/hitachiLV' [745.22 GiB] inherit


    Once I have the array back I am able to re-scan LVM and get the volume groups back (same pvscan/vgscan/lvscan output as above).
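
    (Editor's note: the lvscan output shows the three vg1 LVs as inactive, and an inactive LV has no device node to mount. A sketch of activating them, assuming the volume group name vg1 from the scan output, run as root:

    root@fileyx64:~# vgchange -ay vg1
    root@fileyx64:~# lvscan

    After activation the LVs should show as ACTIVE and their /dev/vg1/* nodes should exist, at which point they can be mounted.)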


    The part I have not figured out yet is how to get the file systems recognized and remounted again, and why the array configuration gets wiped out on reboot. I have not yet tried manually mounting them. Any help someone can provide is appreciated. I'll grab any info you need to help out.
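
    (Editor's note: on Debian-based systems such as OMV, the usual reason an assembled array "disappears" on reboot is that its definition is missing from /etc/mdadm/mdadm.conf and from the initramfs. A sketch of persisting it once the array is assembled; back up mdadm.conf first, since --detail --scan appends entries for all currently running arrays:

    root@fileyx64:~# mdadm --detail --scan >> /etc/mdadm/mdadm.conf
    root@fileyx64:~# update-initramfs -u

    Once the VG is activated, the filesystems can be mounted by hand, e.g. mount /dev/vg1/nas /mnt/nas with a hypothetical mount point, and then re-added to /etc/fstab or via the OMV web UI so they survive a reboot.)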

  • Move it to the right subforum.


    Greetings
    David


  • chente

    has closed the thread.
