Rebooted and raid disappeared

  • I was backing up when my system randomly rebooted. After the reboot I couldn't access my files. I checked OMV and my RAID is gone; one HDD is also missing from the drive list in OMV. This is what blkid gives me:


    blkid

    /dev/sdb: UUID="cf2c9347-55c5-45b7-71a4-b1ee090e62bb" UUID_SUB="33b84d96-29b2-400e-15d1-05125a50321b" LABEL="openmediavault:0" TYPE="linux_raid_member"

    /dev/sdc: UUID="cf2c9347-55c5-45b7-71a4-b1ee090e62bb" UUID_SUB="5105237a-4f94-d090-95ba-4ab113ee894f" LABEL="openmediavault:0" TYPE="linux_raid_member"

    /dev/sdd: UUID="cf2c9347-55c5-45b7-71a4-b1ee090e62bb" UUID_SUB="0bcbdb06-4bb8-94db-868b-d887504b5e3d" LABEL="openmediavault:0" TYPE="linux_raid_member"

    /dev/sda: UUID="cf2c9347-55c5-45b7-71a4-b1ee090e62bb" UUID_SUB="c90ec44d-fdd5-404c-4ed4-7794e3453a6b" LABEL="openmediavault:0" TYPE="linux_raid_member"

    /dev/sdf: UUID="cf2c9347-55c5-45b7-71a4-b1ee090e62bb" UUID_SUB="8c351a8b-64cc-9c56-41c0-660a23eee930" LABEL="openmediavault:0" TYPE="linux_raid_member"

    /dev/sdg: UUID="cf2c9347-55c5-45b7-71a4-b1ee090e62bb" UUID_SUB="b528e4da-b283-bb18-56a2-707d361194a1" LABEL="openmediavault:0" TYPE="linux_raid_member"

    /dev/sde: UUID="cf2c9347-55c5-45b7-71a4-b1ee090e62bb" UUID_SUB="4b93f915-4258-1b24-a9e9-0beffeff2835" LABEL="openmediavault:0" TYPE="linux_raid_member"

    /dev/sdi: UUID="cf2c9347-55c5-45b7-71a4-b1ee090e62bb" UUID_SUB="3bab9df0-3a53-41a7-1207-8972eb0169a2" LABEL="openmediavault:0" TYPE="linux_raid_member"

    /dev/sdh: UUID="cf2c9347-55c5-45b7-71a4-b1ee090e62bb" UUID_SUB="426106c7-9bfc-6657-64ab-888583638801" LABEL="openmediavault:0" TYPE="linux_raid_member"

    /dev/sdj1: UUID="853756ed-d88f-4c62-a271-7c5c5e8c43c0" TYPE="ext4" PTTYPE="dos" PARTUUID="bd16fb70-01"

    /dev/sdj5: UUID="3da6b5c7-f165-48cd-a6df-dd632b06286e" TYPE="swap" PARTUUID="bd16fb70-05"


    Looking for any and all help just to get back in and recover my data.


    Edit: I reseated everything and the missing HDD appeared, but then I got the following error: Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C.UTF-8; blockdev --getsize64 '/dev/sdk' 2>&1' with exit code '1': blockdev: cannot open /dev/sdk: No such device or address

    • Official Post

    I noticed your post late last night as I was shutting down; I see this morning you have created another thread.


    More information is required:

    cat /proc/mdstat

    mdadm --detail /dev/mdX

    where X is the raid reference, e.g. md0 or md127.


    Some hardware information might be useful as well, and the question: do you have a backup?

  • Sorry for the second post. I thought my issue would be more visible here, since it's a new RAID issue and not related (I don't think) to the space issue. Even more confusion: I found the "bad" drive. It has sound and movement, and lsblk now shows all my disks, so I don't think the disk is bad. The 10 data drives and the 1 install drive are there, but no RAID.


    root@openmediavault:~# lsblk

    NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT

    sda 8:0 0 9.1T 0 disk

    sdb 8:16 0 9.1T 0 disk

    sdc 8:32 0 9.1T 0 disk

    sdd 8:48 0 9.1T 0 disk

    sde 8:64 0 9.1T 0 disk

    sdf 8:80 0 9.1T 0 disk

    sdg 8:96 0 9.1T 0 disk

    sdh 8:112 0 9.1T 0 disk

    sdi 8:128 0 9.1T 0 disk

    sdj 8:144 0 9.1T 0 disk

    sdk 8:160 0 223.6G 0 disk

    ├─sdk1 8:161 0 202.1G 0 part /

    ├─sdk2 8:162 0 1K 0 part

    └─sdk5 8:165 0 21.5G 0 part





    When I start OMV it gives me tons of errors from /dev/sdk, which is my install drive. It seems that if I have both my install SSD and the "bad" HDD connected, I get the error message I posted above. If I unplug the "bad" drive I don't get that error.


    cat /proc/mdstat   

    [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]

    md0 : inactive sdi[6] sdg[2] sdf[5] sdc[1] sdj[9] sdh[7] sda[8] sdb[4] sde[0]

    87896752128 blocks super 1.2


    root@openmediavault:~# mdadm --detail /dev/mdraid1

    mdadm: cannot open /dev/mdraid1: No such file or directory


    root@openmediavault:~# mdadm --detail /dev/md[raid1]

    mdadm: cannot open /dev/md[raid1]: No such file or directory


    Not sure if I ran that correctly, so I tried it both ways.


    My backup had an accident and was fried, which is why I was redoing the backup when this occurred. I'm just hoping there's a way to get back in and save my data. Thank you.

  • According to the output of mdstat it should be mdadm --detail /dev/md0

    root@openmediavault:~# mdadm --detail /dev/md0

    /dev/md0:

    Version : 1.2

    Creation Time : Mon Apr 8 22:55:52 2019

    Raid Level : raid5

    Used Dev Size : 18446744073709551615

    Raid Devices : 10

    Total Devices : 9

    Persistence : Superblock is persistent


    Update Time : Tue Apr 13 09:42:41 2021

    State : active, FAILED, Not Started

    Active Devices : 9

    Working Devices : 9

    Failed Devices : 0

    Spare Devices : 0


    Layout : left-symmetric

    Chunk Size : 512K


    Consistency Policy : unknown


    Name : openmediavault:0

    UUID : cf2c9347:55c545b7:71a4b1ee:090e62bb

    Events : 6937212


    Number Major Minor RaidDevice State

    - 0 0 0 removed

    - 0 0 1 removed

    - 0 0 2 removed

    - 0 0 3 removed

    - 0 0 4 removed

    - 0 0 5 removed

    - 0 0 6 removed

    - 0 0 7 removed

    - 0 0 8 removed

    - 0 0 9 removed


    - 8 64 0 sync /dev/sde

    - 8 32 1 sync /dev/sdc

    - 8 0 8 sync /dev/sda

    - 8 144 9 sync /dev/sdj

    - 8 112 7 sync /dev/sdh

    - 8 80 5 sync /dev/sdf

    - 8 16 4 sync /dev/sdb

    - 8 128 6 sync /dev/sdi

    - 8 96 2 sync /dev/sdg


    Thank you

    • Official Post

    Well, according to that, all the drives have been removed.


    What's the output of mdadm --detail /dev/sda


    With the output, copy it from the terminal and paste it into </> on the menu; it's easier to read.

  • <root@openmediavault:~# mdadm --detail /dev/sda

    mdadm: /dev/sda does not appear to be an md device>


    I don't understand how this says they are removed when blkid shows them all. Is this due to the error I get from within the OMV web UI?


    <Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C.UTF-8; blockdev --getsize64 '/dev/sdd' 2>&1' with exit code '1': blockdev: cannot open /dev/sdd: No such device or address>


    <

    Error #0:
    OMV\ExecException: Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C.UTF-8; blockdev --getsize64 '/dev/sdd' 2>&1' with exit code '1': blockdev: cannot open /dev/sdd: No such device or address in /usr/share/php/openmediavault/system/process.inc:182
    Stack trace:
    #0 /usr/share/php/openmediavault/system/blockdevice.inc(258): OMV\System\Process->execute(Array)
    #1 /usr/share/openmediavault/engined/rpc/diskmgmt.inc(87): OMV\System\BlockDevice->getSize()
    #2 [internal function]: Engined\Rpc\DiskMgmt->enumerateDevices(NULL, Array)
    #3 /usr/share/php/openmediavault/rpc/serviceabstract.inc(123): call_user_func_array(Array, Array)
    #4 /usr/share/openmediavault/engined/rpc/diskmgmt.inc(122): OMV\Rpc\ServiceAbstract->callMethod('enumerateDevice...', NULL, Array)
    #5 [internal function]: Engined\Rpc\DiskMgmt->getList(Array, Array)
    #6 /usr/share/php/openmediavault/rpc/serviceabstract.inc(123): call_user_func_array(Array, Array)
    #7 /usr/share/php/openmediavault/rpc/serviceabstract.inc(149): OMV\Rpc\ServiceAbstract->callMethod('getList', Array, Array)
    #8 /usr/share/php/openmediavault/rpc/serviceabstract.inc(588): OMV\Rpc\ServiceAbstract->OMV\Rpc\{closure}('/tmp/bgstatusPi...', '/tmp/bgoutputZF...')
    #9 /usr/share/php/openmediavault/rpc/serviceabstract.inc(159): OMV\Rpc\ServiceAbstract->execBgProc(Object(Closure))
    #10 /usr/share/openmediavault/engined/rpc/diskmgmt.inc(149): OMV\Rpc\ServiceAbstract->callMethodBg('getList', Array, Array)
    #11 [internal function]: Engined\Rpc\DiskMgmt->getListBg(Array, Array)
    #12 /usr/share/php/openmediavault/rpc/serviceabstract.inc(123): call_user_func_array(Array, Array)
    #13 /usr/share/php/openmediavault/rpc/rpc.inc(86): OMV\Rpc\ServiceAbstract->callMethod('getListBg', Array, Array)
    #14 /usr/sbin/omv-engined(537): OMV\Rpc\Rpc::call('DiskMgmt', 'getListBg', Array, Array, 1)
    #15 {main}
    >


    With that error I can't do anything from within the web UI. I can't even scan my disks, because once I hit scan that error immediately pops up. The only thing I can do is access Portainer.

    • Official Post

    I don't understand how this says they are removed but blkid

    blkid is showing the block device ID and the information about that device.

    mdadm: /dev/sda does not appear to be an md device

    this is more concerning; it would suggest that the drive does not contain an mdadm/RAID signature. You could run this on all the drives, but they would probably come out the same.
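    For reference, a hypothetical sketch of checking every member disk in one pass; note that mdadm --examine reads the on-disk superblock of a member device, whereas --detail only queries an already-assembled array:

    ```shell
    # Hedged sketch: look for an md superblock on each suspected member disk.
    # The /dev/sd[a-j] glob matches this poster's ten data disks; adjust as needed.
    for d in /dev/sd[a-j]; do
      [ -b "$d" ] || continue                 # skip anything that is not a block device
      echo "== $d =="
      mdadm --examine "$d" 2>/dev/null | grep -E 'Array UUID|Events|Device Role' \
        || echo "   no md superblock found on $d"
    done
    ```

    Comparing the Events counters across members is useful too: members that fell out of the array earlier show lower counts.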


    Two options:

    1) Boot the system with a SystemRescue CD, then check the degraded-array section in the sticky for the commands to run.

    2) Disconnect all the data drives, do a clean install of OMV on another drive and update it, shut down, reconnect the data drives; if RAID information is detected on those drives, the system will bring up the array.
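    As a companion to option 1, a hedged sketch of what a reassembly attempt from a rescue shell typically looks like (assuming the array is /dev/md0, as /proc/mdstat showed earlier; follow the sticky's exact instructions before forcing anything):

    ```shell
    # Hedged sketch of an mdadm reassembly attempt from a rescue shell.
    if command -v mdadm >/dev/null 2>&1; then
      mdadm --stop /dev/md0 2>/dev/null || true   # release the inactive array, if present
      mdadm --assemble --scan --verbose || true   # scan superblocks and assemble what is found
      # A degraded RAID5 missing one member can usually still be started, e.g.:
      #   mdadm --assemble --run /dev/md0 /dev/sd[a-i]
      cat /proc/mdstat 2>/dev/null || true        # confirm the array state afterwards
    else
      echo "mdadm is not available in this environment"
    fi
    ```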

    Is this due to the error I get from within OMV webui

    When does that error appear? Also, it refers to one drive, /dev/sdd; can you identify that drive and unplug it?
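    One hypothetical way to work out which physical disk /dev/sdd is, so the right cable can be pulled: the persistent symlinks under /dev/disk/by-id encode the model and serial number printed on the drive label.

    ```shell
    # Hedged sketch: map kernel names like sdd to model/serial symlinks.
    if [ -d /dev/disk/by-id ]; then
      # whole-disk links only; "-part" entries are partitions
      ls -l /dev/disk/by-id/ | grep -v -- '-part' || true
    else
      echo "/dev/disk/by-id is not present in this environment"
    fi
    # smartmontools, if installed, prints the serial directly:
    #   smartctl -i /dev/sdd | grep -i 'Serial Number'
    ```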


    As a footnote, this is the output from mdadm --detail on an array I just created on a VM:

  • How could the Raid just vanish?


    That error appears whenever I go into the OMV web UI. The SSD is supposed to be my install drive. If I unplug the drive that I thought went "bad", this error doesn't appear and I can get into the web UI. It shows 9 HDDs and my 1 install SSD.



    Here is how it looks with the "bad" drive disconnected.

    • Official Post

    The SSD is supposed to be my install drive

    It's not; /dev/sdk is your system drive, and that SSD has shared folders on it.

    How could the Raid just vanish

    I don't know; I've never seen this before.


    Is there any output on the Raid Management screen?

  • It's not; /dev/sdk is your system drive, and that SSD has shared folders on it.

    I don't know; I've never seen this before.


    Is there any output on the Raid Management screen?

    Nothing on the Raid Management screen.


    None of this makes sense. When I installed two years ago, everything was unplugged except the SSD. The only thing I did before the backup was erase old images in Portainer, since I was getting the login-loop issue from a full OS drive. I made sure the old images weren't used anymore.


    Edit: can't get in with systemrescuecd. After I chose systemrescuecd it says

    Error: unknown file system

    Error: you need to load the kernel first

    • Official Post

    Edit: can't get in with systemrescuecd. After I chose systemrescuecd it says

    Error: unknown file system

    Error: you need to load the kernel first

    Is that from OMV-Extras? I've only tested this once and it just worked; this could be pointing to some sort of corruption of the system drive.


    You can download a systemrescuecd and create a CD or USB boot device

  • As in, the output is still the same as you posted? What about mdadm --detail /dev/md0? If a raid is fixable, it can be done using systemrescuecd.

    I give up. I'm about to toss this out the window. I tried a reinstall; numerous omv-firstaid network configurations later and still no IP.


    Edit: After putting that command in systemrescue, it says cannot open /dev/md0


    Thanks for trying
