Power loss and system drive will not boot

      After a power failure on my OMV 1.9 system, it would no longer boot from the system disk. I removed all disks, installed a new system disk, and reinstalled from the original ISO. After installing I shut down the system, reattached the four 4TB data drives, and rebooted. Now I see the four drives but not the RAID. I have recovered the config.xml file from the old system drive. Here are the current dumps from the system.
      Two years ago I created a Clonezilla backup, but I cannot find it at the moment.
      Is there any path for recovery of my RAID?
      The four 4TB drives were originally set up as a RAID5.

      root@openmediavault:~# cat /proc/mdstat
      Personalities : [raid6] [raid5] [raid4]
      unused devices: <none>
      root@openmediavault:~# blkid
      /dev/sdd: UUID="bb43bcaf-acfb-d176-4113-2760a1f0006c" UUID_SUB="f5596406-7ef4-07ed-46b1-24b8278210f9" LABEL="OMV:Main" TYPE="linux_raid_member"
      /dev/sdc: UUID="bb43bcaf-acfb-d176-4113-2760a1f0006c" UUID_SUB="c657d56f-dd20-2543-84c8-483f57d15c04" LABEL="OMV:Main" TYPE="linux_raid_member"
      /dev/sda1: UUID="aa24a158-75c2-4c91-9688-0f55d773dd72" TYPE="ext4"
      /dev/sda5: UUID="9d1fd968-751a-410d-b2c6-8a95a2f2ea0a" TYPE="swap"
      root@openmediavault:~# fdisk -l | grep "Disk "
      Disk /dev/sdd doesn't contain a valid partition table

      WARNING: GPT (GUID Partition Table) detected on '/dev/sdb'! The util fdisk doesn't support GPT. Use GNU Parted.

      Disk /dev/sdc doesn't contain a valid partition table
      Disk /dev/sda: 120.0 GB, 120034123776 bytes
      Disk identifier: 0x00094716
      Disk /dev/sdd: 4000.8 GB, 4000787030016 bytes
      Disk identifier: 0x00000000
      Disk /dev/sdb: 4000.8 GB, 4000787030016 bytes
      Disk identifier: 0x00000000
      Disk /dev/sdc: 4000.8 GB, 4000787030016 bytes
      Disk identifier: 0x00000000
      root@openmediavault:~# cat /etc/mdadm/mdadm.conf
      # mdadm.conf
      #
      # Please refer to mdadm.conf(5) for information about this file.
      #

      # by default, scan all partitions (/proc/partitions) for MD superblocks.
      # alternatively, specify devices to scan, using wildcards if desired.
      # Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
      # To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
      # used if no RAID devices are configured.
      DEVICE partitions

      # auto-create devices with Debian standard permissions
      CREATE owner=root group=disk mode=0660 auto=yes

      # automatically tag new arrays as belonging to the local system
      HOMEHOST <system>

      # definitions of existing MD arrays
      root@openmediavault:~# mdadm --detail --scan --verbose
      root@openmediavault:~#

      ASRock X99 Extreme4
      Intel Core i7-5820K Haswell 3.3 GHz
      Crucial Ballistix Sport 8GB 2 x 4GB DDR4-2400 PC4-19200 CL16 Dual Channel
      4 WD Red Network 4TB Intellipower SATA III 6Gb/s
      1 Seagate 2TB SATA
      ASUS Radeon HD 6450
      EVGA SuperNOVA 750 Watt 80+ Gold
      Cooler Master Hyper 212 EVO Universal CPU Cooler
      Intel Core i7-5820K Haswell 3.3 GHz - ASRock X99 Extreme4 - 8GB RAM - 4x4TB WD RED in RAID5 - 2TB Seagate 7200
      OMV Release: 1.9 Kralizec
    • Re,

      there is something seriously wrong with your "4 (four!) drives", because:
      - blkid shows only 2 (two!) auto-detected RAID members
      - and fdisk shows 3 (three!) 4TB drives, but one of them with a GPT ...

      linndoug wrote:

      Is there any path for recovery of my RAID?
      Sorry, it doesn't seem so, because one drive is missing for the recovery ... sde is completely missing! (If you can manage to reconnect this drive to your box, you may get a better chance.) Drive sdb seems to be damaged ... (a quick superblock check is sketched below)
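      To double-check which disks still carry an md superblock, you could run the following (a minimal diagnostic; it assumes nothing beyond mdadm itself, which every OMV install ships):

      mdadm --examine /dev/sdb /dev/sdc /dev/sdd /dev/sde

      Every surviving member should report the same Array UUID plus its own device role; a disk without a superblock answers "No md superblock detected".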

      Sc0rp

      EDIT: please close your other thread (and copy your post over)!
    • It appears that drive SDE is part of SDB.
      Can that be taken off and made SDE?


      Another command output.
      root@openmediavault:~# lsblk
      NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
      sda 8:0 0 111.8G 0 disk
      ├─sda1 8:1 0 107.2G 0 part /
      ├─sda2 8:2 0 1K 0 part
      └─sda5 8:5 0 4.6G 0 part [SWAP]
      sdd 8:48 0 3.7T 0 disk
      sdb 8:16 0 3.7T 0 disk
      └─sdb1 8:17 0 3.7T 0 part
      sdc 8:32 0 3.7T 0 disk
      sr0 11:0 1 1024M 0 rom
      Intel Core i7-5820K Haswell 3.3 GHz - ASRock X99 Extreme4 - 8GB RAM - 4x4TB WD RED in RAID5 - 2TB Seagate 7200
      OMV Release: 1.9 Kralizec
    • Re,

      linndoug wrote:

      It appears that drive SDE is part of SDB.
      Where did you see that? That cannot happen under Linux ;)

      lsblk shows your setup correctly (as the kernel detects it) - and here, too, sde is completely missing.

      Just check the cabling (incl. power) of this drive - since sdb shows wrong partitioning, sde is your only hope to get your array back ... (on RAID5 you need 3 of the 4 drives to recover). A quick kernel-log check is sketched below.
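      If the drive still doesn't appear after reseating the cables, the kernel log will tell you whether it was detected at all. A minimal check (standard Linux tooling, nothing OMV-specific assumed):

      dmesg | grep -iE 'ata[0-9]|sd[a-e]'

      A healthy disk shows up with its model string and an "Attached SCSI disk" line; link errors or a missing entry point to cabling, power, or the drive itself.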

      Sc0rp
      Yes, you were correct. I have rebooted and now see sde.

      root@openmediavault:~# blkid
      /dev/sdd: UUID="bb43bcaf-acfb-d176-4113-2760a1f0006c" UUID_SUB="f5596406-7ef4-07ed-46b1-24b8278210f9" LABEL="OMV:Main" TYPE="linux_raid_member"
      /dev/sdc: UUID="bb43bcaf-acfb-d176-4113-2760a1f0006c" UUID_SUB="c657d56f-dd20-2543-84c8-483f57d15c04" LABEL="OMV:Main" TYPE="linux_raid_member"
      /dev/sda1: UUID="aa24a158-75c2-4c91-9688-0f55d773dd72" TYPE="ext4"
      /dev/sda5: UUID="9d1fd968-751a-410d-b2c6-8a95a2f2ea0a" TYPE="swap"
      /dev/sde: UUID="bb43bcaf-acfb-d176-4113-2760a1f0006c" UUID_SUB="bae65123-0c97-1f60-b0c4-6ad3ccd70895" LABEL="OMV:Main" TYPE="linux_raid_member"

      I can get some files off the old system drive.
      What is the best way to recover this RAID and restore my system?
      Should I upgrade to OpenMediaVault 3 first?

      Raid status attached. fdisk attached.

      Thanks!
      Files
      • RAID Status.txt (2.8 kB)
      • FDISK.txt (2.25 kB)
      Intel Core i7-5820K Haswell 3.3 GHz - ASRock X99 Extreme4 - 8GB RAM - 4x4TB WD RED in RAID5 - 2TB Seagate 7200
      OMV Release: 1.9 Kralizec

    • Re,

      in your second blkid output sde is available, but sdb is missing ??? What is going on with your box? fdisk shows sdb ... very curious.

      linndoug wrote:

      What is the best way to recover this RAID and restore my system?
      What you can do:
      - assemble the remaining 3 drives into a degraded array -> and back up your data (read-only mode preferred)
      mdadm --assemble --readonly /dev/md0 /dev/sdc /dev/sde /dev/sdf (change md0 to md127 if you want; readonly may not work)
      You can then escalate the command:
      mdadm --assemble --run /dev/md0 /dev/sdc /dev/sde /dev/sdf
      mdadm --assemble --run --force /dev/md0 /dev/sdc /dev/sde /dev/sdf

      Rebuilding:
      - sdb has to be zeroed, since it has wrong partition info
      dd if=/dev/zero of=/dev/sdb bs=4096 count=16
      - then reassemble the array with the 3 remaining disks
      (see above)
      - add sdb again (as a spare drive, but it will be used immediately and the rebuild will start)
      mdadm --add /dev/md0 /dev/sdb (tune md0 to md127, as reported by cat /proc/mdstat!)
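      One caveat with the dd step: it only zeroes the first 64 KiB, i.e. the protective MBR and the primary GPT header, while GPT keeps a backup header at the end of the disk. A more thorough wipe, assuming sgdisk (gdisk package) or wipefs (util-linux) is available on your box, would be one of:

      sgdisk --zap-all /dev/sdb
      wipefs -a /dev/sdb

      sgdisk --zap-all removes both the primary and the backup GPT structures in one go.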

      If you get errors, stop and post them!
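      Once the array is assembled (or rebuilding), progress can be watched with the commands already used in this thread:

      cat /proc/mdstat
      mdadm --detail /dev/md127

      During a resync, /proc/mdstat shows a progress bar, the current speed, and an estimated finish time.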

      linndoug wrote:

      Should I upgrade to OpenMediaVault 3 first?
      Nope, finish the RAID rebuild first, then you may upgrade ...

      Sc0rp
      Remember that I put in a new system drive and installed the original OMV release, 1.9 Kralizec.

      I'm assuming that you meant -

      mdadm --assemble --readonly /dev/md0 /dev/sdc /dev/sdd /dev/sde
      or
      mdadm --assemble --readonly /dev/md127 /dev/sdc /dev/sdd /dev/sde
      as my good raid drives are sdc, sdd, sde

      The --readonly option failed. The command

      mdadm --assemble --readonly /dev/md127 /dev/sdc /dev/sdd /dev/sde
      failed with the following error:
      mdadm: option --readonly not valid in assemble mode

      mdadm --assemble /dev/md0 /dev/sdc /dev/sdd /dev/sde
      failed with the following errors:
      mdadm: /dev/sdc is busy - skipping
      mdadm: /dev/sdd is busy - skipping
      mdadm: /dev/sde is busy - skipping

      mdadm --assemble --readonly /dev/md0 /dev/sdc /dev/sdd /dev/sde
      failed with the same busy errors.

      After the command
      dd if=/dev/zero of=/dev/sdb bs=4096 count=16
      16+0 records in
      16+0 records out
      65536 bytes (66 kB) copied, 0.751309 s, 87.2 kB/s

      lsblk
      NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
      sda 8:0 0 111.8G 0 disk
      ├─sda1 8:1 0 107.2G 0 part /
      ├─sda2 8:2 0 1K 0 part
      └─sda5 8:5 0 4.6G 0 part [SWAP]
      sdb 8:16 0 3.7T 0 disk
      └─sdb1 8:17 0 3.7T 0 part
      sdc 8:32 0 3.7T 0 disk
      sdd 8:48 0 3.7T 0 disk
      sde 8:64 0 3.7T 0 disk
      sr0 11:0 1 1024M 0 rom

      Drive SDB still looks the same.
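      Possibly the kernel just hasn't re-read the partition table after the wipe. A check that should be safe (blockdev is part of util-linux, so it should already be installed):

      blockdev --rereadpt /dev/sdb

      That forces a re-read, after which sdb1 should drop out of lsblk.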

      Now what?
      Thanks
      Intel Core i7-5820K Haswell 3.3 GHz - ASRock X99 Extreme4 - 8GB RAM - 4x4TB WD RED in RAID5 - 2TB Seagate 7200
      OMV Release: 1.9 Kralizec
    • See previous post.

      I found the command to stop md127:
      mdadm --stop /dev/md127
      mdadm: stopped /dev/md127

      Ran the following commands
      mdadm --assemble /dev/md127 /dev/sdc /dev/sdd /dev/sde
      mdadm: /dev/md127 assembled from 3 drives - not enough to start the array while not clean - consider --force.

      Not sure I should try --force.
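      If I do, the forced variant of Sc0rp's earlier command, with my drive letters, would presumably be:

      mdadm --assemble --force /dev/md127 /dev/sdc /dev/sdd /dev/sde

      As I understand it, --force tells mdadm to accept the mismatched event counts left by the unclean shutdown and start the degraded array anyway.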
      Thanks
      Intel Core i7-5820K Haswell 3.3 GHz - ASRock X99 Extreme4 - 8GB RAM - 4x4TB WD RED in RAID5 - 2TB Seagate 7200
      OMV Release: 1.9 Kralizec
      See previous 2 posts.

      mdadm --detail /dev/md127
      /dev/md127:
      Version : 1.2
      Creation Time : Thu Jun 9 20:20:07 2016
      Raid Level : raid5
      Used Dev Size : -1
      Raid Devices : 4
      Total Devices : 3
      Persistence : Superblock is persistent
      Update Time : Tue Nov 7 18:34:04 2017
      State : active, degraded, Not Started
      Active Devices : 3
      Working Devices : 3
      Failed Devices : 0
      Spare Devices : 0
      Layout : left-symmetric
      Chunk Size : 512K
      Name : OMV:Main
      UUID : bb43bcaf:acfbd176:41132760:a1f0006c
      Events : 80082
      Number Major Minor RaidDevice State
      0 8 32 0 active sync /dev/sdc
      1 8 48 1 active sync /dev/sdd
      2 8 64 2 active sync /dev/sde
      3 0 0 3 removed

      Now what?
      Thanks,
      Intel Core i7-5820K Haswell 3.3 GHz - ASRock X99 Extreme4 - 8GB RAM - 4x4TB WD RED in RAID5 - 2TB Seagate 7200
      OMV Release: 1.9 Kralizec
      OK, I did the --force and the array now shows up in the web interface.
      I mounted the RAID and can see all my files.
      I added the drive back to the RAID and it is now recovering.
      Thanks for the help!
      Intel Core i7-5820K Haswell 3.3 GHz - ASRock X99 Extreme4 - 8GB RAM - 4x4TB WD RED in RAID5 - 2TB Seagate 7200
      OMV Release: 1.9 Kralizec