Stephen from freedatarecovery.us replied to one of my threads on another forum and was able to restore the array with no data loss. I definitely recommend his service if anyone is ever in a similar situation and everything else has failed.
-
Code
root@nas:~# mdadm -A /dev/md127 -f --update=summaries /dev/sd[abcdef]
mdadm: --update=summaries not understood for 1.x metadata
root@nas:~# cat /etc/mdadm.conf
DEVICE /dev/sda
DEVICE /dev/sdb
DEVICE /dev/sdc
DEVICE /dev/sdd
DEVICE /dev/sde
ARRAY /dev/md/Storage metadata=1.2 UUID=b4812ae8:804e182d:f813a5e1:e9ee6da4 name=nas:Storage
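For anyone who finds this thread later: --update=summaries only applies to 0.90 metadata, hence the error above. As I understand it, newer mdadm (3.3+) has a revert-reshape update option meant for backing out a reshape that hasn't made any progress. I never verified this on my own array, so treat it as a sketch only and image the drives first:
Code
# Sketch only -- revert a reshape that has made no progress (mdadm >= 3.3).
# Check the reshape position with "mdadm --examine" before trying this;
# --invalid-backup may also be needed when no backup file exists.
mdadm --stop /dev/md127
mdadm --assemble /dev/md127 --update=revert-reshape --invalid-backup /dev/sd[abcde]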
Thanks again.
-
Hi WastlJ, thanks for taking a look.
I'd forgotten that while trying to debug last night I had removed the new hard drive I was attempting to grow with. Here are the above commands run with the new drive both disconnected and connected.
Disconnected:
Code
root@nas:~# mdadm --assemble --verbose --invalid-backup --force /dev/md127 /dev/sd[abcde]
mdadm: looking for devices for /dev/md127
mdadm: /dev/sda is identified as a member of /dev/md127, slot 0.
mdadm: /dev/sdb is identified as a member of /dev/md127, slot 1.
mdadm: /dev/sdc is identified as a member of /dev/md127, slot 3.
mdadm: /dev/sdd is identified as a member of /dev/md127, slot 4.
mdadm: /dev/sde is identified as a member of /dev/md127, slot 2.
mdadm: :/dev/md127 has an active reshape - checking if critical section needs to be restored
mdadm: Failed to find backup of critical section
mdadm: continuing without restoring backup
mdadm: added /dev/sdb to /dev/md127 as 1
mdadm: added /dev/sde to /dev/md127 as 2
mdadm: added /dev/sdc to /dev/md127 as 3
mdadm: added /dev/sdd to /dev/md127 as 4
mdadm: added /dev/sda to /dev/md127 as 0
mdadm: failed to RUN_ARRAY /dev/md127: Invalid argument
root@nas:~# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md127 : inactive sda[0] sdd[4] sdc[3] sde[5] sdb[1]
      14651327800 blocks super 1.2

unused devices: <none>
root@nas:~# mdadm -D /dev/md127
/dev/md127:
        Version : 1.2
  Creation Time : Mon Aug  4 02:03:59 2014
     Raid Level : raid6
  Used Dev Size : -1
   Raid Devices : 6
  Total Devices : 5
    Persistence : Superblock is persistent

    Update Time : Mon Jan 18 00:27:39 2016
          State : active, degraded, Not Started
 Active Devices : 5
Working Devices : 5
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

  Delta Devices : 1, (5->6)

           Name : nas:Storage  (local to host nas)
           UUID : b4812ae8:804e182d:f813a5e1:e9ee6da4
         Events : 551

    Number   Major   Minor   RaidDevice State
       0       8        0        0      active sync   /dev/sda
       1       8       16        1      active sync   /dev/sdb
       5       8       64        2      active sync   /dev/sde
       3       8       32        3      active sync   /dev/sdc
       4       8       48        4      active sync   /dev/sdd
      10       0        0       10      removed
Connected (with the new/grow drive being /dev/sdc):
Code
root@nas:~# mdadm --assemble --verbose --invalid-backup --force /dev/md127 /dev/sd[abcdef]
mdadm: looking for devices for /dev/md127
mdadm: /dev/sda is identified as a member of /dev/md127, slot 0.
mdadm: /dev/sdb is identified as a member of /dev/md127, slot 1.
mdadm: /dev/sdc is identified as a member of /dev/md127, slot -1.
mdadm: /dev/sdd is identified as a member of /dev/md127, slot 3.
mdadm: /dev/sde is identified as a member of /dev/md127, slot 4.
mdadm: /dev/sdf is identified as a member of /dev/md127, slot 2.
mdadm: :/dev/md127 has an active reshape - checking if critical section needs to be restored
mdadm: No backup metadata on device-6
mdadm: Failed to find backup of critical section
mdadm: continuing without restoring backup
mdadm: added /dev/sdb to /dev/md127 as 1
mdadm: added /dev/sdf to /dev/md127 as 2
mdadm: added /dev/sdd to /dev/md127 as 3
mdadm: added /dev/sde to /dev/md127 as 4
mdadm: no uptodate device for slot 10 of /dev/md127
mdadm: added /dev/sdc to /dev/md127 as -1
mdadm: added /dev/sda to /dev/md127 as 0
mdadm: failed to RUN_ARRAY /dev/md127: Invalid argument
root@nas:~# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md127 : inactive sde[4](S) sdc[6](S) sda[0](S)
      8790796680 blocks super 1.2

unused devices: <none>
root@nas:~# mdadm -D /dev/md127
/dev/md127:
        Version : 1.2
     Raid Level : raid0
  Total Devices : 3
    Persistence : Superblock is persistent

          State : inactive

  Delta Devices : 1, (-1->0)
      New Level : raid6
     New Layout : left-symmetric
  New Chunksize : 512K

           Name : nas:Storage  (local to host nas)
           UUID : b4812ae8:804e182d:f813a5e1:e9ee6da4
         Events : 551

    Number   Major   Minor   RaidDevice

       -       8        0        -        /dev/sda
       -       8       32        -        /dev/sdc
       -       8       64        -        /dev/sde
-
This isn't directly related to OMV, but I was really hoping someone here could help me out. I originally created this array with OMV 0.5, but I needed to upgrade to the latest Debian because of issues with my network adapter (which had known problems with Debian 6). After installing Debian 8 and reinitialising the array, everything was going smoothly until I tried to grow it. I'd be very grateful if anyone has any ideas.
My RAID 6 array consisted of 5x 3TB drives, and I tried to grow it with another 3TB drive; however, after 12 hours the reshape didn't appear to be making any progress: reshape = 0.0% (0/2930265088). Unfortunately, at some point I suffered a power loss, and after powering back on the array wouldn't start. From googling I've read that the first moments of a reshape are critical, but as the reshape seemed to be stuck I'm hoping there's still a way to recover from this. I've also read that you can define a backup file; unfortunately, the guide I was following for the reshape didn't mention one, so I don't have it. All 5 drives are reporting a clean state and correct checksums.
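For reference, and as I understand it now, the grow should have been run with a backup file, roughly like this (the backup path is only an example; this is not what I actually ran):
Code
# Add the new drive as a spare, then reshape from 5 to 6 devices,
# saving the critical section so an interrupted reshape can be resumed
mdadm --add /dev/md127 /dev/sdf
mdadm --grow /dev/md127 --raid-devices=6 --backup-file=/root/md127-reshape.backup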
At the moment I don't care about trying to grow the array; I'd just like to recover the data if possible.
Here's the output of mdadm --examine for each of the old drives: http://pastebin.com/raw/DRQdjWef (I had to use pastebin because the post was too long).
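(That pastebin is the per-drive output of something like the following, assuming the five old drives still enumerate as sda-sde:)
Code
mdadm --examine /dev/sd[abcde]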
Output of mdadm -D /dev/md127:
Code
/dev/md127:
        Version : 1.2
     Raid Level : raid0
  Total Devices : 5
    Persistence : Superblock is persistent

          State : inactive

  Delta Devices : 1, (-1->0)
      New Level : raid6
     New Layout : left-symmetric
  New Chunksize : 512K

           Name : nas:Storage  (local to host nas)
           UUID : b4812ae8:804e182d:f813a5e1:e9ee6da4
         Events : 551

    Number   Major   Minor   RaidDevice

       -       8        0        -        /dev/sda
       -       8       16        -        /dev/sdb
       -       8       32        -        /dev/sdc
       -       8       48        -        /dev/sdd
       -       8       64        -        /dev/sde
When trying to force assemble:
Code
root@nas:~# mdadm --assemble --force /dev/md127 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde
mdadm: Failed to restore critical section for reshape, sorry.
       Possibly you needed to specify the --backup-file
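For completeness: if a backup file had existed, I believe the restore would look something like this (the path is just a placeholder, as I don't actually have one):
Code
mdadm --assemble --force --backup-file=/root/md127-reshape.backup /dev/md127 /dev/sd[abcde]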
And with "--invalid-backup":
Code
root@nas:~# mdadm --assemble --force /dev/md127 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde --invalid-backup
mdadm: failed to RUN_ARRAY /dev/md127: Invalid argument
root@nas:~# mdadm -D /dev/md127
/dev/md127:
        Version : 1.2
  Creation Time : Mon Aug  4 02:03:59 2014
     Raid Level : raid6
  Used Dev Size : -1
   Raid Devices : 6
  Total Devices : 5
    Persistence : Superblock is persistent

    Update Time : Mon Jan 18 00:27:39 2016
          State : active, degraded, Not Started
 Active Devices : 5
Working Devices : 5
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

  Delta Devices : 1, (5->6)

           Name : nas:Storage  (local to host nas)
           UUID : b4812ae8:804e182d:f813a5e1:e9ee6da4
         Events : 551

    Number   Major   Minor   RaidDevice State
       0       8        0        0      active sync   /dev/sda
       1       8       16        1      active sync   /dev/sdb
       5       8       64        2      active sync   /dev/sde
       3       8       32        3      active sync   /dev/sdc
       4       8       48        4      active sync   /dev/sdd
      10       0        0       10      removed
And finally /proc/mdstat, blkid & fdisk -l:
Code
root@nas:~# blkid
/dev/sdb: UUID="b4812ae8-804e-182d-f813-a5e1e9ee6da4" UUID_SUB="f38040a7-8e6d-340e-ac38-6daa80f03141" LABEL="nas:Storage" TYPE="linux_raid_member"
/dev/sda: UUID="b4812ae8-804e-182d-f813-a5e1e9ee6da4" UUID_SUB="f8863d07-23d5-d4af-dd1d-2f40e366d96e" LABEL="nas:Storage" TYPE="linux_raid_member"
/dev/sdf1: UUID="a252af3e-145b-4b37-b3ec-1dd30e951496" TYPE="ext4" PARTUUID="000c2b42-01"
/dev/sdf5: UUID="82fd5e40-67ad-47e7-a056-b2848217ebf5" TYPE="swap" PARTUUID="000c2b42-05"
/dev/sdc: UUID="b4812ae8-804e-182d-f813-a5e1e9ee6da4" UUID_SUB="d829a9b5-ec6f-0b86-06d4-06bee508fc22" LABEL="nas:Storage" TYPE="linux_raid_member"
/dev/sde: UUID="b4812ae8-804e-182d-f813-a5e1e9ee6da4" UUID_SUB="e89405bf-0fa4-5530-4d4b-d5663af77917" LABEL="nas:Storage" TYPE="linux_raid_member"
/dev/sdd: UUID="b4812ae8-804e-182d-f813-a5e1e9ee6da4" UUID_SUB="0fbc03cd-4dfb-176e-e547-4a86d7752374" LABEL="nas:Storage" TYPE="linux_raid_member"
root@nas:~# fdisk -l
Disk /dev/sdb: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

Disk /dev/sda: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

Disk /dev/sdf: 37.3 GiB, 40020664320 bytes, 78165360 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: dos
Disk identifier: 0x000c2b42

Device     Boot    Start      End  Sectors  Size Id Type
/dev/sdf1  *        2048 74897407 74895360 35.7G 83 Linux
/dev/sdf2       74899454 78163967  3264514  1.6G  5 Extended
/dev/sdf5       74899456 78163967  3264512  1.6G 82 Linux swap / Solaris

Partition 3 does not start on physical sector boundary.

Disk /dev/sdc: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

Disk /dev/sde: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

Disk /dev/sdd: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

root@nas:~# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md127 : inactive sda[0] sdd[4] sdc[3] sde[5] sdb[1]
      14651327800 blocks super 1.2

unused devices: <none>
Thanks very much!