It was RAID5 with 4 disks (3 TB, 3 TB, 1 TB, 1 TB), but it turned out that 3 of them were broken. Since RAID5 only survives a single failed disk, there was nothing mdadm could do with three bad members.
I sent the disks to a RAID recovery company; they repaired them and I was able to get my data back.
Thanks for your time.
I have a RAID array which is inactive and I can't start it.
The story: I noticed that some files were missing from my OMV NAS, and then I heard a strange noise (the HDD's looping click) coming from one of the disks.
We had some power losses a few days ago, and that is probably when the disk failed.
I bought a new 3 TB WD disk to replace it, but I cannot do the replacement through the UI, because the inactive array does not show up there.
root@omv:~# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10]
md0 : inactive sdk[2](S) sdf[0](S)
3906764976 blocks super 1.2
md1 : active raid5 sdj[2] sdl[3] sdg[1] sdc[0]
1464763392 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]
bitmap: 0/4 pages [0KB], 65536KB chunk
md2 : active raid5 sdh[4] sdd[1] sde[2] sdb[0]
11720658432 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]
bitmap: 0/30 pages [0KB], 65536KB chunk
unused devices: <none>
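Since md0 only sees two members, both marked as spares (S), the next step is to read the mdadm superblocks on the surviving disks to see what the array originally looked like. A minimal sketch, using the device names from the output above:

root@omv:~# mdadm --examine /dev/sdf /dev/sdk
# For each member this prints the Raid Level, Raid Devices (the
# original member count), the Device Role, and the event counter,
# which together show how many disks are missing and whether the
# survivors agree on the array state.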
root@omv:~# blkid
/dev/sda1: UUID="2c28f16c-814a-4140-8bc9-5806ba12972f" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="1d14ebc7-01"
/dev/sda5: UUID="67a267ad-7fa2-435b-9ccf-7019200c73c6" TYPE="swap" PARTUUID="1d14ebc7-05"
/dev/sdb: UUID="51ba1bbb-8efc-31e5-ec43-d7e941d7c49d" UUID_SUB="3ea315ed-7249-f784-030a-b76f4b5d4c3d" LABEL="omv:raid2" TYPE="linux_raid_member"
/dev/sdc: UUID="b2ccf659-68a2-869a-5397-135e8bfa8312" UUID_SUB="ac11443f-8ff7-b4ae-6533-acf322ffe675" LABEL="omv:vault" TYPE="linux_raid_member"
/dev/sdd: UUID="51ba1bbb-8efc-31e5-ec43-d7e941d7c49d" UUID_SUB="6828b078-45ca-77f8-399b-2c4b5cd03d5d" LABEL="omv:raid2" TYPE="linux_raid_member"
/dev/sde: UUID="51ba1bbb-8efc-31e5-ec43-d7e941d7c49d" UUID_SUB="e380fe06-4d1a-40e9-ea71-5555ab0334a8" LABEL="omv:raid2" TYPE="linux_raid_member"
/dev/sdf: UUID="febf00b0-e4c9-c38a-5ce7-80dbc8f992e6" UUID_SUB="ef06ed62-c84f-ec1d-4adc-0d2dc741627d" LABEL="omv:box" TYPE="linux_raid_member"
/dev/md2: LABEL="truck" UUID="26424451-0a6e-4095-b72d-0e0b795cfc57" BLOCK_SIZE="4096" TYPE="ext4"
/dev/md1: LABEL="vault" UUID="cbc748ff-62d7-4afe-87e7-728b064ed051" BLOCK_SIZE="4096" TYPE="ext4"
/dev/sdk: UUID="febf00b0-e4c9-c38a-5ce7-80dbc8f992e6" UUID_SUB="4895ed31-aa5e-2b13-cd44-a212b5da2c08" LABEL="omv:box" TYPE="linux_raid_member"
/dev/sdg: UUID="b2ccf659-68a2-869a-5397-135e8bfa8312" UUID_SUB="b40e2d6c-ed15-c3e7-b9b0-1936f81658fd" LABEL="omv:vault" TYPE="linux_raid_member"
/dev/sdj: UUID="b2ccf659-68a2-869a-5397-135e8bfa8312" UUID_SUB="06b389be-8206-5039-9c3a-490378a4830d" LABEL="omv:vault" TYPE="linux_raid_member"
/dev/sdh: UUID="51ba1bbb-8efc-31e5-ec43-d7e941d7c49d" UUID_SUB="2572f686-d742-75de-e825-8c4dfd8bc646" LABEL="omv:raid2" TYPE="linux_raid_member"
/dev/sdl: UUID="b2ccf659-68a2-869a-5397-135e8bfa8312" UUID_SUB="f3a7ca92-4cc0-0c7d-b481-fca84de07edb" LABEL="omv:vault" TYPE="linux_raid_member"
root@omv:~# fdisk -l | grep "Disk "
Disk /dev/sda: 111.79 GiB, 120034123776 bytes, 234441648 sectors
Disk model: SanDisk SDSSDA12
Disk identifier: 0x1d14ebc7
Disk /dev/sdb: 3.64 TiB, 4000787030016 bytes, 7814037168 sectors
Disk model: WDC WD40EFAX-68J
Disk /dev/sdc: 465.76 GiB, 500107862016 bytes, 976773168 sectors
Disk model: TOSHIBA DT01ACA0
Disk /dev/sdd: 3.64 TiB, 4000787030016 bytes, 7814037168 sectors
Disk model: WDC WD40EFAX-68J
Disk /dev/sde: 3.64 TiB, 4000787030016 bytes, 7814037168 sectors
Disk model: WDC WD40EFAX-68J
Disk /dev/sdf: 2.73 TiB, 3000592982016 bytes, 5860533168 sectors
Disk model: WDC WD30PURX-64P
Disk /dev/md2: 10.92 TiB, 12001954234368 bytes, 23441316864 sectors
Disk /dev/md1: 1.36 TiB, 1499917713408 bytes, 2929526784 sectors
Disk /dev/sdi: 2.73 TiB, 3000592982016 bytes, 5860533168 sectors
Disk model: WDC WD30EFAX-68J
Disk /dev/sdk: 931.51 GiB, 1000204886016 bytes, 1953525168 sectors
Disk model: WDC WD10EURX-73F
Disk /dev/sdg: 465.76 GiB, 500107862016 bytes, 976773168 sectors
Disk model: WDC WD5000AAKX-0
Disk /dev/sdj: 465.76 GiB, 500107862016 bytes, 976773168 sectors
Disk model: TOSHIBA DT01ACA0
Disk /dev/sdh: 3.64 TiB, 4000787030016 bytes, 7814037168 sectors
Disk model: WDC WD40EFAX-68J
Disk /dev/sdl: 465.76 GiB, 500107862016 bytes, 976773168 sectors
Disk model: WDC WD5000AAKX-0
root@omv:~# cat /etc/mdadm/mdadm.conf
# This file is auto-generated by openmediavault (https://www.openmediavault.org)
# WARNING: Do not edit this file, your changes will get lost.
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#
# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
# Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
# To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
# used if no RAID devices are configured.
DEVICE partitions
# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes
# automatically tag new arrays as belonging to the local system
HOMEHOST <system>
# definitions of existing MD arrays
ARRAY /dev/md2 metadata=1.2 name=omv:raid2 UUID=51ba1bbb:8efc31e5:ec43d7e9:41d7c49d
ARRAY /dev/md1 metadata=1.2 name=omv:vault UUID=b2ccf659:68a2869a:5397135e:8bfa8312
ARRAY /dev/md0 metadata=1.2 name=omv:box UUID=febf00b0:e4c9c38a:5ce780db:c8f992e6
MAILADDR root
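Since mdadm.conf already defines md0 by UUID, reassembly can also be retried straight from the config. A sketch (the half-assembled array has to be stopped first):

root@omv:~# mdadm --stop /dev/md0
root@omv:~# mdadm --assemble /dev/md0
# With no member devices listed, mdadm takes the identity of md0
# from /etc/mdadm/mdadm.conf and scans the devices in /proc/partitions
# (the "DEVICE partitions" line) for matching superblocks.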
root@omv:~# mdadm --detail --scan --verbose
ARRAY /dev/md2 level=raid5 num-devices=4 metadata=1.2 name=omv:raid2 UUID=51ba1bbb:8efc31e5:ec43d7e9:41d7c49d
devices=/dev/sdb,/dev/sdd,/dev/sde,/dev/sdh
ARRAY /dev/md1 level=raid5 num-devices=4 metadata=1.2 name=omv:vault UUID=b2ccf659:68a2869a:5397135e:8bfa8312
devices=/dev/sdc,/dev/sdg,/dev/sdj,/dev/sdl
INACTIVE-ARRAY /dev/md0 num-devices=2 metadata=1.2 name=omv:box UUID=febf00b0:e4c9c38a:5ce780db:c8f992e6
devices=/dev/sdf,/dev/sdk
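The scan confirms that md0 is inactive and can only find two of its devices. The usual recovery attempt, sketched below with the device names from the scan, is a forced assemble from the surviving members; note that if md0 really was a 4-disk RAID5, this cannot succeed with only 2 members present, because RAID5 needs at least n-1 disks (here 3 of 4) to start:

root@omv:~# mdadm --stop /dev/md0
root@omv:~# mdadm --assemble --force /dev/md0 /dev/sdf /dev/sdk
# --force tells mdadm to assemble even if the members' event
# counters disagree; it still refuses when too few members exist.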
I bought a new 3 TB disk to replace the broken one; the new disk is /dev/sdi.
I tried to add the new HDD over SSH, but I couldn't (a sketch of the attempt is below).
I am 99% sure that the array is RAID5.
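Roughly what the add attempt looked like (a reconstruction, with /dev/sdi as the new disk):

root@omv:~# mdadm /dev/md0 --add /dev/sdi
# This fails: --add can only hot-add a disk to a running array, and
# md0 is inactive, so the replacement cannot go in until the array
# is assembled with enough of its original members.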
Is there any way to fix it? (I can buy more disks if needed.)
Thanks