Hi everyone, I'm writing from Italy, so my English is not very good :')
I built and use a RAID 5 with 6 HDDs. I went on holiday, and when I came back home to download my photos to my server I couldn't see any RAID volume (Volume1 is missing in the File Systems tab), and no volume shows up in the RAID settings tab either.
All of my disks are good.
What can I do to recover all my data and restore the RAID volume?
I can't see my raid volume!
-
- OMV 4.x
- Robbie90
-
-
Assuming it is an mdadm RAID, please answer these questions first: Degraded or missing raid array questions
-
Hi there, many thanks for the help. Here are the answers:
root@vault:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md127 : inactive sde[6](S) sda[0](S) sdf[5](S) sdb[1](S) sdd[3](S) sdc[2](S)
3339768912 blocks super 1.2
unused devices: <none>
root@vault:~# blkid
/dev/sda: UUID="83d1b087-eb39-fbb0-da31-ac3dc315e594" UUID_SUB="f6304adf-c104-e999-285f-dd62ebe465cd" LABEL="vault:Volume" TYPE="linux_raid_member"
/dev/sdb: UUID="83d1b087-eb39-fbb0-da31-ac3dc315e594" UUID_SUB="060b9332-6a87-1acd-73c6-183672200e19" LABEL="vault:Volume" TYPE="linux_raid_member"
/dev/sdd: UUID="83d1b087-eb39-fbb0-da31-ac3dc315e594" UUID_SUB="743ef3a6-8055-bf2b-dceb-b290a64bf25c" LABEL="vault:Volume" TYPE="linux_raid_member"
/dev/sdc: UUID="83d1b087-eb39-fbb0-da31-ac3dc315e594" UUID_SUB="2ddd4215-bb16-cba1-b71f-363cd2aaaf00" LABEL="vault:Volume" TYPE="linux_raid_member"
/dev/sde: UUID="83d1b087-eb39-fbb0-da31-ac3dc315e594" UUID_SUB="3d235ec6-4422-5aa3-795a-78d234c378de" LABEL="vault:Volume" TYPE="linux_raid_member"
/dev/sdf: UUID="83d1b087-eb39-fbb0-da31-ac3dc315e594" UUID_SUB="bf89bf78-2b6b-b495-b3e8-f34121b6edba" LABEL="vault:Volume" TYPE="linux_raid_member"
/dev/sdg1: UUID="606a425b-637a-49a4-b2e6-5d6a3836982a" TYPE="ext4" PARTUUID="fbf07d3b-01"
/dev/sdg5: UUID="107f71c3-3005-44c5-ad28-d6ea953e856a" TYPE="swap" PARTUUID="fbf07d3b-05"
root@vault:~# fdisk -l | grep "Disk"
Disk /dev/sda: 465,8 GiB, 500107862016 bytes, 976773168 sectors
Disk /dev/sdb: 596,2 GiB, 640135028736 bytes, 1250263728 sectors
Disk /dev/sdd: 465,8 GiB, 500107862016 bytes, 976773168 sectors
Disk /dev/sdc: 596,2 GiB, 640135028736 bytes, 1250263728 sectors
Disk /dev/sde: 465,8 GiB, 500107862016 bytes, 976773168 sectors
Disk /dev/sdf: 596,2 GiB, 640135028736 bytes, 1250263728 sectors
Disk /dev/sdg: 14,3 GiB, 15376000000 bytes, 30031250 sectors
Disklabel type: dos
Disk identifier: 0xfbf07d3b
root@vault:~# cat /etc/mdadm/mdadm.conf
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
## by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
# Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
# To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
# used if no RAID devices are configured.
DEVICE partitions

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# definitions of existing MD arrays
ARRAY /dev/md127 metadata=1.2 spares=1 name=vault:Volume UUID=83d1b087:eb39fbb0:da31ac3d:c315e594
root@vault:~# mdadm --detail --scan --verbose
INACTIVE-ARRAY /dev/md127 num-devices=6 metadata=1.2 name=vault:Volume UUID=83d1b087:eb39fbb0:da31ac3d:c315e594
   devices=/dev/sda,/dev/sdb,/dev/sdc,/dev/sdd,/dev/sde,/dev/sdf
-
These are the disks I am using for the RAID, and the RAID tab shows no entries.
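The key symptom is in the /proc/mdstat output above: the array is marked inactive and every member carries an (S), i.e. mdadm parked them all as spares. A quick sketch of spotting that in a script, using the line from above as a canned sample (on the live system you would read /proc/mdstat directly):

```shell
#!/bin/sh
# Canned sample of the md127 line from the /proc/mdstat output above.
line='md127 : inactive sde[6](S) sda[0](S) sdf[5](S) sdb[1](S) sdd[3](S) sdc[2](S)'

# "inactive" plus (S) on every member means the array was not assembled.
case $line in
    *inactive*) echo "md127 inactive: members parked as (S)pares" ;;
    *)          echo "md127 looks active" ;;
esac
```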
-
Ok, once again an inactive array. Maybe this thread is helpful: HELP PLEASE - After Upgrade from 3.099 to 4x system crashed. RAID gone
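The usual recovery for an inactive array (what that thread walks through) is to stop it and force-reassemble it from its members. A minimal sketch only, assuming /dev/md127 and members sda..sdf as in the outputs above; DRY_RUN=1 (the default here) just prints the commands instead of running them, and you should understand each step before running it for real:

```shell
#!/bin/sh
# Sketch: stop the inactive array, then force-reassemble it.
# Assumes md127 with members /dev/sda../dev/sdf, as in the thread's outputs.
# DRY_RUN=1 (default) prints the commands instead of executing them.
DRY_RUN=${DRY_RUN:-1}
run() {
    if [ "$DRY_RUN" = "1" ]; then
        echo "+ $*"
    else
        "$@"
    fi
}

run mdadm --stop /dev/md127
run mdadm --assemble --force --verbose /dev/md127 \
    /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf
```

--force tells mdadm to assemble even when the member event counts disagree slightly, which is why it should only be used once you know the disks themselves are healthy.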
-
Very helpful. I restarted my array and now I have a degraded RAID. What happened? The system was only shut down for a while.
-
I'm just about to sign off, but what's the output now of cat /proc/mdstat and mdadm --detail --scan --verbose? Also, looking at the image of your drives, they are different sizes; that's not a good idea.
-
md127 : active raid5 sde[6] sda[0] sdc[2] sdf[7] sdb[1] sdd[3]
2441277440 blocks super 1.2 level 5, 512k chunk, algorithm 2 [6/6] [UUUUUU]
      bitmap: 0/4 pages [0KB], 65536KB chunk
ARRAY /dev/md127 level=raid5 num-devices=6 metadata=1.2 name=vault:Volume UUID=83d1b087:eb39fbb0:da31ac3d:c315e594
   devices=/dev/sda,/dev/sdb,/dev/sdc,/dev/sdd,/dev/sde,/dev/sdf
Those are the results of the commands. I know it isn't a good idea to use different disks, but I made this "server" just to use every spare disk that I already had :')
Now I see that I have one disk marked "red" in the S.M.A.R.T. tab. I formatted that disk and repaired the RAID, but I will have to replace it.
-
Now I see that I have one disk marked "red" in the S.M.A.R.T. tab.
Yes, by the look of the output it's /dev/sdf.
The output of mdadm -D /dev/md127 should give you detailed information on the array.
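If SMART really is flagging /dev/sdf, the usual mdadm replacement sequence is fail, remove, swap the hardware, add back. A sketch only, assuming the same device names as above (the new disk may well get a different node name); DRY_RUN=1 just prints the commands:

```shell
#!/bin/sh
# Sketch: replace a failing member of md127 (sdf here, per the SMART
# warning above). DRY_RUN=1 (default) prints instead of executing.
DRY_RUN=${DRY_RUN:-1}
run() {
    if [ "$DRY_RUN" = "1" ]; then
        echo "+ $*"
    else
        "$@"
    fi
}

run mdadm /dev/md127 --fail /dev/sdf
run mdadm /dev/md127 --remove /dev/sdf
# ...power down, swap in the replacement disk (same node name assumed)...
run mdadm /dev/md127 --add /dev/sdf
# then watch the rebuild progress in /proc/mdstat
```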
-
Yes, by the look of the output it's /dev/sdf.
How can you say that?