Hello everyone, first of all I'm sorry in advance because I'm not very good at English and have a little trouble expressing myself.
I'm contacting you today because I just lost my RAID 5 array, and I would like to recover my data (hoping that this is possible, because I really need that data).
Here is the process I followed:
1.
$ cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : inactive sde[3](S)
1953383512 blocks super 1.2
unused devices: <none>
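So md0 shows as inactive with only sde in it. If it helps, I can also post the output of mdadm --examine for each disk (I assume the other three drives are still named sdb, sdc and sdd, but I'm not even sure they are detected):
$ mdadm --examine /dev/sdb /dev/sdc /dev/sdd /dev/sde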
2.
$ blkid
/dev/sda1: UUID="09bd2905-ba6c-4c43-a411-c747cb0fbbc1" TYPE="ext4" PARTUUID="735c8a35-01"
/dev/sda5: UUID="84bd6ebb-6353-4c0a-aeea-474b53d39f87" TYPE="swap" PARTUUID="735c8a35-05"
/dev/sde: UUID="07da8bcf-ca03-6b5c-a095-4d0de3eaeec5" UUID_SUB="b340e1a3-89d8-bcd4-d29d-4fa94d0744c1" LABEL="NasKaz:0" TYPE="linux_raid_member"
3.
$ fdisk -l | grep "Disk "
Disk /dev/sda: 57,3 GiB, 61492838400 bytes, 120103200 sectors
Disk identifier: 0x735c8a35
Disk /dev/sde: 1,8 TiB, 2000398934016 bytes, 3907029168 sectors
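I notice that fdisk only lists sda and sde; the other three disks do not appear at all. If you need it, I can also post the output of these commands (I think they show what the kernel sees, but I'm not sure which ones are most useful):
$ lsblk -o NAME,SIZE,TYPE,MOUNTPOINT
$ dmesg | grep -i -E 'sd[b-e]|ata'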
4.
$ cat /etc/mdadm/mdadm.conf
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#
# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
# Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
# To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
# used if no RAID devices are configured.
DEVICE partitions
# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes
# automatically tag new arrays as belonging to the local system
HOMEHOST <system>
# definitions of existing MD arrays
ARRAY /dev/md0 metadata=1.2 name=NasKaz:0 UUID=07da8bcf:ca036b5c:a0954d0d:e3eaeec5
5.
$ mdadm --detail --scan --verbose
INACTIVE-ARRAY /dev/md0 num-devices=1 metadata=1.2 name=NasKaz:0 UUID=07da8bcf:ca036b5c:a0954d0d:e3eaeec5
devices=/dev/sde
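Would the next step be something like the two commands below? This is only my guess from reading the mdadm man page, please correct me before I try anything, because I'm afraid to make things worse.
$ mdadm --stop /dev/md0
$ mdadm --assemble --scan --verbose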
6.
My configuration:
I have 5 hard drives: 1 for the system (54 GB), and 4 for the RAID, each with a capacity of 1.82 TB.
They are configured in RAID 5 with an ext4 file system.
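(If I understand RAID 5 correctly, with 4 disks of 1.82 TB the usable capacity should be about (4 - 1) × 1.82 TB ≈ 5.46 TB.)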
For more information:
Last update (with apt-get and the web UI control panel): 5 August 2019
OMV version: 4.1.14-1
From the web UI control panel:
In the Storage menu:
When I go to DISKS I see:
a) /dev/sdb: 3.86 GB (wrong size)
b) /dev/sdc: 3.86 GB (wrong size)
c) /dev/sdd: 3.86 GB (wrong size)
d) /dev/sde: 1.82 TB (correct size)
When I go to S.M.A.R.T. > DEVICES I see (see also the smartctl commands after this list):
a) /dev/sdb: 3.86 GB, temperature n/a, status Unknown
b) /dev/sdc: 3.86 GB, temperature n/a, status Unknown
c) /dev/sdd: 3.86 GB, temperature n/a, status Unknown
d) /dev/sde: 1.82 TB, temperature 31°C, status Green
When I go to RAID Management I see:
nothing (before, my RAID was there, named RAIDKAZ)
The Recover option is not available; I can only click on Create, and it shows only 3 disks (sdb, sdc and sdd, because they all report the same wrong capacity of 3.86 GB instead of 1.82 TB).
When I go to FILE SYSTEMS I see:
Device: /dev/disk/by-label/RaidKaz, ext4
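Since the S.M.A.R.T. status is Unknown for sdb, sdc and sdd, I could also try to query them directly from the command line. I think the commands would be something like this (I have only read about smartctl, so I'm not sure these are right):
$ smartctl -i -A /dev/sdb
$ smartctl -i -A /dev/sdc
$ smartctl -i -A /dev/sdd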
7.
As far as I know, I just turned off the computer and turned it back on 4 months later... I don't understand what happened.
But after each reboot I lose another hard drive, so I'm afraid to reboot again...
If you have a solution or an idea, you would save my life, because I really care about this data.
Thank you in advance for any help you can give me.