There is something wrong with /dev/sdb. You could try wiping it and then retrying the previous commands to rebuild the array again.
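For anyone following along, the wipe-and-re-add the posts above refer to usually looks like the sequence below. This is only a sketch assuming the suspect member is /dev/sdb in array /dev/md0 as in this thread; the commands are printed rather than executed, because running them wipes the RAID metadata on that disk.

```shell
# Sketch only -- these commands destroy the RAID metadata on $DISK.
# Review against your own device names before running anything.
DISK=/dev/sdb     # the suspect member in this thread (adjust to your setup)
ARRAY=/dev/md0

# Print the sequence instead of executing it, so nothing runs by accident:
steps=$(cat <<EOF
mdadm $ARRAY --fail $DISK --remove $DISK    # drop the member from the array
mdadm --zero-superblock $DISK               # wipe the stale RAID superblock
mdadm $ARRAY --add $DISK                    # re-add; the rebuild starts by itself
cat /proc/mdstat                            # watch recovery progress
EOF
)
printf '%s\n' "$steps"
```

The `--zero-superblock` step is what makes the re-add clean: without it, mdadm may refuse the disk or pick up stale metadata from the old array membership.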
OK, this will take a while. I'll report when it's done.
OK, I'm done. Here are the results.
cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdb[5] sda[0] sde[3] sdd[2] sdc[1]
11720540160 blocks super 1.2 level 5, 512k chunk, algorithm 2 [5/5] [UUUUU]
unused devices: <none>
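For readers learning to read this output: the bracketed fields on the md0 line are the health summary. `[5/5]` means five of five members are active, and each `U` in `[UUUUU]` is an up member (a failed slot would show as `_`). A minimal sketch of checking that in shell, using the status string from the output above:

```shell
# [5/5] = five of five members active; each U in [UUUUU] is an up member,
# an underscore would mark a failed slot.
status='[5/5] [UUUUU]'   # copied from the mdstat output above
case "$status" in
  *_*) health=degraded ;;
  *)   health=healthy ;;
esac
echo "$health"
```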
blkid
/dev/sdc: UUID="32d3f50c-d25c-4482-7835-bf0b90cd4649" UUID_SUB="2aff9e51-39ad-3bf5-9d39-417d5400e7e6" LABEL="NAS-OMV:meinRAID5" TYPE="linux_raid_member"
/dev/sdd: UUID="32d3f50c-d25c-4482-7835-bf0b90cd4649" UUID_SUB="a3aa903a-b1ed-b4b1-eb90-69c49e3106f4" LABEL="NAS-OMV:meinRAID5" TYPE="linux_raid_member"
/dev/sda: UUID="32d3f50c-d25c-4482-7835-bf0b90cd4649" UUID_SUB="58952015-7d8d-9eba-5a15-18e897b8003b" LABEL="NAS-OMV:meinRAID5" TYPE="linux_raid_member"
/dev/md0: LABEL="Daten" UUID="1fed2e7a-967b-4473-877f-11a947f88b38" TYPE="ext4"
/dev/sde: UUID="32d3f50c-d25c-4482-7835-bf0b90cd4649" UUID_SUB="ecc83b31-85a5-9802-c0f5-b3482e7c137d" LABEL="NAS-OMV:meinRAID5" TYPE="linux_raid_member"
/dev/sdf1: UUID="e18308d2-eea9-4a61-aba0-b86b84025eab" TYPE="ext4"
/dev/sdf5: UUID="473d9cc6-c276-4122-8fcd-b12ad6475a14" TYPE="swap"
/dev/sdb: UUID="32d3f50c-d25c-4482-7835-bf0b90cd4649" UUID_SUB="efd229da-9b93-aa17-902b-4b2ca712f73b" LABEL="NAS-OMV:meinRAID5" TYPE="linux_raid_member"
fdisk -l
Disk /dev/sdc: 3000.6 GB, 3000592982016 bytes
255 heads, 63 sectors/track, 364801 cylinders, total 5860533168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x00000000
Disk /dev/sdc doesn't contain a valid partition table
Disk /dev/sdd: 3000.6 GB, 3000592982016 bytes
255 heads, 63 sectors/track, 364801 cylinders, total 5860533168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x00000000
Disk /dev/sdd doesn't contain a valid partition table
Disk /dev/sdb: 3000.6 GB, 3000592982016 bytes
255 heads, 63 sectors/track, 364801 cylinders, total 5860533168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x00000000
Disk /dev/sdb doesn't contain a valid partition table
Disk /dev/sdf: 32.0 GB, 32017047552 bytes
255 heads, 63 sectors/track, 3892 cylinders, total 62533296 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00079103
Device Boot Start End Blocks Id System
/dev/sdf1 * 2048 59895807 29946880 83 Linux
/dev/sdf2 59897854 62531583 1316865 5 Extended
/dev/sdf5 59897856 62531583 1316864 82 Linux swap / Solaris
Disk /dev/sde: 3000.6 GB, 3000592982016 bytes
255 heads, 63 sectors/track, 364801 cylinders, total 5860533168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x00000000
Disk /dev/sde doesn't contain a valid partition table
Disk /dev/sda: 3000.6 GB, 3000592982016 bytes
255 heads, 63 sectors/track, 364801 cylinders, total 5860533168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x00000000
Disk /dev/sda doesn't contain a valid partition table
Disk /dev/md0: 12001.8 GB, 12001833123840 bytes
2 heads, 4 sectors/track, -1364832256 cylinders, total 23441080320 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 524288 bytes / 2097152 bytes
Disk identifier: 0x00000000
Disk /dev/md0 doesn't contain a valid partition table
cat /etc/mdadm/mdadm.conf
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#
# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
# Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
# To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
# used if no RAID devices are configured.
DEVICE partitions
# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes
# automatically tag new arrays as belonging to the local system
HOMEHOST <system>
# definitions of existing MD arrays
ARRAY /dev/md0 metadata=1.2 spares=1 name=NAS-OMV:meinRAID5 UUID=32d3f50c:d25c4482:7835bf0b:90cd4649
# instruct the monitoring daemon where to send mail alerts
MAILADDR ********
MAILFROM root
mdadm --detail --scan --verbose
ARRAY /dev/md0 level=raid5 num-devices=5 metadata=1.2 name=NAS-OMV:meinRAID5 UUID=32d3f50c:d25c4482:7835bf0b:90cd4649
devices=/dev/sda,/dev/sdc,/dev/sdd,/dev/sde,/dev/sdb
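One sanity check worth spelling out: the `ARRAY` line in mdadm.conf and the live `mdadm --detail --scan` output should agree on the array UUID, and here they do. A minimal sketch of that comparison, using the two lines quoted above:

```shell
# Both lines end in UUID=...; extract and compare the two UUIDs.
conf='ARRAY /dev/md0 metadata=1.2 spares=1 name=NAS-OMV:meinRAID5 UUID=32d3f50c:d25c4482:7835bf0b:90cd4649'
scan='ARRAY /dev/md0 level=raid5 num-devices=5 metadata=1.2 name=NAS-OMV:meinRAID5 UUID=32d3f50c:d25c4482:7835bf0b:90cd4649'
u_conf=${conf##*UUID=}   # strip everything up to the UUID= token
u_scan=${scan##*UUID=}
[ "$u_conf" = "$u_scan" ] && echo "UUIDs match"
```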
As far as my knowledge goes, I see no errors.
Looks good to me.
OK, at 8 a.m. we had a power failure. I looked at the RAID and none of the hard drives is missing, but I got this mail from the system. I think my problem is solved.
This is an automatically generated mail message from mdadm running on NAS-OMV
A SparesMissing event had been detected on md device /dev/md0.
Faithfully yours, etc.
P.S. The /proc/mdstat file currently contains the following:
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdb[5] sda[0] sde[3] sdd[2] sdc[1]
11720540160 blocks super 1.2 level 5, 512k chunk, algorithm 2 [5/5] [UUUUU]
unused devices: <none>
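A note on that mail: SparesMissing is most likely not disk trouble at all. The `ARRAY` line in /etc/mdadm/mdadm.conf quoted earlier declares `spares=1`, but the rebuilt array is a plain 5-of-5 RAID5 with no hot spare, so the mdadm monitor complains after every restart. Removing the stale `spares=1` token (or regenerating the line with `mdadm --detail --scan`) should silence it. A sketch, using the conf line from this thread:

```shell
# The conf line declares a hot spare the array no longer has:
line='ARRAY /dev/md0 metadata=1.2 spares=1 name=NAS-OMV:meinRAID5 UUID=32d3f50c:d25c4482:7835bf0b:90cd4649'
# Drop the stale token; on the real system, edit /etc/mdadm/mdadm.conf and
# then refresh the initramfs (on Debian/OMV: update-initramfs -u).
fixed=$(printf '%s\n' "$line" | sed 's/ spares=[0-9]*//')
printf '%s\n' "$fixed"
```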
Can this solve my problem?
Hi ryecoaaron
Thanks for responding to the thread. I figured it would likely be an mdadm thing. I am wondering if there is a way to stop OMV from starting the array on boot, so I can rescan and start the array manually without it coming up with a failed disk.
Thanks!
Hi,
I am reading this thread just to learn how things work.
As far as I have understood, you wiped the missing drive and then added it to the array again? Have I missed some chapters?
Regards,
Giuseppe Chillemi
Once mine kicked the disk out, it was in a strange state like that, and I had to wipe and re-add it, yes.
...And the drive dropped out of the array after a power loss?
Regards,
Giuseppe Chillemi
Power loss or restart. I popped the disk out, scanned it for errors, and popped it back in, but the system wouldn't add it back to the RAID. Formatted it, and then it was good to go.
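For reference, the "scanned it for errors" step usually means something like the checks below. This is a sketch assuming smartmontools and e2fsprogs are installed and the pulled disk is /dev/sdb as in this thread; the commands are printed rather than run, since the self-test and surface scan take hours.

```shell
# Sketch only; print the typical disk-check commands instead of running them.
checks=$(cat <<'EOF'
smartctl -H /dev/sdb        # overall SMART health verdict
smartctl -t long /dev/sdb   # long surface self-test (takes hours)
badblocks -sv /dev/sdb      # read-only scan for unreadable sectors
EOF
)
printf '%s\n' "$checks"
```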