Your syntax looks wrong for the mirror array. What is the output of cat /proc/mdstat and blkid now?
Recreate RAID10 after OS disk failure
-
- OMV 1.0
- ecastellani
-
-
Quote
cat /proc/mdstat
Personalities : [raid10] [raid1] [raid0]
unused devices: <none>
and
Quote
blkid
/dev/sda1: UUID="c17401d3-1d95-42d5-acc4-e0e64cdf0927" TYPE="ext4"
/dev/sda5: UUID="0eb849ea-8a2b-4f6d-8618-a9a38cddcc9b" TYPE="swap"
/dev/sdb1: UUID="0c0bc765-b7aa-4532-98eb-d4cb83d21b0e" TYPE="ext4"
/dev/sdb5: UUID="f163d7c5-e228-4306-9f37-09bceb734ba1" TYPE="swap"
but I suppose I have to assemble the stripes first (at least one) and then the mirror..
well, I would be happy to succeed just with the first stripe array, so I could mount the FS and backup all..
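For what it's worth, mdadm's raid10 is a single array level, not nested stripes and mirrors, so assembly takes all members in one command. A minimal sketch, assuming the four data disks are sdb..sde (device names are an assumption; adjust to the real members):

```shell
# mdadm raid10 is one array level; there is no separate stripe/mirror
# assembly step. Pass all member disks at once (device names assumed).
mdadm --assemble --verbose --force /dev/md127 /dev/sdb /dev/sdc /dev/sdd /dev/sde
```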
-
I would try assembling both stripes.
-
First:
Quote
root@nasino:~# mdadm --assemble /dev/md127 /dev/sdb /dev/sdc --verbose --force
mdadm: looking for devices for /dev/md127
mdadm: Cannot assemble mbr metadata on /dev/sdb
mdadm: /dev/sdb has no superblock - assembly aborted
Second:
Quote
root@nasino:~# mdadm --assemble /dev/md127 /dev/sdd /dev/sde --verbose --force
mdadm: looking for devices for /dev/md127
mdadm: no recogniseable superblock on /dev/sdd
mdadm: /dev/sdd has no superblock - assembly aborted
ehmm.. ?
-
I guess your array is linear. modprobe linear and try again.
-
no news, only linear added to Personalities:
cat /proc/mdstat
Personalities : [raid10] [raid1] [raid0] [linear]
unused devices: <none>
(of course I tried all the assemble commands again)
-
Maybe try a different combination of drives? I'm not sure. It doesn't look good when it says it can't find a superblock, though.
-
tried all combinations: pairs, single drives, three at a time..
the refrain is
Quote
no recogniseable superblock
..
-
Hi!
Are the additional disks directly attached to the system or are they external?
After booting, if you type lsmod | grep raid, are the modules displayed? If not, you may need to make sure that they are built into your ramdisk. Run update-initramfs -u -k all; then reboot the machine.
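A sketch of that procedure, assuming a Debian-based install with initramfs-tools (the module list matches the personalities mentioned in this thread):

```shell
# Load the md personalities the array needs (run as root).
modprobe raid10
modprobe raid1
modprobe raid0

# List them in /etc/initramfs-tools/modules so update-initramfs
# builds them into the ramdisk for the next boot.
printf '%s\n' raid10 raid1 raid0 >> /etc/initramfs-tools/modules

# Rebuild the initramfs for all installed kernels, then reboot.
update-initramfs -u -k all
reboot
```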
-
I just performed a search for your hardware, so forget my question about drive locations.
Can you get me the output of mdadm --examine /dev/sd[abcde]* >> mdadm_examine_ecastellani.txt and also mdadm --examine /dev/sd[apcde]* | egrep 'Event|/dev/sd' >> mdadm_examine_event_ecastellani.txt?
-
after a reboot, lsmod | grep raid gives nothing
so I run:
Quote
~# update-initramfs -u -k all
update-initramfs: Generating /boot/initrd.img-3.2.0-4-amd64
W: mdadm: /etc/mdadm/mdadm.conf defines no arrays.
W: mdadm: no arrays defined in configuration file.
The examine:
mdadm --examine /dev/sd[abcde]*
/dev/sda:
MBR Magic : aa55
Partition[0] : 3890780160 sectors at 2048 (type 83)
Partition[1] : 16244738 sectors at 3890784254 (type 05)
mdadm: No md superblock detected on /dev/sda1.
/dev/sda2:
MBR Magic : aa55
Partition[0] : 16244736 sectors at 2 (type 82)
mdadm: No md superblock detected on /dev/sda5.
mdadm: No md superblock detected on /dev/sdb.
mdadm: No md superblock detected on /dev/sdc.
mdadm: No md superblock detected on /dev/sdd.
/dev/sde:
MBR Magic : aa55
Partition[0] : 960524288 sectors at 2048 (type 83)
Partition[1] : 16242690 sectors at 960528382 (type 05)
mdadm: No md superblock detected on /dev/sde1.
/dev/sde2:
MBR Magic : aa55
Partition[0] : 16242688 sectors at 2 (type 82)
mdadm: No md superblock detected on /dev/sde5.
and
# mdadm --examine /dev/sd[apcde]* | egrep 'Event|/dev/sd'
mdadm: No md superblock detected on /dev/sda1.
mdadm: No md superblock detected on /dev/sda5.
mdadm: No md superblock detected on /dev/sdc.
mdadm: No md superblock detected on /dev/sdd.
mdadm: No md superblock detected on /dev/sde1.
mdadm: No md superblock detected on /dev/sde5.
/dev/sda:
/dev/sda2:
/dev/sde:
/dev/sde2:
I rebooted again but there is no change in lsmod or examine..
-
I forgot to mention that the modules should be loaded before you run the update-initramfs command.
This is the result of my examine command:
/dev/sda:
MBR Magic : aa55
Partition[0] : 497664 sectors at 2048 (type 83)
Partition[1] : 62029826 sectors at 501758 (type 05)
/dev/sda2:
MBR Magic : aa55
Partition[0] : 62029824 sectors at 2 (type 83)
/dev/sdb:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : cfb1a241:b1f4a374:e84097c2:3f55fa82
Name : openmediavault:mdomv01 (local to host openmediavault)
Creation Time : Fri Apr 24 00:37:06 2015
Raid Level : raid10
Raid Devices : 4

Avail Dev Size : 1250001584 (596.05 GiB 640.00 GB)
Array Size : 1250000896 (1192.09 GiB 1280.00 GB)
Used Dev Size : 1250000896 (596.05 GiB 640.00 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
State : clean
Device UUID : 5c3de8f7:6be5630e:3c300ddc:867f3d67

Update Time : Fri Apr 24 14:03:23 2015
Checksum : 5eaf156e - correct
Events : 19

Layout : near=2
Chunk Size : 512K

Device Role : Active device 0
Array State : AAAA ('A' == active, '.' == missing)
/dev/sdc:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : cfb1a241:b1f4a374:e84097c2:3f55fa82
Name : openmediavault:mdomv01 (local to host openmediavault)
Creation Time : Fri Apr 24 00:37:06 2015
Raid Level : raid10
Raid Devices : 4

Avail Dev Size : 1250001584 (596.05 GiB 640.00 GB)
Array Size : 1250000896 (1192.09 GiB 1280.00 GB)
Used Dev Size : 1250000896 (596.05 GiB 640.00 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
State : clean
Device UUID : 05e1ff5b:fcf4d640:a970417d:e8fe62f7

Update Time : Fri Apr 24 14:03:23 2015
Checksum : 26938878 - correct
Events : 19

Layout : near=2
Chunk Size : 512K

Device Role : Active device 1
Array State : AAAA ('A' == active, '.' == missing)
/dev/sdd:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : cfb1a241:b1f4a374:e84097c2:3f55fa82
Name : openmediavault:mdomv01 (local to host openmediavault)
Creation Time : Fri Apr 24 00:37:06 2015
Raid Level : raid10
Raid Devices : 4

Avail Dev Size : 1250001584 (596.05 GiB 640.00 GB)
Array Size : 1250000896 (1192.09 GiB 1280.00 GB)
Used Dev Size : 1250000896 (596.05 GiB 640.00 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
State : clean
Device UUID : d7af56db:13b36800:b4306fc6:ed074843

Update Time : Fri Apr 24 14:03:23 2015
Checksum : fa8ede71 - correct
Events : 19

Layout : near=2
Chunk Size : 512K

Device Role : Active device 2
Array State : AAAA ('A' == active, '.' == missing)
/dev/sde:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : cfb1a241:b1f4a374:e84097c2:3f55fa82
Name : openmediavault:mdomv01 (local to host openmediavault)
Creation Time : Fri Apr 24 00:37:06 2015
Raid Level : raid10
Raid Devices : 4

Avail Dev Size : 1250001584 (596.05 GiB 640.00 GB)
Array Size : 1250000896 (1192.09 GiB 1280.00 GB)
Used Dev Size : 1250000896 (596.05 GiB 640.00 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
State : clean
Device UUID : dd19ae9b:51b2b129:c9de9d70:2fee926a

Update Time : Fri Apr 24 14:03:23 2015
Checksum : b5a8dc0d - correct
Events : 19

Layout : near=2
Chunk Size : 512K

Device Role : Active device 3
Array State : AAAA ('A' == active, '.' == missing)
and for my events check:
mdadm --examine /dev/sd[abcde]* | egrep -i 'Event|/dev/sd'
mdadm: No md superblock detected on /dev/sda1.
mdadm: No md superblock detected on /dev/sda5.
/dev/sda:
/dev/sda2:
/dev/sdb:
Events : 19
/dev/sdc:
Events : 19
/dev/sdd:
Events : 19
/dev/sde:
Events : 19
Also, as you may have noticed, your drives don't have any RAID metadata, so they will probably not be combined into an array, since there is no information on how to reassemble them. Had there been metadata, the Events counters would have shown whether writes had diverged between the members, i.e. whether the integrity of the array had been altered.
Correct me if I'm wrong, but is /dev/sde your new boot disk? If the RAID failure didn't cause you to shuffle the physical order of the drives, you should power off or remove the members of the RAID array, make sure that your boot/OS disk comes up as /dev/sda, and then make the other drives available again. If the /dev/sda allocation doesn't persist, you should make it persistent via udev.
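One caveat: udev does not rename kernel sdX nodes, but it can attach a stable symlink to a specific drive. A hypothetical rule (the file name is arbitrary and the serial is a placeholder; udevadm info --query=property /dev/sdX shows the real ID_SERIAL):

```
# /etc/udev/rules.d/99-bootdisk.rules  (hypothetical file name and serial)
# Adds a stable /dev/bootdisk symlink for the OS drive, whatever sdX
# name the kernel happens to assign at boot.
KERNEL=="sd?", SUBSYSTEM=="block", ENV{ID_SERIAL}=="REPLACE_WITH_BOOT_DISK_SERIAL", SYMLINK+="bootdisk"
```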
I have to go out but will be back later in the day.
-
well, I loaded the modules (modprobe raid0, modprobe raid1, modprobe linear) but the update command didn't show any difference.
and the examine output is exactly the same as before..
the boot disk is sda, the 500 GB hard drive
the HP ML150 has one bay for CDROM/HardDrive that I use for the sda and four bays for the RAID hard drives.
The only change I made was to remove the usb drive, which (according to a friend) should have worked fine because I don't use the NAS intensively (once or twice per week, always in stand-by mode, just to keep the photo archive created with Lightroom safe).. well, the usb pen is almost dead..
So I added the new hard disk and simply reinstalled OMV..
-
oops, I need to correct my previous statement: the boot disk SHOULD be, or USED to be, sda; now it appears to be sde..
Quote
If the /dev/sda allocation doesn't persist then you should make it persistent via udev.
well, I understand the meaning, but I'm not able to do it myself..
-
First, here is what I understand from everything you wrote:
You do not have a RAID 10.
It looks like you have two RAID 1s (one for ext4 and one for swap) (whatever the reason might be to put swap on the RAID ...).
My best advice would be:
- Power down your system
- unplug all raid drives
- power on with only the boot device attached
- reinstall OMV
- try again to see your RAID.
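If the array does come back after the reinstall, the earlier update-initramfs warning ("mdadm.conf defines no arrays") is worth addressing so assembly survives reboots. A sketch, run as root once the array is assembled:

```shell
# Capture the running array definition into mdadm's config, then
# rebuild the initramfs so the array assembles at boot.
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u -k all
```

Judging from the examine output earlier in the thread, the appended ARRAY line would presumably carry UUID cfb1a241:b1f4a374:e84097c2:3f55fa82 and the name openmediavault:mdomv01.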