Hi, is it possible to remount a stripe RAID within OMV?
I changed some hardware on my NAS and reinstalled OMV, and now it looks like I need to rebuild my RAID. I thought it would have been a case of just remounting it.
Any ideas?
Cheers
Stripe Raid
- OMV 3.x
- scouseman
-
Normally that is the case. cat /proc/mdstat will tell you more.
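For reference, a healthy listing shows each array on its own md line, e.g. "md127 : active raid0 sdb[0] sdc[1]". A minimal sketch of checking for that, reading mdstat-style text from stdin so the sample input is an assumption rather than your system's output:

```shell
# Count md array entries in mdstat-formatted text read from stdin.
# An assembled array appears as a line like "md127 : active raid0 sdb[0] sdc[1]".
mdstat_arrays() {
    grep -c '^md[0-9][0-9]* :' || true
}

# Hypothetical healthy output: prints 1
printf 'Personalities : [raid0]\nmd127 : active raid0 sdb[0] sdc[1]\n' | mdstat_arrays
# An empty listing like yours: prints 0
printf 'Personalities : [linear]\nunused devices: <none>\n' | mdstat_arrays
```

If the count is 0, nothing was assembled at boot and you have to assemble manually.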
-
Normally that is the case. cat /proc/mdstat will tell you more.
Thanks for your reply, the output from cat /proc/mdstat is:
Personalities : [linear]
unused devices: <none>
-
It isn't finding anything. What is the output of: mdadm --assemble --scan
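As I understand it, --assemble --scan reads ARRAY lines from mdadm.conf (path varies by distro, often /etc/mdadm/mdadm.conf) and otherwise scans devices for superblocks. A rough sketch of the config side, using a made-up ARRAY line as an illustration:

```shell
# List the array names declared in mdadm.conf-style text on stdin.
# The sample ARRAY line below is hypothetical, not from this system.
array_names() {
    awk '$1 == "ARRAY" { print $2 }'
}

# Prints /dev/md/Storage
printf 'ARRAY /dev/md/Storage metadata=1.2 name=openmediavault:Storage\n' | array_names
```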
-
Hi, thanks for your continued help. This is very frustrating; I never realised I was going to cause myself so much trouble by updating my system.
Here is the output:
mdadm: /dev/md/Storage assembled from 1 drive - not enough to start the array.
mdadm: No arrays found in config file or automatically
-
What is the output of:
cat /proc/mdstat
fdisk -l | grep "Disk "
blkid
-
Hi, Here are the outputs you requested
blkid
/dev/sdb: UUID="56736c28-5fa9-c36e-0779-b651b442ba26" UUID_SUB="b15e50ba-d330-ae24-caad-267df7d0be80" LABEL="openmediavault:Storage" TYPE="linux_raid_member"
/dev/sda1: UUID="2dc0c5bd-59ff-45f5-84fb-27d402ce24e2" TYPE="ext4" PARTUUID="07ff860a-01"
/dev/sda5: UUID="0eea45bd-6c52-4e33-a7aa-5fae5099485d" TYPE="swap" PARTUUID="07ff860a-05"
fdisk -l | grep "Disk "
Disk /dev/sdb: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Disk /dev/sda: 232.9 GiB, 250059350016 bytes, 488397168 sectors
Disk identifier: 0x07ff860a
cat /proc/mdstat
Personalities : [linear]
unused devices: <none>
Cheers
Mark
-
Did you check your cables? Maybe the drive failed?
-
The drive appears in the OMV physical drive section and I can add the drive to a windows pc and scan it using data recovery software. It is definitely working fine.
I guess you think I will have to start again.
-
fdisk only shows one data drive. You see two data drives in OMV web interface? Post a pic.
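You can also count md members straight from blkid: member devices carry TYPE="linux_raid_member". A small sketch filtering blkid-style text (the sample lines are shortened versions of the output posted above):

```shell
# Print the devices whose blkid line marks them as md members.
raid_members() {
    grep 'TYPE="linux_raid_member"' | cut -d: -f1
}

# Prints /dev/sdb only
printf '%s\n' \
  '/dev/sdb: LABEL="openmediavault:Storage" TYPE="linux_raid_member"' \
  '/dev/sda1: TYPE="ext4"' | raid_members
```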
-
The RAID was only built with one data drive, the Toshiba one. I created a stripe RAID as I intended to expand it in the next couple of months. It was my intention to back up the RAID to an external hard drive, but I have not got around to setting that up yet.
https://s22.postimg.org/7lbu97dtd/drives.png
Cheers
Mark
-
Ah, you never said you built a degraded array. mdadm --assemble --verbose --force /dev/md127 /dev/sdb might start it.
-
Hi, sorry, I never realised it was important.
I ran your command and got the following response. Is it usually so difficult to remount a degraded array? Would you advise against this sort of setup in the future? I was just looking to build a RAID setup which I could add to later without causing loads of problems.
mdadm: looking for devices for /dev/m127
mdadm: cannot open device /dev/sdbmdadm: No such file or directory
mdadm: /dev/sdbmdadm has no superblock - assembly aborted
Cheers
-
Looks like you have a typo. Did you type /dev/sdbmdadm at the end of the command or /dev/sdb?
-
Hi, Sorry this is going on for so long.
mdadm --assemble --verbose --force /dev/m127 /dev/sdb
mdadm: looking for devices for /dev/m127
mdadm: /dev/m127 is an invalid name for an md device. Try /dev/md/m127
So I tried:
mdadm --assemble --verbose --force /dev/md/m127 /dev/sdb
mdadm: looking for devices for /dev/md/m127
mdadm: /dev/sdb is identified as a member of /dev/md/m127, slot 1.
mdadm: no uptodate device for slot 0 of /dev/md/m127
mdadm: added /dev/sdb to /dev/md/m127 as 1
mdadm: /dev/md/m127 assembled from 1 drive - not enough to start the array.
-
Typo on my part (should be /dev/md127) but it wouldn't change the outcome of the suggested fix.
I assume this was a raid 1 array? If it was part of a raid 0 array, I didn't even think you could do that.
-
I do wish I could delete this thread and start again; I have made a complete mess of it. Please accept my apologies for wasting your time, this thread is very embarrassing.
I found there were two drives in the RAID after all, not sure what made me think there was only one. You are correct, it is not possible to use one drive.
cat /proc/mdstat:
Personalities : [raid0] [linear]
md127 : inactive sdb[0](S)
976631512 blocks super 1.2
fdisk -l | grep "Disk "
The primary GPT table is corrupt, but the backup appears OK, so that will be used.
Disk /dev/sda: 232.9 GiB, 250059350016 bytes, 488397168 sectors
Disk identifier: 0x07ff860a
Disk /dev/sdc: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Disk /dev/sdb: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Disk identifier: A36F5CFC-A3BB-4DD2-9BCE-84198569BD05
blkid
/dev/sda1: UUID="2dc0c5bd-59ff-45f5-84fb-27d402ce24e2" TYPE="ext4" PARTUUID="07ff860a-01"
/dev/sda5: UUID="0eea45bd-6c52-4e33-a7aa-5fae5099485d" TYPE="swap" PARTUUID="07ff860a-05"
/dev/sdc: UUID="56736c28-5fa9-c36e-0779-b651b442ba26" UUID_SUB="b15e50ba-d330-ae24-caad-267df7d0be80" LABEL="openmediavault:Storage" TYPE="linux_raid_member"
/dev/sdb: UUID="56c23aef-a986-b046-918a-0e5721dce091" UUID_SUB="c4a613eb-e343-289a-13a5-343d34954190" LABEL="openmediavault:0" TYPE="linux_raid_member"
mdadm --assemble --verbose --force /dev/md127 /dev/sdb
mdadm: looking for devices for /dev/md127
mdadm: /dev/sdb is busy - skipping
-
Don't worry about it.
mdadm --stop /dev/md127
mdadm --assemble --verbose --force /dev/md127 /dev/sd[bc]
-
Hi, thanks for your continued patience on this. Is it usually this hard?
mdadm --stop /dev/md127
mdadm: error opening /dev/md127: No such file or directory
mdadm --assemble --verbose --force /dev/md127 /dev/sd[bc]
mdadm: looking for devices for /dev/md127
mdadm: superblock on /dev/sdc doesn't match others - assembly aborted
-
is it usually this hard?
Yep. I have no idea why mdadm has issues on some systems but none on others. Was this raid 0?
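For what it's worth, the blkid output above already hints at why assembly aborted: /dev/sdb and /dev/sdc report different array UUIDs (and different labels, openmediavault:0 vs openmediavault:Storage), so as far as mdadm is concerned they belong to two different arrays; mdadm --examine on each member should show the same mismatch. A tiny sketch of that check, using the UUIDs copied from the blkid output above:

```shell
# Compare the array UUIDs blkid reported for the two members.
sdb_uuid="56c23aef-a986-b046-918a-0e5721dce091"   # from blkid /dev/sdb
sdc_uuid="56736c28-5fa9-c36e-0779-b651b442ba26"   # from blkid /dev/sdc

# Prints "different arrays" for these two values
if [ "$sdb_uuid" = "$sdc_uuid" ]; then
    echo "same array"
else
    echo "different arrays"
fi
```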