umount /dev/md127
What's that, did I say umount the array? -> NO
The first option is to remove a drive using the GUI.
Sorry, I did not express myself clearly. I am following your instructions.
Of course I tried it without unmounting. The error message was the same.
Everything back?
What's the output of cat /proc/mdstat
The error code implies something is being used;
mdadm --detail /dev/md127
mdadm --detail /dev/md127
/dev/md127:
Version : 1.2
Creation Time : Sat Nov 11 17:45:46 2017
Raid Level : raid1
Array Size : 1953383512 (1862.89 GiB 2000.26 GB)
Used Dev Size : 1953383512 (1862.89 GiB 2000.26 GB)
Raid Devices : 2
Total Devices : 1
Persistence : Superblock is persistent
Intent Bitmap : Internal
Update Time : Tue Jul 30 20:16:52 2019
State : clean, degraded
Active Devices : 1
Working Devices : 1
Failed Devices : 0
Spare Devices : 0
Name : Zangs-NAS:Zangs
UUID : 3ca82093:31bdaba6:e3fc2da7:8924c663
Events : 14734
Number Major Minor RaidDevice State
- 0 0 0 removed
1 8 16 1 active sync /dev/sdb
Ok, that tells us that a drive has been removed and sdb is working; it shows the raid as clean, degraded. When you ran --stop before, was this after a reboot?
I have not rebooted yet.
I know, but when you ran --stop here, was this after a reboot? -> I think it was.
Yes, after a reboot. But also only after the new disk was involved in the RAID. The RAID was then inactive and could be stopped.
I suspect that when removing a disk from the RAID via the GUI something went wrong and a service is still trying to access the RAID.
Ok your data is still there on /dev/sdb.
The error implies that something is accessing the array, or OMV doesn't like something in the config. I assume you have added the new drive and wiped it; make a note of its drive reference /dev/sd[?] and reboot. Once back up, check cat /proc/mdstat, then try and stop the array.
At present I am at a loss as to why the array won't stop now, because it should.
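As a hedged aside (these commands are not from the thread, just standard checks), you can see what might still be holding the array open before trying to stop it:
grep md127 /proc/mounts    # shows whether a filesystem on the array is still mounted
fuser -vm /dev/md127       # lists processes using the device or its mount point
lsof /dev/md127            # lists open handles on the device node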
Unfortunately, the RAID could not be stopped even after a reboot...
I did another reboot with another old disk. Now the raid is gone.
mdadm --detail /dev/md127
/dev/md127:
Version : 1.2
Raid Level : raid0
Total Devices : 1
Persistence : Superblock is persistent
State : inactive
Name : Zangs-NAS:Zangs
UUID : 3ca82093:31bdaba6:e3fc2da7:8924c663
Events : 14721
Number Major Minor RaidDevice
- 8 0 - /dev/sda
Now I can stop the raid
mdadm --stop /dev/md127
mdadm: stopped /dev/md127
Now I can start the RAID
mdadm --assemble --run /dev/md127 /dev/sda
mdadm: /dev/md127 has been started with 1 drive (out of 2).
current status is:
cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md127 : active raid1 sda[2]
1953383512 blocks super 1.2 [2/1] [U_]
bitmap: 4/15 pages [16KB], 65536KB chunk
unused devices: <none>
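For readability, here is the same mdstat line again with annotations added (the comments are mine, not part of the /proc/mdstat output):
md127 : active raid1 sda[2]
      1953383512 blocks super 1.2 [2/1] [U_]   # [2/1]: 2 device slots, 1 active; [U_]: one member up, one missing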
Next step would be to stop the RAID, right? Can we continue tomorrow?
Ok your last post has disappeared
It will come back. I have edited the post too often, so now it has to be approved by a moderator.
Thank you so much!
Until tomorrow
Until tomorrow
Ok
Ok, we need to go back to square one. I am making no sense of the above; this should all be simple and straightforward, and what I don't want to happen is for us to get frustrated. What would help is for me to take some output details and copy and paste them into a word doc; this will help me go back and review. To do that you'll need to put the two original RAID drives back in and post the output of the first 5 options from here
If this continues to fail then it has to be something related to the hardware, which BTW I haven't asked about; perhaps some detail on that might help.
My apologies for this, but it should just work
Hello,
I am happy to go along with that, and I am grateful for your help. There is no reason to apologize.
It just got late yesterday and I was tired.
As far as I understand, I have to go back to the original RAID. That means putting the old disks back in and running two commands that should not trigger a resync again:
mdadm --stop /dev/md127
mdadm --assemble --run /dev/md127 /dev/sda /dev/sdb
Assuming the disks sda and sdb are the RAID disks.
Right? I will go step by step ...
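As an extra check that is only a suggestion and not part of the instructions above, the member disks can be confirmed before assembling, so /dev/sda and /dev/sdb are not just guesses:
mdadm --examine /dev/sda /dev/sdb    # both should report the Array UUID 3ca82093:31bdaba6:e3fc2da7:8924c663
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT   # quick overview of the disks and anything mounted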
As far as I understand I have to go back to the original RAID.
Yes, you should be able to shut down and add the removed drive back; the RAID should come back up clean. Only run the reassemble, mdadm --assemble /dev/md127 /dev/sd[ab], if it doesn't come back up clean.
OK, I have the old disk installed and restarted.
Status after restart:
cat /proc/mdstat
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md127 : active raid1 sdb[2]
1953383512 blocks super 1.2 [2/1] [U_]
bitmap: 10/15 pages [40KB], 65536KB chunk
unused devices: <none>
Then I ran the command
mdadm --assemble /dev/md127 /dev/sd[ab]
and this error comes back:
mdadm: /dev/sdb is busy - skipping
mdadm: Found some drive for an array that is already active: /dev/md/Zangs
mdadm: giving up.
I think I have to stop the RAID first. What are the next steps?
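A hedged sketch of the sequence already used earlier in the thread, assuming the array really does have to be stopped before both members can be assembled (the data stays on the member disks either way):
mdadm --stop /dev/md127                    # release the running, degraded array
mdadm --assemble /dev/md127 /dev/sd[ab]    # reassemble with both member disks
If --stop again reports that the device is busy, whatever is mounted on or still using /dev/md127 has to be released first.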