What are the next steps?
Will the raid stop?
What's the output of blkid?
Have you got anything running like Plex, Emby, or minidlna? Anything that requires access to the array.
blkid
/dev/sdb: UUID="3ca82093-31bd-aba6-e3fc-2da78924c663" UUID_SUB="6b249649-ad3c-9a85-6dda-5b49a53af0ae" LABEL="Zangs-NAS:Zangs" TYPE="linux_raid_member"
/dev/md127: LABEL="Raid" UUID="b090d450-8a3c-4b13-845c-b1876a8d7174" TYPE="ext4"
/dev/sda: UUID="3ca82093-31bd-aba6-e3fc-2da78924c663" UUID_SUB="931edf09-fb4e-0c6c-18d0-3503a69a0410" LABEL="Zangs-NAS:Zangs" TYPE="linux_raid_member"
/dev/sdc1: UUID="E620-721A" TYPE="vfat" PARTUUID="9d5b7707-65db-433f-9b6a-c2cbb79b5dd8"
/dev/sdc2: UUID="0ce16ee2-727d-4ac2-8e28-13fbf48155ab" TYPE="ext4" PARTUUID="5cfcfa16-fe17-4afb-9424-954b3708bc37"
/dev/sdc3: UUID="ec420058-796e-406f-962b-66e7cae4fd39" TYPE="swap" PARTUUID="bab0249f-4d80-404f-85a8-6a10f9139157"
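The blkid output already identifies the array: /dev/sda and /dev/sdb carry the same array UUID with TYPE="linux_raid_member", and /dev/md127 is the assembled ext4 filesystem on top of them. A quick sketch for pulling the member devices out of captured blkid output (here a shortened sample; on a live system, pipe blkid directly):

```shell
# Filter RAID member devices from blkid output.
# Sample lines captured above, UUIDs shortened for readability.
blkid_output='/dev/sdb: UUID="3ca82093" TYPE="linux_raid_member"
/dev/md127: LABEL="Raid" UUID="b090d450" TYPE="ext4"
/dev/sda: UUID="3ca82093" TYPE="linux_raid_member"'

# Split on ":" and keep the device name of every raid member line.
members=$(printf '%s\n' "$blkid_output" | awk -F: '/linux_raid_member/ {print $1}')
printf '%s\n' "$members"
```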
Yes, plex, nfs, smb, rsync-server.
Stop plex, then try to stop the array. The output from the above shows sda and sdb as your raid array drives.
Plex stopped. The raid still does not stop.
I still think this is related to Plex; something has access to that array, which is why it won't stop, even though you've stopped the service. In the GUI, under Diagnostics -> System Information, is there anything that points to Plex?
Aaahhh!! That shows nothing. OK, another option: lsof /dev/md127. I found that searching, so hope it returns something.
lsof /dev/md127
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
smbd 2229 root cwd DIR 9,127 4096 109838337 /srv/dev-disk-by-label-Raid/temp
smbd 2229 root 9r DIR 9,127 4096 46923777 /srv/dev-disk-by-label-Raid/Video
smbd 2229 root 32r DIR 9,127 4096 46923777 /srv/dev-disk-by-label-Raid/Video
smbd 2229 root 33r DIR 9,127 4096 57147393 /srv/dev-disk-by-label-Raid/AlexZ
smbd 2229 root 34r DIR 9,127 4096 57147393 /srv/dev-disk-by-label-Raid/AlexZ
smbd 2229 root 36r DIR 9,127 4096 35784202 /srv/dev-disk-by-label-Raid/Bilder/alex-desktop/Bilder/2018-05-13
smbd 2229 root 37r DIR 9,127 4096 35784202 /srv/dev-disk-by-label-Raid/Bilder/alex-desktop/Bilder/2018-05-13
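Worth noting: the lsof output above shows smbd (PID 2229) holding its working directory and several open directories on the array, and even one such open handle is enough to make mdadm refuse to stop it. A sketch for extracting the offending PIDs from captured lsof output (live usage would be `lsof /dev/md127 | awk ...`):

```shell
# Pull the unique PIDs out of lsof output, skipping the header line.
# Sample is a shortened copy of the output captured above.
lsof_output='COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
smbd 2229 root cwd DIR 9,127 4096 109838337 /srv/dev-disk-by-label-Raid/temp
smbd 2229 root 9r DIR 9,127 4096 46923777 /srv/dev-disk-by-label-Raid/Video'

pids=$(printf '%s\n' "$lsof_output" | awk 'NR > 1 {print $2}' | sort -u)
echo "$pids"
```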
That shows nothing. Reading through my notes, there is one option I have never tried nor needed to use: umount -l /dev/md127
umount -l /dev/md127
mdadm --stop /dev/md127
mdadm: Cannot get exclusive access to /dev/md127:Perhaps a running process, mounted filesystem or active volume group?
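That error is mdadm's generic "device is busy" message. The usual teardown order is: stop every service that touches the mount, unmount the filesystem, then stop the array. A sketch of that sequence (the exact systemd unit names are assumptions based on the services mentioned in this thread; the commands are printed here as a dry run rather than executed):

```shell
# Typical teardown order before stopping an md array.
# Unit names (smbd, nmbd, nfs-kernel-server, rsync, plexmediaserver) are
# assumptions; confirm yours with `systemctl list-units`. Dry run only.
teardown='systemctl stop smbd nmbd nfs-kernel-server rsync plexmediaserver
umount /srv/dev-disk-by-label-Raid
mdadm --stop /dev/md127'

printf '%s\n' "$teardown"
```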
I really am at a loss; none of this is making sense. The only logical option is to reboot and see if it comes back up clean.
Strange. Nothing has changed after a reboot.
Let's continue tomorrow
cat /proc/mdstat
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md127 : active raid1 sdb[2]
1953383512 blocks super 1.2 [2/1] [U_]
bitmap: 10/15 pages [40KB], 65536KB chunk
unused devices: <none>
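Reading the mdstat lines: [2/1] means the array expects two devices but only one is active, and [U_] means the first slot is up while the second is missing, so the mirror is running degraded on sdb alone. A sketch for pulling that counter out of captured mdstat text:

```shell
# Extract the [expected/active] device counter from mdstat output.
# Sample is a shortened copy of the /proc/mdstat shown above.
mdstat='md127 : active raid1 sdb[2]
      1953383512 blocks super 1.2 [2/1] [U_]'

# The pattern requires a "/", so it skips role numbers like sdb[2].
state=$(printf '%s\n' "$mdstat" | grep -o '\[[0-9]*/[0-9]*\]')
echo "$state"
```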
TBH I don't know where to start. It's not recognising sda and adding it back to the array, the raid will not stop, and there is no information as to what has access to the array preventing it from stopping. This appears (and I mean appears) to be some sort of corruption somewhere.
The only other option I can think of is the systemrescuecd option in OMV-Extras -> Kernel tab, but I'll have to search to find out if it can be done from there.
There is one more option I know of: pull the hard disk out of the computer. The raid is then automatically set to "inactive" and can be stopped. Afterwards the RAID should start normally:
mdadm --assemble /dev/md127 /dev/sd[ab]
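Before reassembling, it's worth confirming that both members still carry a matching md superblock; mdadm --examine reads it directly from each disk. A sketch using the device names as they appeared at that point in the thread (printed as a dry run, not executed):

```shell
# Check each member's md superblock before reassembly (dry run: commands
# are echoed, not executed; device names taken from the thread).
checks=$(for dev in /dev/sda /dev/sdb; do
  echo "mdadm --examine $dev"
done)
printf '%s\n' "$checks"
```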
Other suggestions?
Possibly, and it's worth a try (a suggestion from someone else): turn off/disable everything (docker, smb, nfs, rsync server, whatever is running), then reboot. This will kill any running PIDs and technically start OMV with just the raid running.
I had already tried that; nothing helped.
What I had before did not work either.
Now I have restarted the computer with both old disks. The status after the restart was:
cat /proc/mdstat
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md127 : active raid1 sdc[2]
1953383512 blocks super 1.2 [2/1] [U_]
bitmap: 10/15 pages [40KB], 65536KB chunk
unused devices: <none>
Added the second old disk.
The sync actually took less than a minute to complete.
cat /proc/mdstat
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md127 : active raid1 sdb[1] sdc[2]
1953383512 blocks super 1.2 [2/1] [U_]
[=>...................] recovery = 5.7% (113283200/1953383512) finish=163.9min speed=187093K/sec
bitmap: 10/15 pages [40KB], 65536KB chunk
unused devices: <none>
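The recovery estimate above is internally consistent: mdstat counts blocks in 1 KiB units, so the remaining blocks divided by the reported speed give the finish time. A quick arithmetic check:

```shell
# Verify finish=163.9min from the recovery line:
# remaining KiB / (KiB per second) = seconds left.
total=1953383512        # total blocks (1 KiB each)
done_kib=113283200      # blocks already recovered (the 5.7% figure)
speed=187093            # K/sec reported by mdstat
secs=$(( (total - done_kib) / speed ))
mins=$(( secs / 60 ))
echo "${mins} min"      # integer division, matching the ~163.9 min estimate
```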
Current status is:
cat /proc/mdstat
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md127 : active raid1 sdb[1] sdc[2]
1953383512 blocks super 1.2 [2/2] [UU]
bitmap: 0/15 pages [0KB], 65536KB chunk
unused devices: <none>
Can we do everything via the CLI now? At least the error messages are visible there!
OK, the drive references have changed; they are now /dev/sd[bc].
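The letter shuffle (sda becoming sdc) is harmless to md: members are identified by the superblock UUID, not the device name. When assembling manually, the array UUID from the blkid output is the stable handle. A dry-run sketch:

```shell
# Assemble by array UUID instead of volatile device letters (dry run:
# the command is printed, not executed). UUID taken from the blkid
# output earlier in the thread.
cmd='mdadm --assemble --scan --uuid=3ca82093-31bd-aba6-e3fc-2da78924c663'
echo "$cmd"
```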
Post the output of blkid and mdadm --detail --scan --verbose /dev/md127