Clean, degraded Raid (Mirror)
-
- OMV 3.x
- e36Alex
-
-
-
Code
root@HomeServer:~# cat /proc/mdstat
Personalities : [raid1]
md127 : active raid1 sda[2]
      2930135488 blocks super 1.2 [2/1] [U_]
      bitmap: 7/22 pages [28KB], 65536KB chunk

unused devices: <none>
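The [2/1] [U_] field in that mdstat output is what marks the mirror as degraded: two device slots are defined but only one is active, and the underscore shows the missing member. A quick way to pull that field out of a long mdstat (the sample line below is copied from the output above):

```shell
# Sample line taken from the mdstat output in this thread.
# [total/active] = [2/1]: two slots defined, only one member active.
line='md127 : active raid1 sda[2] 2930135488 blocks super 1.2 [2/1] [U_]'
printf '%s\n' "$line" | grep -o '\[[0-9]*/[0-9]*\]'   # -> [2/1]
```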
Code
root@HomeServer:~# blkid
/dev/sdb: UUID="76dcfdac-bd3d-e29e-c9a2-a008d06b251d" UUID_SUB="3fdbfc64-a1a2-6024-44a3-dcf95aaded65" LABEL="HomeServer:main" TYPE="linux_raid_member"
/dev/sda: UUID="76dcfdac-bd3d-e29e-c9a2-a008d06b251d" UUID_SUB="6ebf0f24-e82c-7bbc-0db3-5807a939515f" LABEL="HomeServer:main" TYPE="linux_raid_member"
/dev/sdc1: LABEL="appdata" UUID="24b4e7e3-6459-4dc7-8916-36cd214ef156" TYPE="ext4" PARTUUID="9d15bdde-f79c-4031-829d-a37f87df2838"
/dev/md127: LABEL="data" UUID="a7a73b76-b236-4932-896e-de4dce65e609" TYPE="ext4"
/dev/sdd1: UUID="54f75c86-2b9e-4357-9734-a328dd1b2d8e" TYPE="ext4" PARTUUID="29f7a350-01"
/dev/sdd5: UUID="aef35032-413d-4ca3-a431-4c8503d86784" TYPE="swap" PARTUUID="29f7a350-05"
Code
root@HomeServer:~# fdisk -l | grep "Disk "
Disk /dev/sdb: 2,7 TiB, 3000592982016 bytes, 5860533168 sectors
Disk /dev/sda: 2,7 TiB, 3000592982016 bytes, 5860533168 sectors
Disk /dev/sdc: 465,8 GiB, 500107862016 bytes, 976773168 sectors
Disk identifier: 299B3748-BD6B-448D-A68E-878B393378A4
Disk /dev/md127: 2,7 TiB, 3000458739712 bytes, 5860270976 sectors
Disk /dev/sdd: 14,4 GiB, 15502147584 bytes, 30277632 sectors
Disk identifier: 0x29f7a350
Code
root@HomeServer:~# cat /etc/mdadm/mdadm.conf
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#
# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
# Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
# To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
# used if no RAID devices are configured.
DEVICE partitions

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# definitions of existing MD arrays
ARRAY /dev/md127 metadata=1.2 name=HomeServer:main UUID=76dcfdac:bd3de29e:c9a2a008:d06b251d

# instruct the monitoring daemon where to send mail alerts
-
-
Try the following and post any error reports you get:
mdadm --assemble /dev/md127 /dev/sda /dev/sdb
-
-
We first need to stop the array:
mdadm /dev/md127 --stop
mdadm --assemble /dev/md127 /dev/sda /dev/sdb
-
Code
root@HomeServer:~# mdadm /dev/md127 --stop
mdadm: No action given for /dev/md127 in --misc mode
       Action options must come before device names
root@HomeServer:~# mdadm --assemble /dev/md127 /dev/sda /dev/sdb
mdadm: /dev/sda is busy - skipping
mdadm: Found some drive for an array that is already active: /dev/md127
mdadm: giving up.
Thanks for your help!
-
Sh.. "Action options must come before device names"
mdadm --stop /dev/md127
mdadm --assemble /dev/md127 /dev/sda /dev/sdb
Code
root@HomeServer:~# mdadm --stop /dev/md127
mdadm: Cannot get exclusive access to /dev/md127: Perhaps a running process, mounted filesystem or active volume group?
root@HomeServer:~# mdadm --assemble /dev/md127 /dev/sda /dev/sdb
mdadm: /dev/sda is busy - skipping
mdadm: Found some drive for an array that is already active: /dev/md127
mdadm: giving up.
Sorry, the command line isn't my friend.
-
-
Sorry. Not sure how to proceed.
Maybe somebody else can jump in?
-
The array is degraded but still functioning. So, it is most likely mounted. You could try unmounting it then stopping and reassembling the array.
-
-
-
Unmount it from the command line: umount /dev/md127. If the filesystem is in use, this may fail though.
-
-
Now I rebooted the server. After that I could unmount the RAID from the command line. But I still can't restore the second HDD, even after mounting the RAID again.
-
-
But I don't know what is running on it.
lsof | grep /media/a7a73b76-b236-4932-896e-de4dce65e609
-
Now I rebooted the server. After that I could unmount the RAID from the command line. But I still can't restore the second HDD, even after mounting the RAID again.
What does "can't restore the second hdd" mean? Whenever doing an mdadm command, posting the output of cat /proc/mdstat is very helpful.
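Once a rebuild is actually running, /proc/mdstat also shows its progress. The excerpt below is illustrative text with made-up numbers (not output from this thread), plus a grep that pulls out the progress figure:

```shell
# Illustrative mdstat excerpt for a rebuilding RAID1 (made-up numbers,
# not from this thread): the "recovery" line tracks the resync.
sample='md127 : active raid1 sdb[2] sda[0]
      2930135488 blocks super 1.2 [2/1] [U_]
      [==>..................]  recovery = 12.6% (370000000/2930135488) finish=200.1min speed=213000K/sec'
printf '%s\n' "$sample" | grep -o 'recovery = [0-9.]*%'   # -> recovery = 12.6%
```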
-
-
-
What does "can't restore the second hdd" mean? Whenever doing an mdadm command, posting the output of cat /proc/mdstat is very helpful.
-
I just noticed there was no force flag in the assemble command. I would do the following:
mdadm --stop /dev/md127
mdadm --assemble --force --verbose /dev/md127 /dev/sd[ab]
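If the forced assemble brings the array up but /dev/sdb still does not rejoin, the step this thread never reaches would be adding the disk back by hand. This is a suggestion, not something confirmed by the posts above; a plain --add treats sdb as a fresh disk and resyncs everything from sda, so be sure sda is the good copy first:

```shell
# Suggested follow-up, not confirmed in this thread: put sdb back into
# the mirror. --add triggers a full resync from sda; progress then
# appears as a "recovery = ...%" line in /proc/mdstat.
mdadm --manage /dev/md127 --add /dev/sdb
cat /proc/mdstat
```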