Yeah .... working on it .... but the problem was having two SATA cards with the RAID spread across both of them. I have an 8-port SATA card now and no more problems so far.
Posts by psolimando
-
OK (as it says, Beginner), just an update for some background. It looks like, in order to have a boot drive plus 4 RAID drives, I did something unwise and put the last of the 4 RAID drives on a different SATA controller. That seems to be the drive that falls out all the time. Thanks to Amazon, I have an 8-port SATA card coming today. Will see how that works out.
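For anyone trying to work out which drive hangs off which controller, something like this should show it (standard Debian tools, nothing OMV-specific; the device names are whatever your system uses):
# map each disk to the SATA/PCI path it is attached to
ls -l /dev/disk/by-path/
# show model/serial/size so those paths can be matched to the physical drives
lsblk -o NAME,MODEL,SERIAL,SIZE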
-
Yeah, that is what I was afraid of, that the one SATA port might be bad ..... I am not sure, but I believe that SATA port caused a drive problem once before. Looks like I may have to get more in-depth with this problem. This is an old motherboard I resurrected for my backup server.
May have to just go and get a new SATA controller and use that instead of the on-board motherboard controller.
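Before buying a new controller, a rough way to check whether a port or cable is the culprit (this assumes smartmontools is installed; /dev/sde is just an example device):
# kernel messages about link resets or failed commands on the suspect port
dmesg | grep -iE 'ata[0-9]+.*(error|reset|link)'
# CRC errors in SMART usually point at the cable or port rather than the drive itself
smartctl -A /dev/sde | grep -i UDMA_CRC_Error_Count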
-
Well, OMV dropped the drive out of the RAID ..... so I took OMV down (shutdown) and replaced the drive.
Brought the system back up, wiped the new drive, and added it to the RAID. It started rebuilding the array and took about 24 hours to do so. This morning the replacement drive was in the RAID and the system looked normal. I saw my shared folders and Docker was up, but none of my containers were there. Not a big deal because I saved all my stuff and can rebuild it. All my data was on the drive, but the permissions looked all screwed up. I did a reset permissions and most of it is back. Looks like I may do some major cleanup on the folders, since this was a learning experience.
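For reference, the command-line equivalent of wiping a replacement drive and adding it back to the array is roughly the following (assuming the new drive shows up as /dev/sde; verify the device name with blkid before wiping anything):
# clear any old filesystem or RAID signatures on the replacement drive
wipefs -a /dev/sde
# add it to the existing array; mdadm starts the rebuild automatically
mdadm /dev/md0 --add /dev/sde
# check that the recovery has started
cat /proc/mdstat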
-
OK, so I realized that for some reason it thinks that WD Red 2TB drive is no good, so I replaced it with a known-good Toshiba 2TB. I tested the hell out of that WD drive and it comes up good every time ?!?!?! Now that my system took 24 hours to rebuild, I see it has gone back to Portainer 1.24.x and my permissions are all screwed up ..... It has been such a pain in the @$$ lately that I have half a mind to start over and reinstall everything. Is there any way I can start over but keep all my users, passwords, and folder structure information so I can set it all up as it was before? Is there a way to get a list of everything, and can I save it somewhere? That way, if I have to keep fighting with the software, at least I can start fresh with all of the Docker containers on Portainer 2.0, because right now it seems like there is a bunch of junk hanging around that is unseen but still causing problems.
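As a sketch of how the users and folder layout could be saved before a reinstall (this assumes OMV keeps its database, including users and shared folders, in /etc/openmediavault/config.xml, that the data drives are mounted under /srv as OMV does by default, and /mnt/backup is just an example destination):
# copy the OMV configuration database somewhere off the system drive
cp /etc/openmediavault/config.xml /mnt/backup/config.xml
# keep a plain-text record of the users and groups as well
getent passwd > /mnt/backup/users.txt
getent group > /mnt/backup/groups.txt
# snapshot the folder structure of the data mounts for reference
find /srv -maxdepth 3 -type d > /mnt/backup/folder-structure.txt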
-
See the attached document. The last time this happened we had a power hit, but I am not aware of any power problems this time.
-
FYI:
The drive finally rebuilt and everything is back up; all my files and containers are there and up and running .....
Thanks for the help!!!!
-
root@openmediavault:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid5 sdb[0] sde[3] sdd[2] sdc[1]
5860147200 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/3] [UUU_]
[>....................] recovery = 0.4% (8384532/1953382400) finish=894.0min speed=36257K/sec
bitmap: 8/15 pages [32KB], 65536KB chunk
unused devices: <none>
Ahhh, there is the status. OK, so it is rebuilding and it is only 0.4% done. I am going to leave this alone until tomorrow ...
Thanks again
-
root@openmediavault:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active (auto-read-only) raid5 sdb[0] sde[3] sdd[2] sdc[1]
5860147200 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/3] [UUU_]
bitmap: 8/15 pages [32KB], 65536KB chunk
So in reading this, I am assuming it shows that only 3 of the 4 drives are there?
If that is so, and the rebuild is done, then that drive does not look like it is back.
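For what it's worth, [4/3] means the array wants 4 devices but only 3 are currently active, and [UUU_] marks the fourth slot as missing. The per-device state is easier to read from mdadm directly:
# the State line and the device table show which member is active,
# spare, rebuilding, or removed
mdadm --detail /dev/md0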
-
Looks like 3 drives are online and one is rebuilding (not sure which one); not sure how long this will take. Is there a command to show the status of the rebuild?
SMART still says all my drives are good. I do notice that the BackupDrive is now online, but not all the data is available. I would like to confirm that the rebuild is done before I start trying to use this again, and confirm that all the backup data is back and available. (Yes, on the root thing, I noticed.)
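For reference, the rebuild progress shows up in /proc/mdstat while a recovery is running:
# prints a "recovery = x.x% ... finish=..." line during a rebuild
cat /proc/mdstat
# or refresh it automatically every 30 seconds
watch -n 30 cat /proc/mdstat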
-
OK, looks like it worked:
mdadm: stopped /dev/md0
root@openmediavault:~# sudo mdadm --assemble --force --verbose /dev/md0 /dev/sd[bcde]
mdadm: looking for devices for /dev/md0
mdadm: /dev/sdb is identified as a member of /dev/md0, slot 0.
mdadm: /dev/sdc is identified as a member of /dev/md0, slot 1.
mdadm: /dev/sdd is identified as a member of /dev/md0, slot 2.
mdadm: /dev/sde is identified as a member of /dev/md0, slot 3.
mdadm: added /dev/sdc to /dev/md0 as 1
mdadm: added /dev/sdd to /dev/md0 as 2
mdadm: added /dev/sde to /dev/md0 as 3
mdadm: added /dev/sdb to /dev/md0 as 0
mdadm: /dev/md0 has been started with 3 drives (out of 4) and 1 rebuilding.
Yep it worked ......
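After a forced assembly it may also be worth confirming that the array definition still matches /etc/mdadm/mdadm.conf, and refreshing the initramfs so the array assembles cleanly at the next boot (OMV generates that file itself, so compare rather than hand-edit it):
# compare the live array definition with the one in mdadm.conf
mdadm --detail --scan
cat /etc/mdadm/mdadm.conf
# rebuild the initramfs so boot-time assembly uses the current metadata
update-initramfs -u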
-
# Degraded or missing raid array questions
# If you have a degraded or missing raid array, please post the following info (in code boxes) with your question.
#
# Login as root locally or via ssh - Windows users can use putty:
#
# cat /proc/mdstat
# blkid
# fdisk -l | grep "Disk "
# cat /etc/mdadm/mdadm.conf
# mdadm --detail --scan --verbose
# Post type of drives and quantity being used as well.
# Post what happened for the array to stop working? Reboot? Power loss?
See answers below
-----------------------------------------------------------------------------------------------------------------------------------------------------------
root@openmediavault:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : inactive sde[2] sdd[1] sdc[3] sdb[0]
7813529952 blocks super 1.2
unused devices: <none>
-----------------------------------------------------------------------------------------------------------------------------------------------------------
root@openmediavault:~# blkid
/dev/sda1: UUID="ed81636b-f347-4e63-9163-1d946ab96b1a" TYPE="ext4" PARTUUID="749029e3-01"
/dev/sda5: UUID="02919ac6-d8c4-446f-a2a3-e7db440fb77a" TYPE="swap" PARTUUID="749029e3-05"
/dev/sdb: UUID="990f19c7-dc6e-fc9e-944b-48e2efa66b33" UUID_SUB="b491f7f1-1fc0-74c2-6097-e4a88af469fe" LABEL="openmediavault:0" TYPE="linux_raid_member"
/dev/sdc: UUID="990f19c7-dc6e-fc9e-944b-48e2efa66b33" UUID_SUB="94395aa4-4643-daae-442e-0ec46456ccd2" LABEL="openmediavault:0" TYPE="linux_raid_member"
/dev/sdd: UUID="990f19c7-dc6e-fc9e-944b-48e2efa66b33" UUID_SUB="636e2d48-347c-7a77-a426-cd77051cd0ff" LABEL="openmediavault:0" TYPE="linux_raid_member"
/dev/sde: UUID="990f19c7-dc6e-fc9e-944b-48e2efa66b33" UUID_SUB="96e43a1c-d340-db25-4e66-5253ee435822" LABEL="openmediavault:0" TYPE="linux_raid_member"
-----------------------------------------------------------------------------------------------------------------------------------------------------------
root@openmediavault:~# fdisk -l | grep "Disk "
Disk /dev/sda: 111.8 GiB, 120034123776 bytes, 234441648 sectors
Disk model: PNY CS900 120GB
Disk identifier: 0x749029e3
Disk /dev/sdb: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Disk model: WDC WD20EFRX-68E
Disk /dev/sdc: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Disk model: WDC WD20EFRX-68E
Disk /dev/sdd: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Disk model: WDC WD20EFRX-68E
Disk /dev/sde: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Disk model: WDC WD20EARX-00P
-----------------------------------------------------------------------------------------------------------------------------------------------------------
root@openmediavault:~# cat /etc/mdadm/mdadm.conf
# This file is auto-generated by openmediavault (https://www.openmediavault.org)
# WARNING: Do not edit this file, your changes will get lost.
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#
# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
# Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
# To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
# used if no RAID devices are configured.
DEVICE partitions
# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes
# automatically tag new arrays as belonging to the local system
HOMEHOST <system>
# definitions of existing MD arrays
ARRAY /dev/md0 metadata=1.2 name=openmediavault:0 UUID=990f19c7:dc6efc9e:944b48e2:efa66b33
-----------------------------------------------------------------------------------------------------------------------------------------------------------
root@openmediavault:~# mdadm --detail --scan --verbose
INACTIVE-ARRAY /dev/md0 level=raid5 num-devices=4 metadata=1.2 spares=1 name=openmediavault:0 UUID=990f19c7:dc6efc9e:944b48e2:efa66b33
devices=/dev/sdb,/dev/sdc,/dev/sdd,/dev/sde
-----------------------------------------------------------------------------------------------------------------------------------------------------------
Post type of drives and quantity being used as well.
There are 3 Western Digital Red 2TB drives
and 1 WD Green 2TB.
-----------------------------------------------------------------------------------------------------------------------------------------------------------
Post what happened for the array to stop working? Power loss / power hit, up and down.
-
root@openmediavault:/proc# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : inactive sde[2] sdd[1] sdc[3] sdb[0]
7813529952 blocks super 1.2
unused devices: <none>
root@openmediavault:/proc#
-