Hello all,
First, I'm still a newbie, so please bear with me if I seem slow to understand what you're saying.
The other day some pretty intense storms came through my area, so I shut down the server in hopes of preventing issues with it. The server is hooked to a UPS, and there were no power surges, brown-outs, or power loss. After the storms passed, I started the server back up and noticed Plex was having issues. I figured it was something minor and went to bed, hoping to get to it later.
When I finally did get back to it, the Plex server couldn't be found. Still thinking this was a Plex issue, I did some searching and found that a Plex update could cause it, so I re-installed Plex. When I then went to populate my movies, I noticed they were all missing. Still thinking this was a Plex issue, I broke out my backup and started loading my movies from it: basically, I hooked up the USB device and used Windows to copy from the storage device to the movie folders on the server... until I got to the kids' movies, where I got the error "Not enough space." I was like, WTF, no way have I used 10 TB of space. I went back to OMV to look for errors and noticed my RAID 5 was gone.
I continued to look at the settings and config in OMV and noticed I'm missing a disk. A dead drive wasn't my first thought, because it's a 3-disk RAID 5 with two of the disks less than 6 months old and the third only 5 weeks old. Here is my setup:
I have a Ryzen 5400G with 16 GB of RAM, an HP 250 GB SSD for the OS, and 3 Seagate IronWolf 6 TB NAS hard drives. The system has been running flawlessly for the last 5 weeks.
I've done a scan to see if the third disk can be found in the system, and I'm thinking the drive is dead because OMV can't seem to find it (I'm not sure which one it is yet, because I haven't opened the case).
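In case it's useful, here is roughly how I checked whether the system still sees the drives (the exact sdX names are just my assumption, not confirmed):

```shell
# List the physical disks the kernel can see; this should show three
# 6 TB Seagates plus the NVMe SSD if all the hardware is detected.
lsblk -d -o NAME,SIZE,MODEL,SERIAL

# Look for ATA link resets or I/O errors that would point at a dying
# drive (needs root on some systems).
dmesg | grep -iE 'ata[0-9]|sd[a-c]|error'
```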
I looked through the forum and didn't see anything that jumped out at me as being my issue, except maybe one thread, but the discussion there was way over my head.
Here is the info I was asked to provide when posting here; I hope I got it correct.
cat /proc/mdstat:
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : inactive sda[2](S) sdb[0](S)
11720780976 blocks super 1.2
unused devices: <none>
blkid:
/dev/nvme0n1p1: UUID="4D7E-AFDB" TYPE="vfat" PARTUUID="39b2b515-93f3-417c-aaf8-62bc5bfce827"
/dev/nvme0n1p2: UUID="589a4e54-30cc-483c-a069-f479010686b2" TYPE="ext4" PARTUUID="e34b005d-3a3d-4f4d-851a-cb2a2816edb4"
/dev/nvme0n1p3: UUID="e01bd811-71c5-42eb-9a14-229187f9ccbf" TYPE="swap" PARTUUID="5b2055fd-f856-4e8b-886c-e37a4b802a89"
/dev/sda: UUID="390a22b5-0416-a783-cf33-e740ba8db73d" UUID_SUB="e6b611ee-6b6b-59e5-21ca-5b28378805d8" LABEL="paqhomeserv:Storage" TYPE="linux_raid_member"
/dev/sdb: UUID="390a22b5-0416-a783-cf33-e740ba8db73d" UUID_SUB="ed06f92e-798c-a6cb-5e45-e80869a6c8b7" LABEL="paqhomeserv:Storage" TYPE="linux_raid_member"
/dev/nvme0n1: PTUUID="8bd003a3-695f-4080-be23-2b4ee8f1d601" PTTYPE="gpt"
fdisk -l | grep "Disk ":
Disk /dev/nvme0n1: 232.9 GiB, 250059350016 bytes, 488397168 sectors
Disk model: HP SSD EX900 250GB
Disk identifier: 8BD003A3-695F-4080-BE23-2B4EE8F1D601
Disk /dev/sda: 5.5 TiB, 6001175126016 bytes, 11721045168 sectors
Disk model: ST6000VN0033-2EE
Disk /dev/sdb: 5.5 TiB, 6001175126016 bytes, 11721045168 sectors
Disk model: ST6000VN0033-2EE
cat /etc/mdadm/mdadm.conf:
# This file is auto-generated by openmediavault (https://www.openmediavault.org)
# WARNING: Do not edit this file, your changes will get lost.
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#
# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
# Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
# To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
# used if no RAID devices are configured.
DEVICE partitions
# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes
# automatically tag new arrays as belonging to the local system
HOMEHOST <system>
# definitions of existing MD arrays
ARRAY /dev/md0 metadata=1.2 name=paqhomeserv:Storage UUID=390a22b5:0416a783:cf33e740:ba8db73d
mdadm --detail --scan --verbose:
INACTIVE-ARRAY /dev/md0 num-devices=2 metadata=1.2 name=paqhomeserv:Storage UUID=390a22b5:0416a783:cf33e740:ba8db73d
devices=/dev/sda,/dev/sdb
I'm not sure what all of that means. I've read through some of it, and if I understand it correctly, it looks like I'm only showing the SSD and 2 of the 3 NAS drives, and no RAID.
So my question is: what are my next steps? I know I'll need to pull the dead drive and put in another, but will that reinstate my RAID 5? I'm also curious why the RAID doesn't show as "degraded" and let me access my data, which is how I understood RAID 5 was supposed to work with one dead disk. Or is all my information lost?
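If it helps with diagnosing, I can also post the per-disk superblock info. From what I've read, something like this shows what each remaining member thinks the array looks like (not sure if I'm running it right, and the device names are just what blkid showed me):

```shell
# Print the RAID superblock on each remaining member (run as root).
# The "Array State" and "Events" lines should show whether the two
# members still agree with each other.
mdadm --examine /dev/sda /dev/sdb
```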
I did have some stuff that hadn't been backed up; if it's lost, oh well (but my daughter will be ticked off, since it was her pics).
Your help is greatly appreciated, so thank you in advance.