Hi all,
It's been a while since I've been on here, which I think is a good thing, since my server had been running really well until about 4-5 days ago.
I started receiving SMART errors on a drive. The drive in question is a 3TB drive in a RAID 5 with 4 other drives (5 in total). I purchased a new drive to replace it, plus an additional, larger 10TB drive so I can back everything up from the RAID before I swap out the bad drive. At that point the drive was still showing as good and the RAID was showing as clean with all 5 drives.
Today I started setting up an rsync job to copy all the data from the RAID to the 10TB drive in a USB3 external enclosure. I hadn't even started transferring files and was just setting up the rsync when I received an email saying:
Status failed Service mountpoint_srv_dev-disk-by-label-Main
Date: Wed, 20 Jan 2021 21:54:45
Action: alert
Host: Morpheus-SAWHOME
Description: status failed (1) -- /srv/dev-disk-by-label-Main is not a mountpoint
Your faithful employee,
Monit
I took a look at the RAID and it's now showing as clean/degraded. One of the drives is offline. So okay, the drive failed. But usually when a drive fails, the RAID keeps working in degraded mode and stays mounted. In OMV it's listed under filesystems, but it's not mounted. When I try to mount it I get this error:
Error #0:
OMV\ExecException: Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C.UTF-8; mount -v --source '/dev/disk/by-label/Main' 2>&1' with exit code '32': mount: /dev/md127: can't read superblock in /usr/share/php/openmediavault/system/process.inc:182
Stack trace:
#0 /usr/share/php/openmediavault/system/filesystem/filesystem.inc(733): OMV\System\Process->execute()
#1 /usr/share/openmediavault/engined/rpc/filesystemmgmt.inc(920): OMV\System\Filesystem\Filesystem->mount()
#2 [internal function]: OMVRpcServiceFileSystemMgmt->mount(Array, Array)
#3 /usr/share/php/openmediavault/rpc/serviceabstract.inc(123): call_user_func_array(Array, Array)
#4 /usr/share/php/openmediavault/rpc/rpc.inc(86): OMV\Rpc\ServiceAbstract->callMethod('mount', Array, Array)
#5 /usr/sbin/omv-engined(537): OMV\Rpc\Rpc::call('FileSystemMgmt', 'mount', Array, Array, 1)
#6 {main}
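For reference, here are some read-only checks I can run to gather more info (none of these should write anything to the array; the e2fsck line assumes the filesystem is ext4):

```shell
# Current md status and which members are in the array
cat /proc/mdstat
mdadm --detail /dev/md127

# Kernel messages from around the time the drive dropped out
dmesg | tail -n 50

# Per-member superblock info (member names taken from my mdadm output below)
mdadm --examine /dev/sda /dev/sdc /dev/sdd /dev/sdg

# Read-only filesystem check (-n = answer "no" to everything, changes nothing)
e2fsck -n /dev/md127
```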
Here's the output of mdadm --detail /dev/md127:
Version : 1.2
Creation Time : Tue Nov 18 11:35:57 2014
Raid Level : raid5
Array Size : 11720540160 (11177.58 GiB 12001.83 GB)
Used Dev Size : 2930135040 (2794.39 GiB 3000.46 GB)
Raid Devices : 5
Total Devices : 4
Persistence : Superblock is persistent
Update Time : Wed Jan 20 22:04:13 2021
State : clean, degraded
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 512K
Name : SAWHOME-Vault:RAID5
UUID : bea4cede:6989d555:c0037b76:5d3428cb
Events : 75409
Number   Major   Minor   RaidDevice   State
   -       0       0         0        removed
   1       8      32         1        active sync   /dev/sdc
   6       8      96         2        active sync   /dev/sdg
   4       8      48         3        active sync   /dev/sdd
   5       8       0         4        active sync   /dev/sda
1 drive is removed (the failed one).
Any idea how I can mount it and get it up and online so I can back up the files before I rebuild the RAID with the new 3TB drive?
Would mdadm --assemble --force --verbose /dev/md127 /dev/sd[cadg] work, or would it destroy the current RAID because the 5th drive is missing?
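To spell out the full sequence I'm considering (device names taken from the --detail output above; please correct me if any step is wrong):

```shell
# Stop the degraded array first (this does not touch the data on the members)
mdadm --stop /dev/md127

# Force-assemble from the four remaining members;
# RAID 5 should be able to run with one drive missing
mdadm --assemble --force --verbose /dev/md127 /dev/sda /dev/sdc /dev/sdd /dev/sdg

# Then try a read-only mount so nothing gets written while I copy the data off
mount -o ro /dev/md127 /srv/dev-disk-by-label-Main
```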
Just trying to save the data if possible.
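Once I can get it mounted (read-only is fine), my backup plan was just something like this (the source path is from my setup; the 10TB drive's label "Backup" is only an example):

```shell
# Dry run first to sanity-check what would be copied
rsync -aHAX --dry-run --stats /srv/dev-disk-by-label-Main/ /srv/dev-disk-by-label-Backup/

# Real copy, preserving hard links, ACLs and xattrs, with overall progress
rsync -aHAX --info=progress2 /srv/dev-disk-by-label-Main/ /srv/dev-disk-by-label-Backup/
```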
Thanks in advance!