After running a stable 2.x install for a number of years, I tried to upgrade in place, but that failed (a story for another thread).
So I started fresh with a 4.1.3-1 ISO install, applied all available updates right after, and have since upgraded to 4.1.5-1 with no issues.
My understanding is that I should be able to continue using my previous RAID-1 drives with my shiny new 4.x install.
For the initial configuration, I first plugged in only one of the drives. It was recognized, and I proceeded to mount it and set up SMART.
After that went smoothly, I plugged in the second drive. It is recognized and mounted as well, but now the problem starts.
In the RAID Mgt section, it shows the first drive I plugged in (/dev/sdb), but not the second (/dev/sdc). The "Level" column lists it as "Mirror" and the "State" column as "Clean, Degraded". Here are the "Details" for this first drive:
Version : 1.2
Creation Time : Sat Aug 9 00:01:08 2014
Raid Level : raid1
Array Size : 2930131200 (2794.39 GiB 3000.45 GB)
Used Dev Size : 2930131200 (2794.39 GiB 3000.45 GB)
Raid Devices : 2
Total Devices : 1
Persistence : Superblock is persistent
Update Time : Tue May 1 10:59:40 2018
State : clean, degraded
Active Devices : 1
Working Devices : 1
Failed Devices : 0
Spare Devices : 0
Name : OMV:Mirror (local to host OMV)
UUID : 14bf054c:921a3904:ceaf5959:c4e16499
Events : 318
Number Major Minor RaidDevice State
- 0 0 0 removed
1 8 16 1 active sync /dev/sdb
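For reference, the same degraded picture should be visible from the shell; a two-disk mirror running on a single member normally shows up in /proc/mdstat as [2/1] [_U] instead of the healthy [2/2] [UU]:

root@OMV:~# cat /proc/mdstat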
In the "File System" section, If I click the "Create" button, The 'Device' drop-down only gives me a choice of choosing 1 drive....which happens to be the second drive of the mirror(/dev/sdc). Not sure if this has any significance though.
root@OMV:~# mdadm --assemble --scan
mdadm: Found some drive for an array that is already active: /dev/md/Mirror
mdadm: giving up.
mdadm: No arrays found in config file or automatically
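If I understand that error right, --assemble gives up because md127 is already running with one of the disks, so the second disk (which carries the same array UUID) can't be started as part of another array; the last line presumably just means /etc/mdadm/mdadm.conf has no ARRAY entry for it yet. Comparing the two superblocks directly should show how far the members have drifted apart; the Events counter in particular identifies the stale copy:

root@OMV:~# mdadm --examine /dev/sdb /dev/sdc | grep -E 'Events|Device Role|Update Time'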
root@OMV:~# blkid
/dev/sda1: UUID="61989a17-c31c-4283-bfd3-33b789e6b1d6" TYPE="ext4" PARTUUID="58427fb5-01"
/dev/sda5: UUID="a3b3d825-df03-476e-9f53-858ed34aa0a6" TYPE="swap" PARTUUID="58427fb5-05"
/dev/sdb: UUID="14bf054c-921a-3904-ceaf-5959c4e16499" UUID_SUB="0c33048f-c9c3-ed92-9162-4afb20a6d93e" LABEL="OMV:Mirror" TYPE="linux_raid_member"
/dev/md127: LABEL="OMVMIRROR" UUID="9c1c4cdf-0eaf-457b-845f-a4926e46e3f9" TYPE="ext4"
/dev/sdc: UUID="14bf054c-921a-3904-ceaf-5959c4e16499" UUID_SUB="80031803-8a37-1b32-8b5e-f628f7d1e499" LABEL="OMV:Mirror" TYPE="linux_raid_member"
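If I'm reading the blkid output right, both sdb and sdc carry the same array UUID (14bf054c-...), just with different UUID_SUB values, so they are definitely members of the same original mirror.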
root@OMV:~# lsblk -o NAME,SIZE,FSTYPE,TYPE,MOUNTPOINT
NAME SIZE FSTYPE TYPE MOUNTPOINT
sda 111.8G disk
├─sda1 107.8G ext4 part /
├─sda2 1K part
└─sda5 4G swap part [SWAP]
sdb 2.7T linux_raid_member disk
└─md127 2.7T ext4 raid1 /srv/dev-disk-by-label-OMVMIRROR
sdc 2.7T linux_raid_member disk
root@OMV:~# mdadm -D /dev/md127
/dev/md127:
Version : 1.2
Creation Time : Sat Aug 9 00:01:08 2014
Raid Level : raid1
Array Size : 2930131200 (2794.39 GiB 3000.45 GB)
Used Dev Size : 2930131200 (2794.39 GiB 3000.45 GB)
Raid Devices : 2
Total Devices : 1
Persistence : Superblock is persistent
Update Time : Tue May 1 14:21:36 2018
State : clean, degraded
Active Devices : 1
Working Devices : 1
Failed Devices : 0
Spare Devices : 0
Name : OMV:Mirror (local to host OMV)
UUID : 14bf054c:921a3904:ceaf5959:c4e16499
Events : 324
Number Major Minor RaidDevice State
- 0 0 0 removed
1 8 32 1 active sync /dev/sdc
Is there a particular procedure that I need to follow to get the mirror re-established?
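One thing that puzzles me before I touch anything: the lsblk output shows md127 assembled from sdb, yet the mdadm -D output (taken a few hours later, per the Update Time) lists sdc as the sole active member, so which disk the array comes up on seems to vary.

From what I've read so far, I'm guessing the procedure is roughly the following, assuming md127 stays the live array and the other disk is the stale copy (sdX below is a placeholder for whichever member turns out to be stale; I would only ever zero the superblock on that one):

root@OMV:~# mdadm -D /dev/md127                # confirm which disk is currently the active member
root@OMV:~# mdadm --zero-superblock /dev/sdX   # sdX = the stale member, NOT the active one
root@OMV:~# mdadm /dev/md127 --add /dev/sdX    # re-add it and let the mirror resync
root@OMV:~# cat /proc/mdstat                   # watch the rebuild progress

Is that about right, or is there a safer way to handle this through the OMV GUI?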