Ok...let me start by saying that although I have been using linux for more than a decade, I have learned just enough to make me dangerous.
I love OMV and have been using it for a media storage server for a couple years now.
Until this past weekend (May 10 & 11) I had an 8 x 1TB RAID 6 array that was functioning just fine: 5.46 TB formatted with EXT4 over LVM.
I say had because over this past weekend I attempted to grow the array by adding 2 x 1TB drives.
During the grow process something happened that caused the whole array to disappear from the OMV web interface. I know the array is still there, because when I ssh in and run some commands it shows up, but it is not active. After reading many different posts over the last few hours, I thought I should ask more direct questions, because, being an idiot, I don't have a backup. I'm hoping I can bring the array up, even degraded, at least long enough to copy the data off so I don't lose it. The array only had about 1.5 TB of data on the 5.46 TB volume.
Like I said... just enough to be dangerous. Anyway, here goes:
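As a side note, the 5.46 TB figure checks out for RAID 6. A quick back-of-the-envelope calculation (assuming the usual RAID 6 formula of (n - 2) data disks, with each "1 TB" drive exposing roughly 931 GiB to md):

```shell
# RAID 6 usable capacity = (n - 2) * per-device size.
# A "1 TB" drive exposes about 931 GiB to md (see "Used Dev Size" below).
devs=8
per_disk_gib=931
usable_gib=$(( (devs - 2) * per_disk_gib ))
echo "${usable_gib} GiB usable"   # 5586 GiB, i.e. about 5.46 TiB
```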
root@nas:~# blkid
/dev/sdb: UUID="69098bc2-b0c2-7cc8-9738-854452bf6a73" LABEL="nas:md0" TYPE="linux_raid_member"
/dev/sdc: UUID="69098bc2-b0c2-7cc8-9738-854452bf6a73" LABEL="nas:md0" TYPE="linux_raid_member"
/dev/sdd: UUID="69098bc2-b0c2-7cc8-9738-854452bf6a73" LABEL="nas:md0" TYPE="linux_raid_member"
/dev/sdh: UUID="69098bc2-b0c2-7cc8-9738-854452bf6a73" LABEL="nas:md0" TYPE="linux_raid_member"
/dev/sdi: UUID="69098bc2-b0c2-7cc8-9738-854452bf6a73" LABEL="nas:md0" TYPE="linux_raid_member"
/dev/sdj: UUID="69098bc2-b0c2-7cc8-9738-854452bf6a73" LABEL="nas:md0" TYPE="linux_raid_member"
/dev/sde: UUID="69098bc2-b0c2-7cc8-9738-854452bf6a73" LABEL="nas:md0" TYPE="linux_raid_member"
/dev/sdf: UUID="69098bc2-b0c2-7cc8-9738-854452bf6a73" LABEL="nas:md0" TYPE="linux_raid_member"
/dev/sdg: UUID="69098bc2-b0c2-7cc8-9738-854452bf6a73" LABEL="nas:md0" TYPE="linux_raid_member"
/dev/sda1: UUID="8cfe43c6-2e52-46da-a0b4-7e2d48bf88cb" TYPE="ext4"
/dev/sda5: UUID="31b14485-dc8d-46a2-aad5-f5401c9a0f52" TYPE="swap"
/dev/sdk: UUID="69098bc2-b0c2-7cc8-9738-854452bf6a73" LABEL="nas:md0" TYPE="linux_raid_member"
One drive developed issues during the grow. There is now an md127 holding just that drive. SMART shows sdf with 230 errors that developed during the grow, with failure imminent. Here is /proc/mdstat:
md127 : inactive sdf[8](S)
976761560 blocks super 1.2
md0 : inactive sdb[0] sdg[9] sdk[7] sdj[6] sdh[5] sdi[4] sde[3] sdd[2] sdc[1]
8790854040 blocks super 1.2
At first I thought the UUID of the array had changed, but the blkid and mdadm values above are actually the same hex digits with different punctuation. The UUID in fstab (0a0aab68-...) is the ext4 filesystem's UUID, which is a different thing from the array UUID; I believe OMV generates the /media directory entry from that filesystem UUID.
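To double-check, here is a quick comparison of the blkid and mdadm forms of the array UUID (values copied from my output above; they turn out to be the same value, just punctuated differently):

```shell
# blkid and mdadm print the same 128-bit UUID with different separators.
blkid_uuid="69098bc2-b0c2-7cc8-9738-854452bf6a73"
mdadm_uuid="69098bc2:b0c27cc8:97388544:52bf6a73"
a=$(printf '%s' "$blkid_uuid" | tr -d '-')
b=$(printf '%s' "$mdadm_uuid" | tr -d ':')
[ "$a" = "$b" ] && echo "same array UUID" || echo "different"
```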
root@nas:~# mdadm --examine --scan
ARRAY /dev/md/md0 metadata=1.2 UUID=69098bc2:b0c27cc8:97388544:52bf6a73 name=nas:md0
root@nas:~# mdadm --detail /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Fri Jun 7 16:53:33 2013
Raid Level : raid6
Used Dev Size : 976761344 (931.51 GiB 1000.20 GB)
Raid Devices : 10
Total Devices : 9
Persistence : Superblock is persistent
Update Time : Mon May 12 14:34:28 2014
State : active, degraded, Not Started
Active Devices : 9
Working Devices : 9
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 512K
Name : nas:md0 (local to host nas)
UUID : 69098bc2:b0c27cc8:97388544:52bf6a73
Events : 18797
Number Major Minor RaidDevice State
0 8 16 0 active sync /dev/sdb
1 8 32 1 active sync /dev/sdc
2 8 48 2 active sync /dev/sdd
3 8 64 3 active sync /dev/sde
4 8 128 4 active sync /dev/sdi
5 8 112 5 active sync /dev/sdh
6 8 144 6 active sync /dev/sdj
7 8 160 7 active sync /dev/sdk
9 8 96 8 active sync /dev/sdg
9 0 0 9 removed
fstab
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point> <type> <options> <dump> <pass>
proc /proc proc defaults 0 0
# / was on /dev/sda1 during installation
UUID=8cfe43c6-2e52-46da-a0b4-7e2d48bf88cb / ext4 errors=remount-ro 0 1
# swap was on /dev/sda5 during installation
UUID=31b14485-dc8d-46a2-aad5-f5401c9a0f52 none swap sw 0 0
/dev/sdb1 /media/usb0 auto rw,user,noauto 0 0
tmpfs /tmp tmpfs defaults 0 0
# >>> [openmediavault]
UUID=0a0aab68-348b-4bcb-8646-7fc4ff364c2c /media/0a0aab68-348b-4bcb-8646-7fc4ff364c2c ext4 defaults,nofail,acl,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0 0 2
/media/0a0aab68-348b-4bcb-8646-7fc4ff364c2c/Movies/ /export/Movies none bind 0 0
/media/0a0aab68-348b-4bcb-8646-7fc4ff364c2c/Music/ /export/Music none bind 0 0
/media/0a0aab68-348b-4bcb-8646-7fc4ff364c2c/Backups/ /export/Backups none bind 0 0
# <<< [openmediavault]
If anyone can point me in the right direction, I'd greatly appreciate it. And I promise, from this day forward I will keep backups. Why? Because RAID (even RAID 6) is no substitute for proper backups. Lesson learned.