What is the output of: mdadm --detail /dev/md127
RAID 6 gone, physical drives visible
-
- solved
- ahab666
-
Code
/dev/md127:
        Version : 1.2
  Creation Time : Sun Sep 21 17:22:12 2014
     Raid Level : raid6
  Used Dev Size : -1
   Raid Devices : 12
  Total Devices : 11
    Persistence : Superblock is persistent

    Update Time : Thu Aug 13 13:50:43 2015
          State : active, degraded, Not Started
 Active Devices : 11
Working Devices : 11
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : OMV:OMV (local to host OMV)
           UUID : 6230a09b:2bd2f0af:b6f72e19:46e3b8b3
         Events : 41017

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc
       2       8       48        2      active sync   /dev/sdd
       3       8       64        3      active sync   /dev/sde
       4       8       96        4      active sync   /dev/sdg
       5       8       80        5      active sync   /dev/sdf
       6       8      112        6      active sync   /dev/sdh
       7       0        0        7      removed
      12       8      128        8      active sync   /dev/sdi
       9       8      144        9      active sync   /dev/sdj
      10       8      160       10      active sync   /dev/sdk
      11       8      176       11      active sync   /dev/sdl

---------------------

# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#
# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
# Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
# To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
# used if no RAID devices are configured.
DEVICE partitions

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# definitions of existing MD arrays
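For anyone following along, a minimal read-only sketch for cross-checking a "degraded, Not Started" state like the one above (device names assumed from the pasted output):

Code
cat /proc/mdstat            # the kernel's view of md127 and its member disks
mdadm --examine /dev/sdb    # per-member superblock; repeat for each member disk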
thx alex
-
Any other ideas, folks?
-
-
@ahab666 I have no idea how to help you any further, as my knowledge of software RAID configuration in OMV is very limited.
If you can recover your data, may I suggest using your LSI hardware RAID and creating virtual disks if you want more than one volume.
I have many servers running with LSI cards, and in my opinion there is nothing better than hardware RAID.
I know it is an old-fashioned idea, especially with ZFS around, but in your particular case I would definitely do it all in the LSI. Any questions about how to do this with your LSI RAID controller, I am happy to assist with.
Kind regards,
-
I would have a look at this disk, as it seems to be the troublemaker: hang it in a different computer and check the SMART data.
If it were mine and I noticed something not quite right, I would do a DBAN zero wipe with verify after each sector write.
Then check SMART again to see whether it reallocated sectors; then at least you know more about the disk.
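If it helps, a minimal sketch of that SMART check using smartmontools (assuming the suspect disk shows up as /dev/sdj in the other machine; substitute your actual device):

Code
smartctl -a /dev/sdj           # full report; check Reallocated_Sector_Ct and Current_Pending_Sector
smartctl -t long /dev/sdj      # start an extended self-test
smartctl -l selftest /dev/sdj  # read the self-test log once it finishes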
But please do understand I have no clue about software RAID; with an LSI hardware RAID, a DBAN wipe and putting the disk back would trigger a rebuild.
Perhaps someone could confirm it works the same way in a software RAID with OMV?

Code
mdadm: added /dev/sdj to /dev/md127 as 7 (possibly out of date)

I noticed your RAID was coming back and mdadm tried to rebuild; perhaps there is something really wrong with that disk and OMV failed to rebuild onto it.
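If the disk checks out after the wipe, a hedged sketch of the software-RAID equivalent (assuming the array is /dev/md127 and the disk is /dev/sdj; the wipe destroys everything on that disk, so triple-check the device name first):

Code
dd if=/dev/zero of=/dev/sdj bs=1M           # zero wipe, roughly what DBAN does (DESTRUCTIVE)
mdadm --manage /dev/md127 --add /dev/sdj    # re-add the blank disk to the degraded array
cat /proc/mdstat                            # watch the rebuild progress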
-
Code
root@OMV:~# cat /etc/mdadm/mdadm.conf
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#
# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
# Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
# To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
# used if no RAID devices are configured.
DEVICE partitions

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# definitions of existing MD arrays
ARRAY /dev/md/OMV metadata=1.2 UUID=6230a09b:2bd2f0af:b6f72e19:46e3b8b3 name=OMV:OMV

root@OMV:~# cat /etc/fstab
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point> <type> <options> <dump> <pass>
# / was on /dev/sda1 during installation
UUID=6ac66484-42b3-48ad-8430-072852de03ab / ext4 errors=remount-ro 0 1
# swap was on /dev/sda5 during installation
UUID=6edd6d3a-11b9-4188-848c-0c2f2c9a73fa none swap sw 0 0
/dev/sdb1 /media/usb0 auto rw,user,noauto 0 0
# >>> [openmediavault]
# <<< [openmediavault]
and thx - alex
-
Then leave out the level parameter:

Code
mdadm --assemble /dev/md127 /dev/sd[bcdefghijklm] --verbose --force

That was the command you used and the RAID came back, but you must check that disk first!
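Before any forced assembly it is worth comparing the event counters in the member superblocks; a member that is far behind the others is the out-of-date one. A minimal sketch, using the same device list as the assemble command:

Code
mdadm --examine /dev/sd[bcdefghijklm] | grep -E '/dev/sd|Events'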
-
-
Alex, I scrolled through your last post, and after looking again [it was confusing, as you pasted it all in one go] I noticed your RAID is in the mdadm config file.
/etc/mdadm/mdadm.conf
ARRAY /dev/md/OMV metadata=1.2 UUID=6230a09b:2bd2f0af:b6f72e19:46e3b8b3 name=OMV:OMV

A reboot should do the trick then.
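A quick, read-only way to confirm the running array matches that ARRAY line:

Code
mdadm --detail --scan   # should print the same UUID and name as mdadm.conf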
-
I think that's left over from the point where his RAID was recovered but then crashed!
The mdadm/kernel setup could create the default mdadm.conf file with --name=NAS.
When a --name parameter is set, a numbered md device (it seems to always be 127) is created, and /dev/md/NAS is actually a symlink to it.
Those are two differences, I think, but it depends on his configuration; I believe he has simply named his "OMV". After your commands he should also run update-initramfs -u, as sketched below.
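A hedged sketch of those follow-up steps on Debian/OMV (paths assumed; check that the ARRAY line is not already present before appending, or you will end up with a duplicate):

Code
ls -l /dev/md/OMV                                # usually a symlink, e.g. -> ../md127
mdadm --detail --scan >> /etc/mdadm/mdadm.conf   # only if the ARRAY line is missing
update-initramfs -u                              # rebuild the initramfs so the array assembles at boot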
But I warned him to check the disk first, as that seems to be the culprit to me.
As he was waiting for days I thought I'd jump in, but if you prefer I will step back from helping.
-
As he was waiting for days I thought I'd jump in, but if you prefer I will step back from helping.
We are an open forum here, so everyone can help everyone.
-
Well guys, thx for your help - I'll take anyone's help if it gets my data back.
cheers - ahab666
-