RAID5 vanishes from system

    • OMV 1.0
    • Resolved

      Hi,
      Could you help me get my RAID5 array back up and running? My issue is similar to the other thread, but I cannot get it working and I cannot afford to lose my data (kids' photos)!

      Scenario:
      running the latest OMV
      RAID5, 4 disks up and running clean (plus 3 other non-RAID disks)
      I changed a SATA cable to clear an ATA33 error and booted the machine with one cable still unplugged => result: a failed array with only 2/4 disks up
      I halted the machine, replugged the cable and rebooted, and damn, the RAID array was gone

      blkid

      /dev/sda: UUID="7ac6251d-aa39-e657-efbb-f6cdb7bca99b" UUID_SUB="20854919-ab30-ecd7-37f8-16182b2d3d7e" LABEL="omv:raid" TYPE="linux_raid_member"
      /dev/sdc1: UUID="e371bdc4-f48e-4dfd-bb7f-7a714b3d3876" TYPE="ext4"
      /dev/sdc5: UUID="cf803d90-9dc5-4cc0-b9d3-24696f3ac4b3" TYPE="swap"
      /dev/sdb1: LABEL="sdd" UUID="c5c7758b-fdc7-4518-bdee-862606b01f3b" TYPE="ext4"
      /dev/sdd: UUID="7ac6251d-aa39-e657-efbb-f6cdb7bca99b" UUID_SUB="e4be9717-891b-1e5f-ec0a-b0225c27d310" LABEL="omv:raid" TYPE="linux_raid_member"
      /dev/sdf: UUID="7ac6251d-aa39-e657-efbb-f6cdb7bca99b" UUID_SUB="1e802b8d-e276-ff02-1bb1-5eaec30f1dc5" LABEL="omv:raid" TYPE="linux_raid_member"
      /dev/sde: UUID="7ac6251d-aa39-e657-efbb-f6cdb7bca99b" UUID_SUB="80e363b7-335a-103e-9c81-e130ef695269" LABEL="omv:raid" TYPE="linux_raid_member"
      /dev/sdg1: LABEL="sdg" UUID="215b9648-518d-46a5-bd85-1808aee48560" TYPE="ext4"


      lsmod | grep raid
      root@omv:~# lsmod | grep raid
      raid456 48453 0
      async_raid6_recov 12574 1 raid456
      async_memcpy 12387 2 async_raid6_recov,raid456
      async_pq 12605 2 async_raid6_recov,raid456
      async_xor 12422 3 async_pq,async_raid6_recov,raid456
      async_tx 12604 5 async_xor,async_pq,async_memcpy,async_raid6_recov,raid456
      raid6_pq 82624 2 async_pq,async_raid6_recov
      md_mod 87742 1 raid456


      /etc/mdadm/mdadm.conf
      root@omv:~# cat /etc/mdadm/mdadm.conf
      # mdadm.conf
      #
      # Please refer to mdadm.conf(5) for information about this file.
      #

      # by default, scan all partitions (/proc/partitions) for MD superblocks.
      # alternatively, specify devices to scan, using wildcards if desired.
      # Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
      # To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
      # used if no RAID devices are configured.
      DEVICE partitions

      # auto-create devices with Debian standard permissions
      CREATE owner=root group=disk mode=0660 auto=yes

      # automatically tag new arrays as belonging to the local system
      HOMEHOST <system>

      # definitions of existing MD arrays
      root@omv:~#


      cat /proc/mdstat
      root@omv:~# cat /proc/mdstat
      Personalities : [raid6] [raid5] [raid4]
      md127 : inactive sda[0] sdd[2]
      3907027120 blocks super 1.2

      unused devices: <none>


      /etc/default/mdadm

      root@omv:~# cat /etc/default/mdadm
      # INITRDSTART:
      # list of arrays (or 'all') to start automatically when the initial ramdisk
      # loads. This list *must* include the array holding your root filesystem. Use
      # 'none' to prevent any array from being started from the initial ramdisk.
      #INITRDSTART='none'

      # AUTOSTART:
      # should mdadm start arrays listed in /etc/mdadm/mdadm.conf automatically
      # during boot?
      AUTOSTART=true

      # AUTOCHECK:
      # should mdadm run periodic redundancy checks over your arrays? See
      # /etc/cron.d/mdadm.
      AUTOCHECK=true

      # START_DAEMON:
      # should mdadm start the MD monitoring daemon during boot?
      START_DAEMON=true

      # DAEMON_OPTIONS:
      # additional options to pass to the daemon.
      DAEMON_OPTIONS="--syslog"

      # VERBOSE:
      # if this variable is set to true, mdadm will be a little more verbose e.g.
      # when creating the initramfs.
      VERBOSE=false
      root@omv:~#


      /etc/fstab
      root@omv:~# cat /etc/fstab
      # /etc/fstab: static file system information.
      #
      # Use 'blkid' to print the universally unique identifier for a
      # device; this may be used with UUID= as a more robust way to name devices
      # that works even if disks are added and removed. See fstab(5).
      #
      # <file system> <mount point> <type> <options> <dump> <pass>
      proc /proc proc defaults 0 0
      # / was on /dev/sda1 during installation
      UUID=e371bdc4-f48e-4dfd-bb7f-7a714b3d3876 / ext4 errors=remount-ro 0 1
      # swap was on /dev/sda5 during installation
      UUID=cf803d90-9dc5-4cc0-b9d3-24696f3ac4b3 none swap sw 0 0
      /dev/sdb1 /media/usb0 auto rw,user,noauto 0 0
      tmpfs /tmp tmpfs defaults 0 0
      # >>> [openmediavault]
      UUID=cc1efb21-1911-4278-ab3c-6f1176770916 /media/cc1efb21-1911-4278-ab3c-6f1176770916 ext4 defaults,nofail,acl,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0 0 2
      UUID=c5c7758b-fdc7-4518-bdee-862606b01f3b /media/c5c7758b-fdc7-4518-bdee-862606b01f3b ext4 defaults,nofail,acl,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl 0 2
      UUID=215b9648-518d-46a5-bd85-1808aee48560 /media/215b9648-518d-46a5-bd85-1808aee48560 ext4 defaults,nofail,acl,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl 0 2
      # <<< [openmediavault]


      mdadm --detail /dev/md127

      root@omv:~# mdadm --detail /dev/md127
      /dev/md127:
      Version : 1.2
      Creation Time : Sun Nov 6 15:00:41 2011
      Raid Level : raid5
      Used Dev Size : 1953512960 (1863.02 GiB 2000.40 GB)
      Raid Devices : 4
      Total Devices : 2
      Persistence : Superblock is persistent

      Update Time : Mon Feb 2 15:56:39 2015
      State : active, FAILED, Not Started
      Active Devices : 2
      Working Devices : 2
      Failed Devices : 0
      Spare Devices : 0

      Layout : left-symmetric
      Chunk Size : 512K

      Name : omv:raid (local to host omv)
      UUID : 7ac6251d:aa39e657:efbbf6cd:b7bca99b
      Events : 398

      Number Major Minor RaidDevice State
      0 8 0 0 active sync /dev/sda
      1 0 0 1 removed
      2 8 48 2 active sync /dev/sdd
      3 0 0 3 removed
      root@omv:~#


      boot log extract
      Mon Feb 2 16:13:59 2015: Setting parameters of disc: /dev/sdc.
      Mon Feb 2 16:13:59 2015: /dev/sda.
      Mon Feb 2 16:13:59 2015: /dev/sde.
      Mon Feb 2 16:13:59 2015: /dev/sdd.
      Mon Feb 2 16:13:59 2015: /dev/sdf.
      Mon Feb 2 16:13:59 2015: /dev/sda.
      Mon Feb 2 16:13:59 2015: /dev/sdb.
      Mon Feb 2 16:13:59 2015: /dev/sdf.
      Mon Feb 2 16:13:59 2015: /dev/sdd.
      Mon Feb 2 16:13:59 2015: /dev/sdg.
      Mon Feb 2 16:13:59 2015: /dev/sde.
      Mon Feb 2 16:13:59 2015: /dev/sdc.
      Mon Feb 2 16:13:59 2015: Setting preliminary keymap...done.
      Mon Feb 2 16:13:59 2015: Activating swap...done.
      Mon Feb 2 16:13:59 2015: Checking root file system...fsck from util-linux 2.20.1
      Mon Feb 2 16:13:59 2015: /dev/sdc1: clean, 43461/6021120 files, 744207/24057344 blocks (check in 5 mounts)
      Mon Feb 2 16:13:59 2015: done.
      Mon Feb 2 16:13:59 2015: Loading kernel module loop.
      Mon Feb 2 16:13:59 2015: Cleaning up temporary files... /tmp /lib/init/rw.
      Mon Feb 2 16:13:59 2015: Assembling MD array mdraid_0...failed (not enough devices).
      Mon Feb 2 16:13:59 2015: Assembling MD arrays...done (no arrays found in config file or automatically).
      Mon Feb 2 16:14:00 2015: Setting up LVM Volume Groups... No volume groups found
      Mon Feb 2 16:14:00 2015: No volume groups found
      Mon Feb 2 16:14:00 2015: done.
      Mon Feb 2 16:14:00 2015: Activating lvm and md swap...done.
      Mon Feb 2 16:14:00 2015: Checking file systems...fsck from util-linux 2.20.1
      Mon Feb 2 16:14:00 2015: sdg: clean, 12/244195328 files, 15387403/976754385 blocks
      Mon Feb 2 16:14:00 2015: sdd: clean, 9937/244195328 files, 449433043/976754385 blocks
      Mon Feb 2 16:14:01 2015: done.

      It seems part of the RAID array is still there, but it is neither mounted nor detected.

      Thank you for your help

      Inzeback


    • Hello,

      According to this :

      /dev/sda: UUID="7ac6251d-aa39-e657-efbb-f6cdb7bca99b" UUID_SUB="20854919-ab30-ecd7-37f8-16182b2d3d7e" LABEL="omv:raid" TYPE="linux_raid_member"
      /dev/sdd: UUID="7ac6251d-aa39-e657-efbb-f6cdb7bca99b" UUID_SUB="e4be9717-891b-1e5f-ec0a-b0225c27d310" LABEL="omv:raid" TYPE="linux_raid_member"
      /dev/sdf: UUID="7ac6251d-aa39-e657-efbb-f6cdb7bca99b" UUID_SUB="1e802b8d-e276-ff02-1bb1-5eaec30f1dc5" LABEL="omv:raid" TYPE="linux_raid_member"
      /dev/sde: UUID="7ac6251d-aa39-e657-efbb-f6cdb7bca99b" UUID_SUB="80e363b7-335a-103e-9c81-e130ef695269" LABEL="omv:raid" TYPE="linux_raid_member"


      and this :

      Number Major Minor RaidDevice State
      0 8 0 0 active sync /dev/sda
      1 0 0 1 removed
      2 8 48 2 active sync /dev/sdd
      3 0 0 3 removed


      We can assume that the two disks that have disappeared from the array are /dev/sde and /dev/sdf.

      Please run:

      mdadm --examine /dev/sdf

      and

      mdadm --examine /dev/sde

      and check whether this line is present:

      1 0 0 1 faulty removed
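      As a shortcut, the Events counters of all four members can be compared in one pass — members whose counter lags the others were dropped from the array first. This is a hedged sketch, not OMV-specific; the device names are taken from your blkid output, and it must be run as root:

      ```shell
      # Extract the "Events" counter from `mdadm --examine` output so the
      # four members can be compared at a glance.
      parse_events() {
        awk -F':' '/Events/ { gsub(/ /, "", $2); print $2; exit }'
      }

      # Loop over the suspected members (names from the blkid output above).
      # Only runs where mdadm is installed; needs root on the affected box.
      if command -v mdadm >/dev/null 2>&1; then
        for d in /dev/sda /dev/sdd /dev/sde /dev/sdf; do
          printf '%s events=%s\n' "$d" "$(mdadm --examine "$d" | parse_events)"
        done
      fi
      ```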


      If that is true, those disks are no longer marked active in the MD (multiple devices) array.
      If so, do the following:

      First, stop your array:

      mdadm --stop /dev/md127

      and then reassemble it:

      mdadm -A --force /dev/md127 /dev/sd[fe]
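      Separately: your mdadm.conf has nothing under "# definitions of existing MD arrays", which matches the boot log's "no arrays found in config file". Once the array is back and clean, you may want to persist its definition — a sketch of the usual Debian steps (review the scan output before appending, and run as root):

      ```shell
      # Append the running array's definition to mdadm.conf so boot-time
      # assembly can find it, then refresh the copy inside the initramfs.
      mdadm --detail --scan | tee -a /etc/mdadm/mdadm.conf
      update-initramfs -u
      ```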

      - ASROCK FM2A88X-ITX+ (SATA III (6 Gb/s) x6 for the DATA, mSATA x1 for the OS => maybe the only affordable motherboard with 6x SATA III ports)
      - AMD A6 7400K 3.5 GHz (overpowered for a NAS but the cheapest for this motherboard)
      - Corsair 2 GB DDR3 1333 MHz C9 (x2)
      - INTEL GIGABIT CT DESKTOP ADAPTER SINGLE PORT RJ45 PCIE (to avoid backports problems with the network controller)
      - COOLER MASTER G450M (80+ bronze)
      - WD Red 2 TB 64 MB 3.5" SATA III (6 Gb/s) (x4 for the DATA, 2 SATA ports left for future use)
      - 32 GB mSATA KingSpec Half-Size SSD (for the OS, which leaves all 6 SATA ports for HDDs)
      - Fractal Design Node 304 black (HDD 3.5" x6)
      - RAID 5, xfs
      - OMV 1.12 (Kralizec) - 3.16.0-0.bpo.4-amd64
      Hi tiste, thanks for helping me.

      The output of the commands is:
      mdadm --examine /dev/sdf
      /dev/sdf:
      Magic : a92b4efc
      Version : 1.2
      Feature Map : 0x0
      Array UUID : 7ac6251d:aa39e657:efbbf6cd:b7bca99b
      Name : omv:raid (local to host omv)
      Creation Time : Sun Nov 6 15:00:41 2011
      Raid Level : raid5
      Raid Devices : 4

      Avail Dev Size : 3907027120 (1863.02 GiB 2000.40 GB)
      Array Size : 5860538880 (5589.05 GiB 6001.19 GB)
      Used Dev Size : 3907025920 (1863.02 GiB 2000.40 GB)
      Data Offset : 2048 sectors
      Super Offset : 8 sectors
      State : clean
      Device UUID : 1e802b8d:e276ff02:1bb15eae:c30f1dc5

      Update Time : Mon Feb 2 15:19:07 2015
      Checksum : 9eeb73ea - correct
      Events : 367

      Layout : left-symmetric
      Chunk Size : 512K

      Device Role : Active device 3
      Array State : AAAA ('A' == active, '.' == missing)


      mdadm --examine /dev/sde
      root@omv:~# mdadm --examine /dev/sde
      /dev/sde:
      Magic : a92b4efc
      Version : 1.2
      Feature Map : 0x0
      Array UUID : 7ac6251d:aa39e657:efbbf6cd:b7bca99b
      Name : omv:raid (local to host omv)
      Creation Time : Sun Nov 6 15:00:41 2011
      Raid Level : raid5
      Raid Devices : 4

      Avail Dev Size : 3907027120 (1863.02 GiB 2000.40 GB)
      Array Size : 5860538880 (5589.05 GiB 6001.19 GB)
      Used Dev Size : 3907025920 (1863.02 GiB 2000.40 GB)
      Data Offset : 2048 sectors
      Super Offset : 8 sectors
      State : clean
      Device UUID : 80e363b7:335a103e:9c81e130:ef695269

      Update Time : Mon Feb 2 15:51:11 2015
      Checksum : 2aececcc - correct
      Events : 367

      Layout : left-symmetric
      Chunk Size : 512K

      Device Role : Active device 1
      Array State : AAAA ('A' == active, '.' == missing)




      mdadm --stop /dev/md127

      did stop the array OK.

      root@omv:~# mdadm -A --force /dev/md127 /dev/sd[fe]
      mdadm: forcing event count in /dev/sdf(3) from 339 upto 367
      mdadm: clearing FAULTY flag for device 1 in /dev/md127 for /dev/sdf
      mdadm: Marking array /dev/md127 as 'clean'
      mdadm: /dev/md127 assembled from 2 drives - not enough to start the array.


      but the reassembly failed (only 2 drives, not enough to start the array).

      Any chance to do more?
      OK, my bad: reassembling only 2 disks is not enough for a 4-disk RAID5.
      You should try assembling the whole array; first of all, read raid.wiki.kernel.org/index.php/RAID_Recovery:

      mdadm -A --force /dev/md127 /dev/sd[adef]
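      Once the four-disk assemble succeeds, it is worth verifying the array state before mounting anything. These are generic checks, nothing OMV-specific:

      ```shell
      # Confirm the array actually started and see whether a resync is running.
      cat /proc/mdstat
      mdadm --detail /dev/md127

      # One member was forced forward from event 339 to 367 earlier in this
      # thread, so a read-only filesystem check before mounting is prudent;
      # use the tool matching your filesystem, for example:
      #   fsck.ext4 -n /dev/md127    # ext4
      #   xfs_repair -n /dev/md127   # xfs
      ```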

    • Nice.
      Happy for you.
      You're welcome ^^