RAID rebuild on reboot?

  • I've had a drive on my RAID5 array go bad. I replaced it, formatted the new one (GPT, since it's 3TB), and added it to the array using the webUI. Took a long time to sync, but it was working fine.


    Then I got a SparesMissing event email, so I edited mdadm.conf to set spares=0, and it was all good.
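    The edit looked roughly like this (a sketch - the UUID and names here are placeholders, not my real config; the spares= tag on the ARRAY line is what mdadm's monitor checks against):


    Code
    # /etc/mdadm/mdadm.conf - before (UUID/name made up for illustration)
    ARRAY /dev/md127 metadata=1.2 spares=1 name=myhost:myarray UUID=00000000:00000000:00000000:00000000
    # after - dropping the spares= tag (or setting spares=0) stops the
    # SparesMissing mails, since mdadm no longer expects a spare
    ARRAY /dev/md127 metadata=1.2 name=myhost:myarray UUID=00000000:00000000:00000000:00000000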


    But recently I had to reboot the server, and the array was back to 3 devices with a failed one (removed). So I re-added my new hard disk and waited for the resync to finish (again) - this time I didn't get the SparesMissing event email, but now I'm worried... will this happen again on reboot? It takes a long time to resync... :/ I'm on OMV 2.1.23 now.


    My array (/dev/md127 - sda is the new drive):



    My mdadm.conf:



    Any help appreciated!

    • Official post

    Right forum, but we don't know whether the array will have to resync on the next reboot.


  • Well, I just had to reboot and I got the same thing - mdstat complains there's a "removed" disk, I re-add the new one I bought, and it's off to rebuild again! Do I need to "save" the array's configuration somehow? Isn't adding it in the web UI enough? And I'm pretty sure I'll get the SparesMissing email again when the rebuild ends.


    Please help :/ I'm on OMV 2.1.25
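    In case "saving" the configuration really is the issue, my understanding is it would go something like this on OMV 2.x (a sketch, as root - I'm not sure this is the official way):


    Code
    # show the ARRAY line mdadm would write for the current, healthy array
    mdadm --detail --scan
    # make /etc/mdadm/mdadm.conf match that output, then refresh the copy
    # baked into the initramfs that assembles the array at boot
    update-initramfs -u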

  • I have the same issue: 8x 4 TB disks off an IBM 1015 in IT mode, RAID 6, a fresh array created less than a month ago.


    Every time I reboot, I get a removed disk. It's not always the same disk; the system boots up and shows a disk removed.


    I rebuild, then the same thing happens again. I was on version 2.1, now 2.2 as of this morning.


    Current stats


    /dev/md1:
    Version : 1.2
    Creation Time : Sun Dec 27 16:42:53 2015
    Raid Level : raid6
    Array Size : 23441323008 (22355.39 GiB 24003.91 GB)
    Used Dev Size : 3906887168 (3725.90 GiB 4000.65 GB)
    Raid Devices : 8
    Total Devices : 8
    Persistence : Superblock is persistent


    Update Time : Wed Feb 24 10:25:30 2016
    State : clean
    Active Devices : 8
    Working Devices : 8
    Failed Devices : 0
    Spare Devices : 0


    Layout : left-symmetric
    Chunk Size : 512K


    Name : OMV:stor1 (local to host OMV)
    UUID : d793de05:ac9bbfdc:326b3fbb:74949d64
    Events : 507463


    Number Major Minor RaidDevice State
    0 8 96 0 active sync /dev/sdg
    10 8 32 1 active sync /dev/sdc
    2 8 64 2 active sync /dev/sde
    3 8 112 3 active sync /dev/sdh
    5 8 80 4 active sync /dev/sdf
    4 8 48 5 active sync /dev/sdd
    8 8 0 6 active sync /dev/sda
    9 8 16 7 active sync /dev/sdb


    I rebuilt after this last boot. If I start the OS with mdadm off, run a reassemble, and add the disk back in, all is well - but the OS likes to override my attempts to stop it from autostarting mdadm, so mostly it just starts with a missing disk.
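    For the record, the manual reassembly I do after a bad boot looks roughly like this (device names are from my box; the re-added member varies):


    Code
    # stop the partially assembled array, then force-assemble all members
    mdadm --stop /dev/md1
    mdadm --assemble --force --verbose /dev/md1 /dev/sd[a-h]
    # if one member is still left out, try re-adding it explicitly
    mdadm /dev/md1 --re-add /dev/sdb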


    Here are some of the logs from when the disk is kicked:


    Feb 10 13:10:38 OMV kernel: [ 10.629412] md/raid:md1: device sdb operational as raid disk 2
    Feb 10 13:18:01 OMV kernel: [ 8.890599] sd 0:0:18:0: [sdb] physical block alignment offset: 4096
    Feb 10 13:18:01 OMV kernel: [ 8.890602] sd 0:0:18:0: [sdb] 7814037168 512-byte logical blocks: (4.00 TB/3.63 TiB)
    Feb 10 13:18:01 OMV kernel: [ 8.890604] sd 0:0:18:0: [sdb] 4096-byte physical blocks
    Feb 10 13:18:01 OMV kernel: [ 9.013598] sd 0:0:18:0: [sdb] Write Protect is off
    Feb 10 13:18:01 OMV kernel: [ 9.020049] sd 0:0:18:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
    Feb 10 13:18:01 OMV kernel: [ 9.153070] sdb: unknown partition table
    Feb 10 13:18:01 OMV kernel: [ 9.311346] sd 0:0:18:0: [sdb] Attached SCSI disk
    Feb 10 13:18:01 OMV kernel: [ 9.969457] md: bind<sdb>
    Feb 10 13:18:01 OMV kernel: [ 9.970922] md: kicking non-fresh sdb from array!
    Feb 10 13:18:01 OMV kernel: [ 9.970925] md: unbind<sdb>
    Feb 10 13:18:01 OMV kernel: [ 9.994521] md: export_rdev(sdb)
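    As far as I can tell, "kicking non-fresh" means that member's event counter is behind the rest of the array, so mdadm refuses to auto-include it. A quick way to compare the counters (sketch):


    Code
    # print per-member event counts; the kicked disk shows a lower number
    mdadm --examine /dev/sd[a-h] | grep -E '/dev/sd|Events'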


    This behavior seems to be entirely reproducible


    For now I am trying to avoid reboots while I figure out how to start OMV without it trying to assemble the array.
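    The closest I've found to "mdadm off" at boot is the following (a sketch - behavior may vary with the mdadm version, so treat it as an experiment):


    Code
    # in /etc/mdadm/mdadm.conf: comment out the ARRAY line(s) and add
    AUTO -all
    # then rebuild the initramfs so early boot picks up the change
    update-initramfs -u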


    PS: the disks all pass SMART short and long tests and read/write tests, and it seems to choose a different disk to kick each time.

  • Got the same problem.


    After adding a new hard disk to my RAID 5 everything worked fine, but after the reboot it went missing and I had to resync.


    This is the mail:


    Quote

    This is an automatically generated mail message from mdadm running on NAS-OMV.
    A SparesMissing event had been detected on md device /dev/md0.
    Faithfully yours, etc.
    P.S. The /proc/mdstat file currently contains the following:
    Personalities : [raid6] [raid5] [raid4]
    md0 : active raid5 sda[0] sde[3] sdd[2] sdc[1]
    11720540160 blocks super 1.2 level 5, 512k chunk, algorithm 2 [5/4] [UUUU_]
    unused devices: <none>


    Please let me know if you need any other information. I'm new to Linux and need a little more detail.

    • Official post

    Please let me know if you need any other information. I'm new to Linux and need a little more detail.


    Degraded or missing raid array questions


    I have no idea why people are losing their arrays on reboot. After doing lots of research, this isn't an OMV issue or even a Debian issue. It is an mdadm issue. I see report after report on all the distros (Arch, CentOS, Debian, Ubuntu, etc.).


    • Official post

    If it is mdadm RAID10, you could still potentially have the problem. While I haven't had any issues, this is happening on all RAID levels.


  • Quote from neodata: "Please let me know if you need any other information. I'm new to Linux and need a little more detail."
    Degraded or missing raid array questions - http://forums.openmediavault.org/index.php/Thread/8631-Degraded-or-missing-raid-array-questions/


    I have no idea why people are losing their array on reboot. After doing lots of…


    Code
    cat /proc/mdstat


    Personalities : [raid6] [raid5] [raid4]
    md0 : active raid5 sda[0] sde[3] sdd[2] sdc[1]
    11720540160 blocks super 1.2 level 5, 512k chunk, algorithm 2 [5/4] [UUUU_]


    unused devices: <none>



    Code
    blkid



    /dev/sdc: UUID="32d3f50c-d25c-4482-7835-bf0b90cd4649" UUID_SUB="2aff9e51-39ad-3bf5-9d39-417d5400e7e6" LABEL="NAS-OMV:meinRAID5" TYPE="linux_raid_member"
    /dev/sdd: UUID="32d3f50c-d25c-4482-7835-bf0b90cd4649" UUID_SUB="a3aa903a-b1ed-b4b1-eb90-69c49e3106f4" LABEL="NAS-OMV:meinRAID5" TYPE="linux_raid_member"
    /dev/sda: UUID="32d3f50c-d25c-4482-7835-bf0b90cd4649" UUID_SUB="58952015-7d8d-9eba-5a15-18e897b8003b" LABEL="NAS-OMV:meinRAID5" TYPE="linux_raid_member"
    /dev/md0: LABEL="Daten" UUID="1fed2e7a-967b-4473-877f-11a947f88b38" TYPE="ext4"
    /dev/sde: UUID="32d3f50c-d25c-4482-7835-bf0b90cd4649" UUID_SUB="ecc83b31-85a5-9802-c0f5-b3482e7c137d" LABEL="NAS-OMV:meinRAID5" TYPE="linux_raid_member"
    /dev/sdf1: UUID="e18308d2-eea9-4a61-aba0-b86b84025eab" TYPE="ext4"
    /dev/sdf5: UUID="473d9cc6-c276-4122-8fcd-b12ad6475a14" TYPE="swap"



    Code
    cat /etc/mdadm/mdadm.conf


    # mdadm.conf
    #
    # Please refer to mdadm.conf(5) for information about this file.
    #


    # by default, scan all partitions (/proc/partitions) for MD superblocks.
    # alternatively, specify devices to scan, using wildcards if desired.
    # Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
    # To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
    # used if no RAID devices are configured.
    DEVICE partitions


    # auto-create devices with Debian standard permissions
    CREATE owner=root group=disk mode=0660 auto=yes


    # automatically tag new arrays as belonging to the local system
    HOMEHOST <system>


    # definitions of existing MD arrays
    ARRAY /dev/md0 metadata=1.2 spares=1 name=NAS-OMV:meinRAID5 UUID=32d3f50c:d25c4482:7835bf0b:90cd4649


    # instruct the monitoring daemon where to send mail alerts
    MAILADDR ************
    MAILFROM root


    Code
    mdadm --detail --scan --verbose


    ARRAY /dev/md0 level=raid5 num-devices=5 metadata=1.2 name=NAS-OMV:meinRAID5 UUID=32d3f50c:d25c4482:7835bf0b:90cd4649
    devices=/dev/sda,/dev/sdc,/dev/sdd,/dev/sde


    This is with the missing hard drive. I'll let it resync overnight and post again if needed.
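    One thing I notice in my config above: the ARRAY line still says spares=1 even though mdstat shows no spare - maybe that is exactly what triggers the SparesMissing mail? If so, I guess fixing it would look like this (a sketch, as root):


    Code
    # keep a backup, then get the ARRAY line mdadm would write now
    cp /etc/mdadm/mdadm.conf /etc/mdadm/mdadm.conf.bak
    mdadm --detail --scan
    # replace the old ARRAY line (with its stale spares=1 tag) with that
    # output, then refresh the initramfs copy
    update-initramfs -u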

    • Official post

    No output of fdisk -l. I don't see any sign of a fifth drive. Did it fail?


  • Ah sorry, I forgot to copy it, my mistake :> sdb is the missing hard drive; it seems it has an error. Am I right?


    Code
    fdisk -l


    Disk /dev/sde: 3000.6 GB, 3000592982016 bytes
    255 heads, 63 sectors/track, 364801 cylinders, total 5860533168 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 4096 bytes
    I/O size (minimum/optimal): 4096 bytes / 4096 bytes
    Disk identifier: 0x00000000


    Disk /dev/sde doesn't contain a valid partition table


    Disk /dev/sdd: 3000.6 GB, 3000592982016 bytes
    255 heads, 63 sectors/track, 364801 cylinders, total 5860533168 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 4096 bytes
    I/O size (minimum/optimal): 4096 bytes / 4096 bytes
    Disk identifier: 0x00000000


    Disk /dev/sdd doesn't contain a valid partition table


    Disk /dev/sdf: 32.0 GB, 32017047552 bytes
    255 heads, 63 sectors/track, 3892 cylinders, total 62533296 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x00079103


    Device Boot Start End Blocks Id System
    /dev/sdf1 * 2048 59895807 29946880 83 Linux
    /dev/sdf2 59897854 62531583 1316865 5 Extended
    /dev/sdf5 59897856 62531583 1316864 82 Linux swap / Solaris


    WARNING: GPT (GUID Partition Table) detected on '/dev/sdb'! The util fdisk doesn't support GPT. Use GNU Parted.


    Disk /dev/sdb: 3000.6 GB, 3000592982016 bytes
    256 heads, 63 sectors/track, 363376 cylinders, total 5860533168 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 4096 bytes
    I/O size (minimum/optimal): 4096 bytes / 4096 bytes
    Disk identifier: 0x0086110b


    Device Boot Start End Blocks Id System
    /dev/sdb1 1 4294967295 2147483647+ ee GPT
    Partition 1 does not start on physical sector boundary.


    Disk /dev/sda: 3000.6 GB, 3000592982016 bytes
    255 heads, 63 sectors/track, 364801 cylinders, total 5860533168 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 4096 bytes
    I/O size (minimum/optimal): 4096 bytes / 4096 bytes
    Disk identifier: 0x00000000


    Disk /dev/sda doesn't contain a valid partition table


    Disk /dev/sdc: 3000.6 GB, 3000592982016 bytes
    255 heads, 63 sectors/track, 364801 cylinders, total 5860533168 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 4096 bytes
    I/O size (minimum/optimal): 4096 bytes / 4096 bytes
    Disk identifier: 0x00000000


    Disk /dev/sdc doesn't contain a valid partition table


    Disk /dev/md0: 12001.8 GB, 12001833123840 bytes
    2 heads, 4 sectors/track, -1364832256 cylinders, total 23441080320 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 4096 bytes
    I/O size (minimum/optimal): 524288 bytes / 2097152 bytes
    Disk identifier: 0x00000000
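    Since sdb is the only disk still carrying a GPT while the active members are bare disks, I guess the thing to check is whether it ever got an md superblock (sketch):


    Code
    # prints the md metadata on sdb, or an error if there is none
    mdadm --examine /dev/sdb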

    • Official post

    Try (as root):


    mdadm --stop /dev/md0
    mdadm --assemble --force --verbose /dev/md0 /dev/sd[abcde]


  • I tried, but then this came up:


    mdadm --stop /dev/md0


    mdadm: Cannot get exclusive access to /dev/md0:Perhaps a running process, mounted filesystem or active volume group?
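    I suppose the filesystem on md0 has to be unmounted first? Something like this, maybe (mount point assumed):


    Code
    # unmount whatever is on the array, then retry the stop/assemble
    umount /dev/md0
    mdadm --stop /dev/md0
    mdadm --assemble --force --verbose /dev/md0 /dev/sd[abcde]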

    • Official post

    What is the output of: cat /proc/mdstat


  • cat /proc/mdstat


    Personalities : [raid6] [raid5] [raid4]
    md0 : active raid5 sda[0] sde[3] sdd[2] sdc[1]
    11720540160 blocks super 1.2 level 5, 512k chunk, algorithm 2 [5/4] [UUUU_]


    unused devices: <none>

    • Official post

    There is something wrong with /dev/sdb. You could try wiping it and then retrying the previous commands to rebuild again.
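    For example (destructive for anything on sdb - double-check the device name first):


    Code
    # clear the stale md superblock on the kicked disk
    mdadm --zero-superblock /dev/sdb
    # optionally wipe the leftover GPT and any other signatures too
    wipefs -a /dev/sdb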

