Raid startup problem

  • Hi all,
    I have 4 disks (2TB Samsung F3 HD203WI) in RAID 5, but sometimes when I reboot or cold start it says it could not find the drives. I hear them power up.


    I think OMV boots too fast.


    Can I do something about it?
    Can I delay the boot, or does anyone have other tips?

  • Maybe this is related?


    I get this message all the time.


    -----------------------
    This is an automatically generated mail message from mdadm
    running on NAS

    A SparesMissing event had been detected on md device /dev/md/Volume.

    Faithfully yours, etc.

    P.S. The /proc/mdstat file currently contains the following:

    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
    md127 : active raid5 sda[4] sdd[5] sdc[2] sdb[1]
    5860538880 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]

    unused devices: <none>
    --------------------------
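The mdstat line above actually shows a healthy array. A minimal Python sketch (function name is mine) of how to read the two bracketed status fields — `[total/active]` and the per-slot `U`/`_` flags:

```python
import re

def parse_mdstat_status(status_line):
    """Parse the bracketed status fields from an mdstat line, e.g. '[4/4] [UUUU]'.

    [total/active] - devices the array wants vs. devices currently running
    [UUUU]         - one flag per slot: 'U' = up, '_' = down/missing
    Returns (total, active, list_of_down_slot_indices).
    """
    m = re.search(r"\[(\d+)/(\d+)\]\s*\[([U_]+)\]", status_line)
    if not m:
        raise ValueError("no mdstat status fields found")
    total, active = int(m.group(1)), int(m.group(2))
    down = [i for i, flag in enumerate(m.group(3)) if flag == "_"]
    return total, active, down

# The healthy RAID 5 line from the mail above:
print(parse_mdstat_status(
    "5860538880 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]"))
# -> (4, 4, [])
```

`[4/4] [UUUU]` means all four members are up, so the mail is only about spares, not about a failure.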

  • Not solved :(


    I switched back to OMV because the transfer rate in ESXi was too low.
    I still get the message. I replaced the hard disk with a new one, but it works for a couple of weeks and then the message comes back.


    What to do?

  • Hi,


    Try the "rootdelay" option in GRUB...


    Google and the search engine of this forum are your friends. The solution was already given here...
    http://forums.openmediavault.o…opic.php?f=11&t=433#p1460


    Cheers,

    Lian Li PC-V354 (with Be Quiet! Silent Wings 3 fans)
    ASRock Rack x470D4U | AMD Ryzen 5 3600 | Crucial 16GB DDR4 2666MHz ECC | Intel x550T2 10Gb NIC

    1 x ADATA 8200 Pro 256GB NVMe for System/Caches/Logs/Downloads
    5 x Western Digital 10TB HDD in RAID 6 for Data
    1 x Western Digital 2TB HDD for Backups

    Powered by OMV v5.6.26 & Linux kernel 5.10.x
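The "rootdelay" suggestion above maps to the kernel's `rootdelay=` boot parameter, which makes the kernel wait before mounting the root device so slow-spinning disks have time to appear. A sketch of how it is typically set on a Debian-based system like OMV (the 10-second value is just an example, tune it to your hardware):

```shell
# /etc/default/grub -- append rootdelay to the kernel command line:
GRUB_CMDLINE_LINUX_DEFAULT="quiet rootdelay=10"

# Then regenerate the GRUB configuration and reboot:
update-grub
```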

  • It does not look like you have a spare at all.


    So you have a RAID 5 with 4 disks (3+1) and no spares. That is quite normal for home use. The message simply states that you do not have any spares left (really it should say: you never configured any spares). It is meant to warn you that your last spare is now in use in the RAID and that you should replace the broken disk ...


    All of that is useless if you do not have a spare and never intend to have one.


    So no, your issue is not related to a spare. A missing spare will not interrupt or prevent the boot process.

    Everything is possible, sometimes it requires Google to find out how.
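A common source of the SparesMissing mail (an assumption about this setup, worth verifying) is an ARRAY line in `/etc/mdadm/mdadm.conf` that declares a `spares=` count the array does not actually have. A sketch of how to check and fix it:

```shell
# Compare what mdadm thinks the array should have with reality:
mdadm --detail /dev/md127

# /etc/mdadm/mdadm.conf -- if the ARRAY line carries a spares= count
# and you have no spare, drop that token. For example:
#   ARRAY /dev/md/Volume metadata=1.2 spares=1 name=NAS:Volume UUID=...
# becomes
#   ARRAY /dev/md/Volume metadata=1.2 name=NAS:Volume UUID=...
```

With the `spares=` token gone, mdadm's monitor no longer expects a spare and stops sending the mail.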

    I have RAID 10 now and still have problems.
    The disks are fine, but sometimes, for no reason, one of them drops out of the RAID
    (and is no longer visible in OMV). When I take it out and put it back in, I erase the disk and can rebuild the array.


    After a couple of weeks another disk drops out of the RAID for no reason.
    I have tested the disks in my Windows machine and they look fine (used SeaTools, from Seagate).


    ----------------------------------------------------------------------------------


    This is an automatically generated mail message from mdadm
    running on NAS

    A DegradedArray event had been detected on md device /dev/md0.

    Faithfully yours, etc.

    P.S. The /proc/mdstat file currently contains the following:

    Personalities : [raid6] [raid5] [raid4] [raid10]
    md0 : active raid10 sdf[9] sdj[8] sde[0] sdc[7] sdb[5] sda[4](F) sdi[3] sdh[2]
    7814051840 blocks super 1.2 512K chunks 2 near-copies [8/6] [UUUU_U_U]
    [==>..................] recovery = 13.7% (267903616/1953512960) finish=313.0min speed=89748K/sec

    unused devices: <none>



    ----------------------------------------------------------------------------------
    This is an automatically generated mail message from mdadm
    running on NAS

    A Fail event had been detected on md device /dev/md0.

    It could be related to component device /dev/sda.

    Faithfully yours, etc.

    P.S. The /proc/mdstat file currently contains the following:

    Personalities : [raid6] [raid5] [raid4] [raid10]
    md0 : active raid10 sde[0] sdc[7] sdb[5] sda[4](F) sdi[3] sdh[2]
    7814051840 blocks super 1.2 512K chunks 2 near-copies [8/5] [U_UU_U_U]

    unused devices: <none>
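When a member is marked `(F)` as `/dev/sda` is in the mails above, the usual sequence (a sketch only; the device names come from these logs, double-check them against your own system before running anything) is to check the disk's SMART health, remove it from the array, and add it back to trigger a rebuild — no erase in Windows needed:

```shell
# SMART attributes and the error log often show problems SeaTools misses
# (reallocated sectors, CRC errors pointing at a bad cable, timeouts):
smartctl -a /dev/sda

# Remove the already-failed member from the array, then add it back
# so md rebuilds onto it:
mdadm --manage /dev/md0 --remove /dev/sda
mdadm --manage /dev/md0 --add /dev/sda

# Watch the rebuild progress:
cat /proc/mdstat
```

If the same slot keeps dropping with healthy disks, the cable, backplane port, or controller is the more likely culprit than the drives themselves.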
