Loss of a hard drive: after reboot my RAID 10 is no longer visible

  • After the loss of a hard drive and a reboot, my RAID 10 is no longer visible in OMV.



    At startup the computer reports "Failed to Run_array"; of course, the backup is not operational.



    A possible solution is described at this link: raid.wiki.kernel.org/index.php/RAID_Recovery (a sketch of its first information-gathering step follows below).



    Can you help me? I have never performed this kind of operation before.



    In addition, SSH does not work properly! Unable to connect.


    content1570725926.txt
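
    For reference, a minimal sketch of the information-gathering step the linked RAID_Recovery page starts with, run as root; the member names /dev/sda through /dev/sdf are an assumption and must be replaced with the actual devices:

    # save the superblock details of every suspected member for later reference (device names assumed)
    mdadm --examine /dev/sd[a-f] > raid.status
    # current assembly state of the md subsystem
    cat /proc/mdstat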

  • Hello,

    here is more information
    ---------------------------------------------------------
    cat /proc/mdstat
    =
    Personalities : [raid10]
    unused devices: <none>
    --------------------------------------------------------
    blkid
    =
    -dash: 2: blkid: not found
    -------------------------------------------------------
    fdisk -l | grep "Disk "
    =
    -dash: 3: fdisk: not found
    ------------------------------------------------------
    cat /etc/mdadm/mdadm.conf
    # mdadm.conf
    #
    # Please refer to mdadm.conf(5) for information about this file.
    #



    # by default, scan all partitions (/proc/partitions) for MD superblocks.
    # alternatively, specify devices to scan, using wildcards if desired.
    # Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
    # To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
    # used if no RAID devices are configured.
    DEVICE partitions



    # auto-create devices with Debian standard permissions
    CREATE owner=root group=disk mode=0660 auto=yes



    # automatically tag new arrays as belonging to the local system
    HOMEHOST <system>



    # definitions of existing MD arrays
    ARRAY /dev/md/Raid10 metadata=1.2 spares=1 name=TLC-NAS-OMV:Raid10 UUID=0d6d4bc1:dfa41e74:f351c0c2:32df8c5c


    ---------------------------------------------------
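
    To check whether the installed disks still carry md superblocks matching the ARRAY line above, here is a hedged sketch (the device names /dev/sd[a-f] are an assumption), run as root:

    # every true member should report the Array UUID listed in mdadm.conf (0d6d4bc1:dfa41e74:f351c0c2:32df8c5c)
    mdadm --examine /dev/sd[a-f] | grep -E "^/dev|Array UUID"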
    6 hard drives of 931 GB each, set up as RAID 10
    ---------------------------------------------------
    Following the loss of a hard disk (faulty SATA connector), after a shutdown and reboot the RAID does not reassemble.
    ---------------------------------------------------


    If someone can help me, I would appreciate it.


    The original disks have been set aside and exact copies have been installed, to prevent any loss of data.
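
    Since only the copies are installed, the forced-assembly attempt described on the linked RAID_Recovery page can be tried on them; this is only a sketch, assuming the copies appear as /dev/sda through /dev/sdf and using the array name from mdadm.conf:

    # stop any partially assembled array first (the md device name may differ, e.g. /dev/md127)
    mdadm --stop /dev/md/Raid10
    # assemble from the copies, forcing mdadm to accept members whose event counters differ slightly
    mdadm --assemble --force --verbose /dev/md/Raid10 /dev/sd[a-f]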


    Regards

    • Official Post

    here is more information

    Most of your information is not helpful since you did not run the commands as root.
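
    A quick sketch of how to rerun them as root on a Debian-based OMV install (blkid and fdisk live in /sbin, which is normally not in a regular user's PATH, hence the "not found" errors):

    su -                        # become root; alternatively prefix each command with sudo
    cat /proc/mdstat
    blkid
    fdisk -l | grep "Disk "
    mdadm --detail --scan --verbose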

