loss of a hard drive, after reboot my raid 10 is no longer visible

    • OMV 3.x


    • loss of a hard drive, after reboot my raid 10 is no longer visible

      after the loss of a hard drive and a reboot, my RAID 10 is no longer visible in OMV


      at startup the computer tells me "Failed to RUN_ARRAY", and of course the backup is not operational


      A solution like the one described at this link may be possible: raid.wiki.kernel.org/index.php/RAID_Recovery


      Can you help me? I have never done this kind of manipulation before.
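
      For reference, the RAID_Recovery wiki page linked above starts with non-destructive inspection of each member's superblock. A minimal sketch, run as root, assuming the six members are /dev/sda through /dev/sdf (hypothetical names; on a real system they may be partitions such as /dev/sda1):

      # Read each member's metadata without writing anything:
      for d in /dev/sd[a-f]; do
          mdadm --examine "$d" | grep -E 'Events|Array State|Device Role'
      done
      # Only if the event counts are close should a forced assembly be tried:
      # mdadm --assemble --force /dev/md/Raid10 /dev/sd[a-f]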


      In addition, SSH is not working properly! I am unable to connect.
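
      If the local console still works, the SSH side can be checked from there; a sketch, assuming a Debian-based system where the service is named "ssh":

      # From the local console, as root:
      systemctl status ssh          # is the daemon running?
      ss -tlnp | grep ':22'         # is anything listening on port 22?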

      content1570725926.txt


    • Hello,

      here is more information
      ---------------------------------------------------------
      cat /proc/mdstat
      =
      Personalities : [raid10]
      unused devices: <none>
      --------------------------------------------------------
      blkid
      =
      -dash: 2: blkid: not found
      -------------------------------------------------------
      fdisk -l | grep "Disk "
      =
      -dash: 3: fdisk: not found
      ------------------------------------------------------
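      The "not found" errors above are what a non-root dash shell prints when /sbin is not in PATH; the same commands normally work as root or via their absolute paths. A sketch, assuming a standard Debian layout:

      su -                          # become root, which puts /sbin on PATH
      # or call the tools by their absolute paths:
      /sbin/blkid
      /sbin/fdisk -l | grep "Disk "
      ------------------------------------------------------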
      cat /etc/mdadm/mdadm.conf
      # mdadm.conf
      #
      # Please refer to mdadm.conf(5) for information about this file.
      #


      # by default, scan all partitions (/proc/partitions) for MD superblocks.
      # alternatively, specify devices to scan, using wildcards if desired.
      # Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
      # To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
      # used if no RAID devices are configured.
      DEVICE partitions


      # auto-create devices with Debian standard permissions
      CREATE owner=root group=disk mode=0660 auto=yes


      # automatically tag new arrays as belonging to the local system
      HOMEHOST <system>


      # definitions of existing MD arrays
      ARRAY /dev/md/Raid10 metadata=1.2 spares=1 name=TLC-NAS-OMV:Raid10 UUID=0d6d4bc1:dfa41e74:f351c0c2:32df8c5c

      ---------------------------------------------------
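      Given the ARRAY line above, reassembly can be attempted from the recorded definition; a sketch, to be run as root and only after the per-member --examine check mentioned earlier:

      mdadm --assemble --scan       # assemble everything defined in /etc/mdadm/mdadm.conf
      # or explicitly, using the UUID from the ARRAY line:
      mdadm --assemble /dev/md/Raid10 --uuid=0d6d4bc1:dfa41e74:f351c0c2:32df8c5c
      cat /proc/mdstat              # verify the array appears
      ---------------------------------------------------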
      6 hard drives of 931 GB each, assembled in RAID 10
      ---------------------------------------------------
      following the loss of a hard disk (faulty SATA connector), after a shutdown and reboot the RAID no longer reassembles.
      ---------------------------------------------------

      if someone can help me

      the original disks have been set aside and exact copies installed, to prevent any loss of data
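
      For making such sector-level copies, GNU ddrescue is a common choice; a sketch, assuming /dev/sda is an original and /dev/sdg a blank disk of at least the same size (hypothetical names):

      # Clone the source to a spare disk; the map file makes the copy resumable:
      ddrescue -f /dev/sda /dev/sdg /root/sda.map
      # Work only on the copies; keep the originals untouched.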

      cordially
      Images
      • P_20191104_211913_vHDR_Auto_21.jpg

    • Tom019 wrote:

      here is more information
      Most of your information is not helpful since you did not run the commands as root.