Hard system disk crash! (really dead)

    • OMV 2.x
    • Resolved
    • Hard system disk crash! (really dead)

      Hello,

      Let me open this topic because I could not find the information by searching.
      My apologies in advance if the answer is already in another thread.

      Here is my problem:

      My system disk, which holds the openmediavault installation, is a Compact Flash card mounted on an IDE adapter. The rest of my drives are a set of 7 x 2 TB disks in RAID 5.

      Now my system drive has given up: it no longer boots, it is not detected by any live CD (Clonezilla, DEFT, etc.), and it is not detected either when I plug the card into a Compact Flash reader. (Yes, I think it's dead...)

      Of course, I have no backup of the system disk ...

      My question is this:

      Is it possible to recover my 7 x 2 TB array with a fresh installation on a new IDE drive?
      And if reinstalling does not allow it, is it still possible to recover the data some other way? (given that the data is spread across the disks)

      In advance thank you for your answers.

      Version: 2.2.1 (Stoneburner)


      PS: Please excuse my Google translation to English.
    • @Naabster yes, you have not lost the data on your array; it will survive a new installation.
      The flashmemory plugin extends the life of the boot medium quite considerably. You have to install AND configure it!

      1. Shut down the OMV machine.
      2. Disconnect the data HDDs.
      3. Power on the machine and install OMV.
      4. Install omv-extras.org.
      5. Install and configure the flashmemory plugin (see the sketch below).
      6. Shut down the OMV machine.
      7. Reconnect the data HDDs.
      8. Power on the OMV machine and you will see your data filesystems.
      9. Configure the OMV system.
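
      If you prefer to do steps 4 and 5 from the command line, something along these lines should work once the omv-extras .deb from omv-extras.org is installed (the plugin package name below is how it appears in the omv-extras repository; double-check it in your Plugins tab before installing):

      Source Code

      # refresh the package lists after adding the omv-extras repository
      apt-get update
      # install the flashmemory plugin to reduce writes to the CF/flash boot medium
      apt-get install openmediavault-flashmemory
      # then enable and configure it in the web UI before reconnecting the data disks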


    • I may have (I hope) said something stupid...

      In fact, all my disks are listed under Physical Disks:

      7 x 2 TB + 2 x 250 GB, as in the picture:
      [IMG:http://pix.toile-libre.org/upload/img/1458077527.png]

      When I go into RAID Management, I only see the volumes of the Debian installation (2 x 250 GB in RAID 1 --> 2 GB /boot + 45 GB swap + 200 GB for /).

      [IMG:http://pix.toile-libre.org/upload/img/1458077792.png]

      But when I want to create a new volume, it only sees a single 2 TB disk.

      [IMG:http://pix.toile-libre.org/upload/img/1458077838.png]

      That is because my array actually comprised 5 disks in RAID 5 + 1 spare; the last disk was kept aside just in case... (not in the array)

      I conclude that my array is seen by the system but not by the web interface...

      Any suggestions for trying to mount it by hand?
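
      In the meantime, here are the read-only commands I can run from the shell to show what the kernel and mdadm actually see (outputs in my next post):

      Source Code

      cat /proc/mdstat        # arrays the kernel has assembled (active or inactive)
      mdadm --examine --scan  # arrays described by the superblocks stored on the member disks
      blkid                   # per-device filesystem and RAID-member signatures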
    • @omavoss, @ryecoaaron, a big thank you for looking into my problem!
      • The output of 'cat /proc/mdstat':

      Source Code

      Personalities : [raid1]
      md2 : active raid1 sda3[0] sdc3[1]
            195181440 blocks super 1.2 [2/2] [UU]
      md1 : active (auto-read-only) raid1 sda2[0] sdc2[1]
            46841728 blocks super 1.2 [2/2] [UU]
      md0 : active raid1 sda1[0] sdc1[1]
            1950656 blocks super 1.2 [2/2] [UU]
      unused devices: <none>

      • The output of 'blkid':

      Source Code

      /dev/md1: UUID="01212447-d186-49bd-a473-922b5ceece74" TYPE="swap"
      /dev/md0: UUID="ec7597c0-795d-4f22-9d70-5dbf400333fd" TYPE="ext2"
      /dev/md2: UUID="07d5995c-b06a-4f1d-91aa-86e44af71aac" TYPE="ext4"
      /dev/sdb: UUID="cdf25234-31f7-5570-5a62-319115939f7a" UUID_SUB="0e3c014f-7e81-f854-662a-c718ab536b32" LABEL="the-vault:warehouse" TYPE="linux_raid_member"
      /dev/sdi: UUID="cdf25234-31f7-5570-5a62-319115939f7a" UUID_SUB="1d949b23-5ed5-0306-2d84-cecaf13941e0" LABEL="the-vault:warehouse" TYPE="linux_raid_member"
      /dev/sdf: UUID="cdf25234-31f7-5570-5a62-319115939f7a" UUID_SUB="6ad79752-d38c-6893-609f-da9856d225a0" LABEL="the-vault:warehouse" TYPE="linux_raid_member"
      /dev/sda1: UUID="4e557c08-5890-9801-fb62-2a57d87dc65d" UUID_SUB="362b80af-4492-63de-2acc-fac0707e1b4b" LABEL="Hightower:0" TYPE="linux_raid_member"
      /dev/sda2: UUID="bb326408-2c33-de89-c6f0-33c1a1010c8f" UUID_SUB="3ec40ac7-2976-8de7-3aa3-378604a85f60" LABEL="Hightower:1" TYPE="linux_raid_member"
      /dev/sda3: UUID="09d4b1a6-51ce-7a4a-4373-6b37acc34724" UUID_SUB="de1e47d3-e5fa-ec8f-52c3-742df3724a61" LABEL="Hightower:2" TYPE="linux_raid_member"
      /dev/sdc1: UUID="4e557c08-5890-9801-fb62-2a57d87dc65d" UUID_SUB="554fbe5a-2b4a-7fed-e694-bc37be96541d" LABEL="Hightower:0" TYPE="linux_raid_member"
      /dev/sdc2: UUID="bb326408-2c33-de89-c6f0-33c1a1010c8f" UUID_SUB="b79828d2-3170-a7d6-5f89-970b5525c6f8" LABEL="Hightower:1" TYPE="linux_raid_member"
      /dev/sdc3: UUID="09d4b1a6-51ce-7a4a-4373-6b37acc34724" UUID_SUB="5309242a-a9cc-931c-d1a6-51d88fcd4dfa" LABEL="Hightower:2" TYPE="linux_raid_member"
      /dev/sde: UUID="cdf25234-31f7-5570-5a62-319115939f7a" UUID_SUB="280066e6-de60-454e-3c10-8fab18f7540e" LABEL="the-vault:warehouse" TYPE="linux_raid_member"
      /dev/sdg: UUID="cdf25234-31f7-5570-5a62-319115939f7a" UUID_SUB="dc60ded4-03d9-ccdb-39f9-40e9d38cbb14" LABEL="the-vault:warehouse" TYPE="linux_raid_member"
      /dev/sdh: UUID="cdf25234-31f7-5570-5a62-319115939f7a" UUID_SUB="6bba2ded-6adb-3c7b-5119-f2c24f215962" LABEL="the-vault:warehouse" TYPE="linux_raid_member"

      • The output of 'fdisk -l':

      Source Code

      Disk /dev/sda: 250.0 GB, 250000000000 bytes
      255 heads, 63 sectors/track, 30394 cylinders, total 488281250 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x58bf64e1

         Device Boot      Start         End      Blocks   Id  System
      /dev/sda1   *        2048     3905535     1951744   fd  Linux raid autodetect
      /dev/sda2         3905536    97654783    46874624   fd  Linux raid autodetect
      /dev/sda3        97654784   488280063   195312640   fd  Linux raid autodetect

      Disk /dev/sdc: 250.0 GB, 250000000000 bytes
      255 heads, 63 sectors/track, 30394 cylinders, total 488281250 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x984ae2d7

         Device Boot      Start         End      Blocks   Id  System
      /dev/sdc1            2048     3905535     1951744   fd  Linux raid autodetect
      /dev/sdc2         3905536    97654783    46874624   fd  Linux raid autodetect
      /dev/sdc3        97654784   488280063   195312640   fd  Linux raid autodetect

      Disk /dev/sdd: 2000.4 GB, 2000398934016 bytes
      255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 4096 bytes
      I/O size (minimum/optimal): 4096 bytes / 4096 bytes
      Disk identifier: 0x00000000

      Disk /dev/sdd doesn't contain a valid partition table

      Disk /dev/sde: 2000.4 GB, 2000398934016 bytes
      255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 4096 bytes
      I/O size (minimum/optimal): 4096 bytes / 4096 bytes
      Disk identifier: 0x00000000

      Disk /dev/sde doesn't contain a valid partition table

      Disk /dev/sdf: 2000.4 GB, 2000398934016 bytes
      255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 4096 bytes
      I/O size (minimum/optimal): 4096 bytes / 4096 bytes
      Disk identifier: 0x00000000

      Disk /dev/sdf doesn't contain a valid partition table

      Disk /dev/sdg: 2000.4 GB, 2000398934016 bytes
      255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 4096 bytes
      I/O size (minimum/optimal): 4096 bytes / 4096 bytes
      Disk identifier: 0x00000000

      Disk /dev/sdg doesn't contain a valid partition table

      Disk /dev/sdi: 2000.4 GB, 2000398934016 bytes
      255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 4096 bytes
      I/O size (minimum/optimal): 4096 bytes / 4096 bytes
      Disk identifier: 0x00000000

      Disk /dev/sdi doesn't contain a valid partition table

      Disk /dev/sdh: 2000.4 GB, 2000398934016 bytes
      255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 4096 bytes
      I/O size (minimum/optimal): 4096 bytes / 4096 bytes
      Disk identifier: 0x00000000

      Disk /dev/sdh doesn't contain a valid partition table

      Disk /dev/sdb: 2000.4 GB, 2000398934016 bytes
      255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 4096 bytes
      I/O size (minimum/optimal): 4096 bytes / 4096 bytes
      Disk identifier: 0x00000000

      Disk /dev/sdb doesn't contain a valid partition table

      Disk /dev/md0: 1997 MB, 1997471744 bytes
      2 heads, 4 sectors/track, 487664 cylinders, total 3901312 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x00000000

      Disk /dev/md0 doesn't contain a valid partition table

      Disk /dev/md1: 48.0 GB, 47965929472 bytes
      2 heads, 4 sectors/track, 11710432 cylinders, total 93683456 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x00000000

      Disk /dev/md1 doesn't contain a valid partition table

      Disk /dev/md2: 199.9 GB, 199865794560 bytes
      2 heads, 4 sectors/track, 48795360 cylinders, total 390362880 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x00000000

      Disk /dev/md2 doesn't contain a valid partition table


      In case it can be useful, here is also the output of:
      • mdadm --detail --scan

      Source Code

      ARRAY /dev/md/0 metadata=1.2 name=Hightower:0 UUID=4e557c08:58909801:fb622a57:d87dc65d
      ARRAY /dev/md/1 metadata=1.2 name=Hightower:1 UUID=bb326408:2c33de89:c6f033c1:a1010c8f
      ARRAY /dev/md/2 metadata=1.2 name=Hightower:2 UUID=09d4b1a6:51ce7a4a:43736b37:acc34724

      • mdadm --examine --scan:

      Source Code

      ARRAY /dev/md/0 metadata=1.2 UUID=4e557c08:58909801:fb622a57:d87dc65d name=Hightower:0
      ARRAY /dev/md/1 metadata=1.2 UUID=bb326408:2c33de89:c6f033c1:a1010c8f name=Hightower:1
      ARRAY /dev/md/2 metadata=1.2 UUID=09d4b1a6:51ce7a4a:43736b37:acc34724 name=Hightower:2
      ARRAY /dev/md/warehouse metadata=1.2 UUID=cdf25234:31f75570:5a623191:15939f7a name=the-vault:warehouse
         spares=1


      My array is there, but I cannot mount it.
      Especially since the device "/dev/md/warehouse" does not exist; I do not understand where the command "mdadm --examine --scan" finds that result...

      Source Code

      ls -l /dev/md/
      total 0
      lrwxrwxrwx 1 root root 6 Mar 16 16:43 0 -> ../md0
      lrwxrwxrwx 1 root root 6 Mar 16 16:43 1 -> ../md1
      lrwxrwxrwx 1 root root 6 Mar 16 16:43 2 -> ../md2
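
      (Note: --examine reads the RAID superblock stored on each member disk itself, not the /dev/md/ nodes, which only appear once an array is actually assembled. That is why the warehouse array shows up in the scan even though /dev/md/warehouse does not exist yet. The same metadata can be read from a single member, for example:)

      Source Code

      mdadm --examine /dev/sdb    # prints the superblock on that disk: array UUID, name, device role, array state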
    • @ryecoaaron

      The output with more tests:

      Source Code

      mdadm --assemble --verbose --force /dev/md127 /dev/sd[bdefghi]
      mdadm: looking for devices for /dev/md127
      mdadm: no RAID superblock on /dev/sde
      mdadm: /dev/sde has no superblock - assembly aborted
      root@Hightower:~# mdadm --assemble --verbose --force /dev/md127 /dev/sd[bdfhi]
      mdadm: looking for devices for /dev/md127
      mdadm: no RAID superblock on /dev/sdh
      mdadm: /dev/sdh has no superblock - assembly aborted
      root@Hightower:~# mdadm --assemble --verbose --force /dev/md127 /dev/sd[bdfi]
      mdadm: looking for devices for /dev/md127
      mdadm: /dev/sdb is identified as a member of /dev/md127, slot 1.
      mdadm: /dev/sdd is identified as a member of /dev/md127, slot 3.
      mdadm: /dev/sdf is identified as a member of /dev/md127, slot 4.
      mdadm: /dev/sdi is identified as a member of /dev/md127, slot -1.
      mdadm: no uptodate device for slot 0 of /dev/md127
      mdadm: no uptodate device for slot 2 of /dev/md127
      mdadm: added /dev/sdd to /dev/md127 as 3
      mdadm: added /dev/sdf to /dev/md127 as 4
      mdadm: added /dev/sdi to /dev/md127 as -1
      mdadm: added /dev/sdb to /dev/md127 as 1
      mdadm: /dev/md127 assembled from 3 drives and 1 spare - not enough to start the array
    • Since sde and sdh don't seem to have superblocks and I thought you were using a 7 drive array, try:
      mdadm --assemble --verbose --force /dev/md127 /dev/sd[bdfgi]
    • Source Code

      mdadm --assemble --verbose --force /dev/md127 /dev/sd[bdfgi]
      mdadm: looking for devices for /dev/md127
      mdadm: /dev/sdb is busy - skipping
      mdadm: /dev/sdd is busy - skipping
      mdadm: /dev/sdf is busy - skipping
      mdadm: Cannot assemble mbr metadata on /dev/sdg
      mdadm: /dev/sdg has no superblock - assembly aborted


      but it gets better:

      Source Code

      cat /proc/mdstat
      Personalities : [raid1]
      md127 : inactive sdb[1](S) sdi[6](S) sdf[4](S) sdd[3](S)
            7813534048 blocks super 1.2
      md2 : active raid1 sde3[0] sdg3[1]
            195181440 blocks super 1.2 [2/2] [UU]
      md1 : active (auto-read-only) raid1 sde2[0] sdg2[1]
            46841728 blocks super 1.2 [2/2] [UU]
    • I didn't realize it assembled in your previous output. So, you need to stop it before assembling again.

      mdadm --stop /dev/md127
      mdadm --assemble --verbose --force /dev/md127 /dev/sd[bdfgi]

      But it looks like sdg is missing superblock data. I don't know what your original drives were in the raid. So, you can keep trying different combinations.
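
      A quick way to see which disks still carry a readable warehouse superblock before trying more combinations (a sketch; adjust the device list to your system):

      Source Code

      for d in /dev/sd[a-i]; do
          echo "== $d =="
          # show only the fields that matter for figuring out membership
          mdadm --examine "$d" 2>/dev/null | grep -E 'Array UUID|Name|Device Role|Array State'
      done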
    • Source Code

      mdadm --assemble --verbose --force /dev/md127 /dev/sd[bdfgi]
      mdadm: looking for devices for /dev/md127
      mdadm: no RAID superblock on /dev/sdg
      mdadm: /dev/sdg has no superblock - assembly aborted


      With the different combinations it is the same for /dev/sde & /dev/sdg:
      - no superblock
      or
      - resource is busy

      My last command is:

      Source Code

      mdadm -v --create /dev/md127 --assume-clean --level=5 --chunk=64 --raid-devices=5 --spare-devices=0 /dev/sdb /dev/sdd /dev/sdf /dev/sdi missing
      mdadm: layout defaults to left-symmetric
      mdadm: layout defaults to left-symmetric
      mdadm: /dev/sdb appears to be part of a raid array:
          level=raid5 devices=5 ctime=Sat Mar 21 19:12:11 2015
      mdadm: /dev/sdd appears to be part of a raid array:
          level=raid5 devices=5 ctime=Sat Mar 21 19:12:11 2015
      mdadm: /dev/sdf appears to be part of a raid array:
          level=raid5 devices=5 ctime=Sat Mar 21 19:12:11 2015
      mdadm: /dev/sdi appears to be part of a raid array:
          level=raid5 devices=5 ctime=Sat Mar 21 19:12:11 2015
      mdadm: size set to 1953383360K
      Continue creating array?


      But I'm too afraid to say YES...
    • It is very dangerous to use --create. I have seen some users have luck, but others end up with a nice clean empty array. I will never tell a user to use it; too risky for me. The --assume-clean helps. The missing drive makes it even more dangerous. I can't recommend anything here.
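
      If you do keep experimenting, at least save what mdadm can still read from each disk first, so you have something to compare against later (a sketch; adjust the device names to yours):

      Source Code

      mkdir -p /root/md-superblocks
      for d in /dev/sd[b-i]; do
          # record the current superblock report for each data disk
          mdadm --examine "$d" > "/root/md-superblocks/$(basename "$d").txt" 2>&1
      done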
    • @ryecoaaron, @omavoss

      I'm sorry, I'm a big newbie...

      At no point did I think of simply using the --assemble option!!! Yet the solution was there!!!
      • I re-ran the --examine option, but this time in verbose mode

      Source Code

      mdadm --examine --scan -v
      ARRAY /dev/md/0 level=raid1 metadata=1.2 num-devices=2 UUID=4e557c08:58909801:fb622a57:d87dc65d name=Hightower:0
         devices=/dev/sdg1,/dev/sde1
      ARRAY /dev/md/1 level=raid1 metadata=1.2 num-devices=2 UUID=bb326408:2c33de89:c6f033c1:a1010c8f name=Hightower:1
         devices=/dev/sdg2,/dev/sde2
      ARRAY /dev/md/2 level=raid1 metadata=1.2 num-devices=2 UUID=09d4b1a6:51ce7a4a:43736b37:acc34724 name=Hightower:2
         devices=/dev/sdg3,/dev/sde3
      ARRAY /dev/md/warehouse level=raid5 metadata=1.2 num-devices=5 UUID=cdf25234:31f75570:5a623191:15939f7a name=the-vault:warehouse
         spares=1 devices=/dev/sdf,/dev/sdi,/dev/sdc,/dev/sdb,/dev/sdd,/dev/sda


      This also lists the disks used by each array.
      • Re-check

      Source Code

      cat /proc/mdstat
      Personalities : [raid1]
      md127 : inactive sdb[1](S) sdi[6](S) sdf[4](S) sdd[3](S)
            7813534048 blocks super 1.2
      […]

      • Inactive, so... let's go

      Source Code

      mdadm -A /dev/md127
      mdadm: /dev/md127 not identified in config file.

      • Indeed, md127 does not exist in mdadm.conf, which makes sense since --examine reports the array as ARRAY /dev/md/warehouse
      • Of course!!! mdadm.conf!!!

      Source Code

      mdadm -A /dev/md/warehouse
      mdadm: /dev/md/warehouse assembled from 2 drives - not enough to start the array.
      • 2 drives ???

      Source Code

      mdadm --stop /dev/md127
      mdadm: stopped /dev/md127
      cat /proc/mdstat
      Personalities : [raid1]
      md126 : inactive sda[0](S) sdc[5](S)
            3906767024 blocks super 1.2
      [...]

      • md126 ???

      Source Code

      mdadm --stop /dev/md126
      mdadm: stopped /dev/md126
      • Is that good? Can we go!!?

      Source Code

      mdadm -A /dev/md/warehouse
      mdadm: /dev/md/warehouse has been started with 5 drives and 1 spare.


      Whhhaaaaaaaaatttttttttt !!! No error messages ???!
      • Let's check

      Source Code

      cat /proc/mdstat
      Personalities : [raid1] [raid6] [raid5] [raid4]
      md127 : active (auto-read-only) raid5 sda[0] sdi[6](S) sdf[4] sdd[3] sdc[5] sdb[1]
            7813531648 blocks super 1.2 level 5, 512k chunk, algorithm 2 [5/5] [UUUUU]
      [...]
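
      To recap for anyone landing here with the same symptoms, the whole fix boils down to this (device and array names come from my outputs above; yours will differ):

      Source Code

      mdadm --stop /dev/md127      # stop the inactive, half-assembled fragment
      mdadm --stop /dev/md126      # stop any other stale fragment listed in /proc/mdstat
      mdadm -A /dev/md/warehouse   # assemble by the array name reported by mdadm --examine --scan
      cat /proc/mdstat             # check the array is active before mounting or using it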


      Moral of the story: the simplest solution is often the right one!!

      Hope this can help other people in the same situation.

      Again, many thanks for your help and your time!!


      I'm going to do my backups :)