Posts by Naabster

    @ryecoaaron, @omavoss


    I'm sorry, I'm a big newbie...


    At no point did I think of using the --assemble option !!! Yet the solution was right there !!!

    • I even re-ran the --examine option once more, but in verbose mode
    Code
    mdadm --examine --scan -v
    ARRAY /dev/md/0 level=raid1 metadata=1.2 num-devices=2 UUID=4e557c08:58909801:fb622a57:d87dc65d name=Hightower:0
       devices=/dev/sdg1,/dev/sde1
    ARRAY /dev/md/1 level=raid1 metadata=1.2 num-devices=2 UUID=bb326408:2c33de89:c6f033c1:a1010c8f name=Hightower:1
       devices=/dev/sdg2,/dev/sde2
    ARRAY /dev/md/2 level=raid1 metadata=1.2 num-devices=2 UUID=09d4b1a6:51ce7a4a:43736b37:acc34724 name=Hightower:2
       devices=/dev/sdg3,/dev/sde3
    ARRAY /dev/md/warehouse level=raid5 metadata=1.2 num-devices=5 UUID=cdf25234:31f75570:5a623191:15939f7a name=the-vault:warehouse
       spares=1   devices=/dev/sdf,/dev/sdi,/dev/sdc,/dev/sdb,/dev/sdd,/dev/sda


    This also shows the disks used by the array.
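
    Side note for anyone reading this later: the same metadata can also be read from a single member disk (taking /dev/sdf, one of the warehouse devices listed above, just as an example):
    Code
    mdadm --examine /dev/sdf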

    • Re-check


    Code
    cat /proc/mdstat
    Personalities : [raid1]
    md127 : inactive sdb[1](S) sdi[6](S) sdf[4](S) sdd[3](S)
          7813534048 blocks super 1.2
    […]


    • Inactive, so... let's go
    Code
    mdadm -A /dev/md127
    mdadm: /dev/md127 not identified in config file.


    • Indeed, md127 does not exist in mdadm.conf, since --examine talks about ARRAY /dev/md/warehouse
    • Of course !!! mdadm.conf !!!
    Code
    mdadm -A /dev/md/warehouse
    mdadm: /dev/md/warehouse assembled from 2 drives - not enough to start the array.
    • 2 drives ???
    Code
    mdadm --stop /dev/md127
    mdadm: stopped /dev/md127
    
    
    cat /proc/mdstat
    Personalities : [raid1]
    md126 : inactive sda[0](S) sdc[5](S)
          3906767024 blocks super 1.2
    [...]


    • md126 ???
    Code
    mdadm --stop /dev/md126
    mdadm: stopped /dev/md126
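
    In hindsight, a safer move before stopping these stray arrays would have been to confirm that they really hold warehouse members, for example:
    Code
    mdadm --examine /dev/sda | grep -E 'Array UUID|Name'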
    • All good ? Can we go !!?
    Code
    mdadm -A /dev/md/warehouse
    mdadm: /dev/md/warehouse has been started with 5 drives and 1 spare.


    Whhhaaaaaaaaatttttttttt !!! No error messages ???!

    • Let's check
    Code
    cat /proc/mdstat
    Personalities : [raid1] [raid6] [raid5] [raid4]
    md127 : active (auto-read-only) raid5 sda[0] sdi[6](S) sdf[4] sdd[3] sdc[5] sdb[1]
          7813531648 blocks super 1.2 level 5, 512k chunk, algorithm 2 [5/5] [UUUUU]
    [...]
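
    One note: the array came up as active (auto-read-only). From what I read, it switches to read-write by itself on the first write, or it can be done by hand with something like:
    Code
    mdadm --readwrite /dev/md127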


    Moral of the story: the simplest solution is often the right one !!


    Hope this can help other people in the same situation.
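
    For completeness, and assuming a standard Debian/OMV layout (so the paths may differ): to have the array assemble by itself at boot, its definition probably needs to be added to mdadm.conf, roughly like this:
    Code
    mdadm --detail --scan | grep warehouse >> /etc/mdadm/mdadm.conf
    update-initramfs -u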


    Again many thanks for your help and your time !!



    I'm going to do my backups :)

    Code
    mdadm --assemble --verbose --force /dev/md127 /dev/sd[bdfgi]
    mdadm: looking for devices for /dev/md127
    mdadm: no RAID superblock on /dev/sdg
    mdadm: /dev/sdg has no superblock - assembly aborted


    Different combinations give the same result for /dev/sde & /dev/sdg :
    - no superblock
    or
    - resource is busy


    My last command is :



    But I'm too afraid to say YES ....

    Code
    mdadm --assemble --verbose --force /dev/md127 /dev/sd[bdfgi]
    mdadm: looking for devices for /dev/md127
    mdadm: /dev/sdb is busy - skipping
    mdadm: /dev/sdd is busy - skipping
    mdadm: /dev/sdf is busy - skipping
    mdadm: Cannot assemble mbr metadata on /dev/sdg
    mdadm: /dev/sdg has no superblock - assembly aborted


    But it gets better :


    Code
    cat /proc/mdstat 
    Personalities : [raid1] 
    md127 : inactive sdb[1](S) sdi[6](S) sdf[4](S) sdd[3](S)
          7813534048 blocks super 1.2
    
    md2 : active raid1 sde3[0] sdg3[1]
          195181440 blocks super 1.2 [2/2] [UU]
    
    md1 : active (auto-read-only) raid1 sde2[0] sdg2[1]
          46841728 blocks super 1.2 [2/2] [UU]
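
    In hindsight (see the first post above): /dev/sde and /dev/sdg are the two system disks (their partitions make up md0/md1/md2), so the whole devices have no RAID superblock and should not be in the list at all, and the real members show up as busy because the inactive md127 is still holding them. Stopping it first is what unblocks the assembly:
    Code
    mdadm --stop /dev/md127
    mdadm --assemble /dev/md/warehouse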

    @ryecoaaron


    The output, with more tests :


    @omavoss, @ryecoaaron, a big thank you for tracking down my problem !

    • The output of 'cat /proc/mdstat' :


    • The output of 'blkid' :


    • The output of 'fdisk -l' :


    In case this can be useful, here is the output :

    • mdadm --detail --scan
    Code
    ARRAY /dev/md/0 metadata=1.2 name=Hightower:0 UUID=4e557c08:58909801:fb622a57:d87dc65d
    ARRAY /dev/md/1 metadata=1.2 name=Hightower:1 UUID=bb326408:2c33de89:c6f033c1:a1010c8f
    ARRAY /dev/md/2 metadata=1.2 name=Hightower:2 UUID=09d4b1a6:51ce7a4a:43736b37:acc34724


    • mdadm --examine --scan :
    Code
    ARRAY /dev/md/0 metadata=1.2 UUID=4e557c08:58909801:fb622a57:d87dc65d name=Hightower:0
    ARRAY /dev/md/1 metadata=1.2 UUID=bb326408:2c33de89:c6f033c1:a1010c8f name=Hightower:1
    ARRAY /dev/md/2 metadata=1.2 UUID=09d4b1a6:51ce7a4a:43736b37:acc34724 name=Hightower:2
    ARRAY /dev/md/warehouse metadata=1.2 UUID=cdf25234:31f75570:5a623191:15939f7a name=the-vault:warehouse
       spares=1


    My array is there, but I cannot mount it.
    Especially since the device "/dev/md/warehouse" does not exist, I do not understand where the command "mdadm --examine --scan" gets its result from ...


    Code
    ls -l /dev/md/
    total 0
    lrwxrwxrwx 1 root root 6 mars  16 16:43 0 -> ../md0
    lrwxrwxrwx 1 root root 6 mars  16 16:43 1 -> ../md1
    lrwxrwxrwx 1 root root 6 mars  16 16:43 2 -> ../md2
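
    Guessing here: --examine seems to read the metadata straight from the superblocks on the member disks themselves, which would explain why the warehouse array is reported even though no /dev/md/warehouse node has been created yet. If that is right, running it against a single member should show the same information, e.g.:
    Code
    mdadm --examine /dev/sda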

    I may (hope) have said something stupid ...


    In fact, all my disks are listed under Physical Disks:


    7 x 2 TB + 2 x 250 GB, as in the picture :


    When I go into RAID Management, I only see the volumes of the Debian installation (2 x 250 GB in RAID 1 --> 2 GB /boot + 45 GB swap + 200 GB for / ).



    But when I want to create a new volume, it only sees one 2 TB disk.



    That makes sense, actually: my array comprised 5 disks in RAID 5 + 1 spare, and the last disk was kept aside just in case ... (not in the array)


    I conclude that my array is seen by the system but not by the web interface ....


    Any suggestions for trying to mount it by hand ?
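
    What I have in mind is roughly this (the mount point is just a guess on my part, and mount should detect the filesystem on the array by itself):
    Code
    mdadm --assemble --scan
    mkdir -p /mnt/warehouse
    mount /dev/md/warehouse /mnt/warehouse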

    Hello,


    I'm opening this topic because I have not found the information by searching.
    My apologies in advance if the answer is already in another thread.


    Here is my problem:


    My system disk, which contains openmediavault, is a Compact Flash card mounted on an IDE adapter. The rest of my disks form a 7 x 2 TB RAID 5 array.


    Now my system drive has given up on me. No more boot, no detection via live CD (Clonezilla, DEFT, etc ...). No detection either when plugging the card into a Compact Flash reader. (Yes, I think it's dead ...)


    Of course, I have no backup of the system disk ...


    My question is this:


    Is it possible to recover my 7 x 2 TB array with a new installation on a new IDE drive ?
    Or, if a reinstallation does not allow it, is it possible to recover the data some other way ? (given that the data are spread across the disks)
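
    (From what I have read, the RAID metadata lives on the data disks themselves, so after a fresh installation something like the command below should at least show whether the array is still detectable, but I would like confirmation before touching anything.)
    Code
    mdadm --examine --scan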


    Thank you in advance for your answers.


    Version : 2.2.1 (Stone burner)



    PS: Please excuse my Google-translated English.