Not sure where to post this... SnapRAID mystery

  • So I tried to simulate a dual-drive failure.

    I simply pulled 2 drives, put in 2 new ones, and ran a fix.


    It said the rebuild was done, so I did a diff between the old and new drives. I would expect SnapRAID to make the content of each of the 2 drives the same as before.


    They are not identical:


    root@bo-omv:~# ls /mnt/disk1

    aquota.group aquota.user Backup Emoncmsbackup homedir lost+found Pictures snapraid.content snapraid.content.lock Video

    root@bo-omv:~# ls /mnt/testdisk1

    aquota.group aquota.user homedir lost+found Pictures snapraid.content.lock Video

    root@bo-omv:~#

    Also, the HDD LED is still going like mad.

    If I try to do another fix, I would expect it to say there is nothing to fix, but it says SnapRAID is in use.


    That suggests it's still doing something in the background, even after it says the rebuild is done.

    Can someone explain what is going on?
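    A quick way to see whether a fix really is still running is to check for the process and for whoever holds the lock file; a minimal sketch, assuming the lock file sits next to the content file as in the ls output above:

    pgrep -a snapraid                        # list any running snapraid processes
    lsof /mnt/disk1/snapraid.content.lock    # show which process holds the lock, if any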

  • In your snapraid.conf file, what excludes are listed?


    When you ran the fix, did you log it?

    --
    Google is your friend and Bob's your uncle!


    OMV AMD64 7.x on headless Chenbro NR12000 1U 1x 8m Quad Core E3-1220 3.1GHz 32GB ECC RAM.
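    For reference, a fix can be logged with the -l option; an illustrative invocation (the log path is an example, not from this thread):

    snapraid -l /var/log/snapraid-fix.log fix    # run a full fix, writing details to the log file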

  • > In your snapraid.conf file, what excludes are listed?

    > When you ran the fix, did you log it?

    I use the web interface exclusively, so I don't know what the contents of the conf file are.

    And the fix ended with "END OF LINE", so I would expect that means it's done.



    But if you can tell me where the conf file is, I can SSH in and get the info from it.
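    On most installs the config lives at /etc/snapraid.conf, though the OMV plugin may generate it elsewhere; an illustrative check:

    cat /etc/snapraid.conf                   # view the config in its default location
    snapraid -c /etc/snapraid.conf status    # or point snapraid at a specific config file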

  • I suggest not using the web interface tools for things like this; use the CLI instead.


    You can view the config file from the web interface.


    Also, was the array fully sync'd before you tried this?

    --
    Google is your friend and Bob's your uncle!


    OMV AMD64 7.x on headless Chenbro NR12000 1U 1x 8m Quad Core E3-1220 3.1GHz 32GB ECC RAM.
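    A minimal way to verify the sync state before a test like this, assuming the default config location:

    snapraid status    # summary of the array, including whether a sync is pending
    snapraid diff      # lists changes since the last sync; no changes means fully synced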

  • From your initial post it looks like these two directories were not restored; is that correct?


    Backup

    Emoncmsbackup


    Since you have no log of the fix operation, and those directories were not excluded, I have no answer.

    --
    Google is your friend and Bob's your uncle!


    OMV AMD64 7.x on headless Chenbro NR12000 1U 1x 8m Quad Core E3-1220 3.1GHz 32GB ECC RAM.

  • Just to check that I understand this correctly.



    From the CLI I then run:

    snapraid -d sdb1_disk1 -l fix.log fix     # fix only the disk named sdb1_disk1 in the config, logging to fix.log
    snapraid -d sdc1_disk2 -l fix2.log fix    # same for the second disk, logging to fix2.log

    ?

    I have prepared 2 disks that have the same UUID, GUID and PARTUUID as the "live" ones, i.e. they should be 1:1 but clear of data.

    With that, I assume I can just pull the live drives and put the test drives in.
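    For anyone reproducing this, such identifier-identical blanks could be prepared roughly as follows (device names and IDs are placeholders, and these commands are destructive, so triple-check the target):

    sgdisk -U <disk-guid> /dev/sdX        # set the GPT disk GUID to match the original
    sgdisk -u 1:<part-uuid> /dev/sdX      # set the PARTUUID of partition 1 to match
    mkfs.ext4 -U <fs-uuid> /dev/sdX1      # create an empty filesystem with the original ext4 UUID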

  • So I did a sync and scrub again, then ran blkid:


    root@bo-omv:~# blkid

    /dev/sda1: UUID="95c89948-db55-4f6e-817b-6fb065d1946b" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="65676062-ad00-425d-860d-831d7c526ea8"

    /dev/sdc1: UUID="7d46260d-a71f-4138-8ab1-8ae5bac8e8d6" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="2da483bf-2476-7645-ada9-6c0761a12304"

    /dev/sde1: UUID="cdac1b82-2335-45cd-b7f6-7b6852429514" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="9a3ac9ad-ba0b-4713-a993-e249b8ed6f69"

    /dev/sdd1: UUID="117385df-2092-4777-9a97-3ebd052d7581" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="3360cbb3-d0dc-4302-a504-fe6c06e2aa7b"

    /dev/sdb1: UUID="2e4e0ef5-a94a-4973-9ca2-acf2c23f1d24" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="6ae70f91-5cc1-4697-8633-cfb97a700fcf"

    /dev/sdf1: UUID="a42405ab-ec91-4140-b465-ed8e5a7ab7c6" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="c7c9adb0-01"

    /dev/sdf5: UUID="e245982a-4178-4d31-9e41-19fc4b4900cc" TYPE="swap" PARTUUID="c7c9adb0-05"



    Removed the 2 drives and ran blkid again:

    root@bo-omv:~# blkid

    /dev/sda1: UUID="95c89948-db55-4f6e-817b-6fb065d1946b" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="65676062-ad00-425d-860d-831d7c526ea8"

    /dev/sde1: UUID="cdac1b82-2335-45cd-b7f6-7b6852429514" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="9a3ac9ad-ba0b-4713-a993-e249b8ed6f69"

    /dev/sdd1: UUID="117385df-2092-4777-9a97-3ebd052d7581" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="3360cbb3-d0dc-4302-a504-fe6c06e2aa7b"

    /dev/sdf1: UUID="a42405ab-ec91-4140-b465-ed8e5a7ab7c6" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="c7c9adb0-01"

    /dev/sdf5: UUID="e245982a-4178-4d31-9e41-19fc4b4900cc" TYPE="swap" PARTUUID="c7c9adb0-05"



    Then I inserted the 2 blank drives; they have the same UUID, GUID and PARTUUID, so they should in effect be the same disks, just blank.

    Ran blkid again and they show up as before, just at the end of the list:

    root@bo-omv:~# blkid

    /dev/sda1: UUID="95c89948-db55-4f6e-817b-6fb065d1946b" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="65676062-ad00-425d-860d-831d7c526ea8"

    /dev/sde1: UUID="cdac1b82-2335-45cd-b7f6-7b6852429514" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="9a3ac9ad-ba0b-4713-a993-e249b8ed6f69"

    /dev/sdd1: UUID="117385df-2092-4777-9a97-3ebd052d7581" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="3360cbb3-d0dc-4302-a504-fe6c06e2aa7b"

    /dev/sdf1: UUID="a42405ab-ec91-4140-b465-ed8e5a7ab7c6" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="c7c9adb0-01"

    /dev/sdf5: UUID="e245982a-4178-4d31-9e41-19fc4b4900cc" TYPE="swap" PARTUUID="c7c9adb0-05"

    /dev/sdb1: UUID="2e4e0ef5-a94a-4973-9ca2-acf2c23f1d24" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="6ae70f91-5cc1-4697-8633-cfb97a700fcf"

    /dev/sdc1: UUID="7d46260d-a71f-4138-8ab1-8ae5bac8e8d6" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="2da483bf-2476-7645-ada9-6c0761a12304"



    Running a fix for the first disk says they are on the same device??

    root@bo-omv:~# snapraid -d sdb1_disk1 -l fix.log fix

    Self test...

    Disks '/srv/dev-disk-by-uuid-2e4e0ef5-a94a-4973-9ca2-acf2c23f1d24/' and '/srv/dev-disk-by-uuid-7d46260d-a71f-4138-8ab1-8ae5bac8e8d6/' are on the same device.

    You can 'fix' anyway, using 'snapraid --force-device fix'.



    The blkid output above shows the 2 UUIDs on separate disks, so what did I do wrong here?
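    The most likely explanation, which the next post confirms when mount -a resolves it: the new disks were detected but never mounted, so both /srv/dev-disk-by-uuid-* data paths fell through to the root filesystem, and SnapRAID's safety check correctly saw them as the same device. A quick way to check where each data path actually lives:

    findmnt -T /srv/dev-disk-by-uuid-2e4e0ef5-a94a-4973-9ca2-acf2c23f1d24
    findmnt -T /srv/dev-disk-by-uuid-7d46260d-a71f-4138-8ab1-8ae5bac8e8d6
    # if both report the root filesystem as SOURCE, the disks are not mounted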

  • Inside OMV it seems the 2 disks are gone...


    I tried to edit the drives in the plugin and they were not listed.


    mount -a did the trick this time, but there are a lot of unrecoverables.

    I will let it run and post the logs.

    Seems I have some changes to make; I do not like unrecoverables.
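    Once the bad blocks are recorded in the content file, a targeted retry can be run instead of a full fix; the -e (--filter-error) option restricts the fix to blocks already marked as bad:

    snapraid -e -l fix-errors.log fix    # retry only the blocks flagged as bad, logging the result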

  • Just for completeness' sake, my config: I commented out the excludes before I did the sync and scrub. These are the commands I run:

    snapraid touch                  # give files with a zero sub-second timestamp a unique one
    snapraid sync -h                # sync with pre-hashing (-h) to verify new data before parity is written
    snapraid -p 100 -o 1 scrub      # scrub 100% of the blocks older than 1 day
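    For context, the excludes section of a snapraid.conf usually looks something like this (illustrative patterns, not the actual file from this setup):

    exclude *.unrecoverable
    exclude /tmp/
    exclude lost+found/
    exclude aquota.group
    exclude aquota.user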


  • Holy... the log is 500 MB.



    Here is the first part of it:
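    With a log that size it is easier to grep for the interesting lines than to open the whole file; an illustrative filter:

    grep -E 'error|unrecoverable' fix.log | head -n 50    # show the first 50 matching lines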


    • Official post

    > strategy_error:0: No strategy to recover from 2 failures with 1 parity with hash

    > but https://www.snapraid.it/faq#howmanypar says 1 parity drive covers 2-4 data disks

    > so with 2 down, should I not be able to recover those?
    I think you have misunderstood.


    1 parity disk can recover from 1 disk failure.

    2 parity disks can recover from the failure of 2 disks.

    ...


    For data sets of 2 to 4 disks -> 1 parity disk may be enough.

    For data sets of more than 4 disks you should have at least two parity disks.


    According to the image that you published in post 12, your array has 4 data disks and 1 parity disk. Therefore you can only recover from a single disk failure. If you want recovery from a two-drive failure, you need to add another parity drive.
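    Adding a second parity level is a config change plus a full sync; a minimal sketch, with illustrative paths:

    parity /srv/dev-disk-by-uuid-<parity1>/snapraid.parity
    2-parity /srv/dev-disk-by-uuid-<parity2>/snapraid.2-parity

    After adding the 2-parity line, run snapraid sync to build the new parity before relying on it.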
