Hard Drive Failure and Data Recovery

  • btrfs check /dev/sda


    Checking filesystem on /dev/sda
    UUID: fdce5ae5-fd6d-46b9-8056-3ff15ce9fa16
    checking extents
    checking free space cache
    cache and super generation don't match, space cache will be invalidated
    checking fs roots
    checking csums
    checking root refs
    found 131072 bytes used err is 0
    total csum bytes: 0
    total tree bytes: 131072
    total fs tree bytes: 32768
    total extent tree bytes: 16384
    btree space waste bytes: 123735
    file data blocks allocated: 0
    referenced 0

  • you can try the same again on the original:
    mount with the two options
    btrfs check
    btrfs info
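
    (Spelled out, and assuming "the two options" means the recovery-related mount options used later in this thread, that sequence would look something like the following; note that there is no "btrfs info" subcommand, so "btrfs filesystem show" is presumably what is meant:)

    Code
    mount -t btrfs -o recovery,nospace_cache,clear_cache /dev/sda /srv/test
    btrfs filesystem show /dev/sda
    umount /dev/sda        # btrfs check refuses to run on a mounted filesystem
    btrfs check /dev/sda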

    Original drive:


    btrfs check /dev/sda


    Checking filesystem on /dev/sda
    UUID: fdce5ae5-fd6d-46b9-8056-3ff15ce9fa16
    checking extents
    checking free space cache
    checking fs roots
    checking csums
    checking root refs
    found 393216 bytes used err is 0
    total csum bytes: 0
    total tree bytes: 131072
    total fs tree bytes: 32768
    total extent tree bytes: 16384
    btree space waste bytes: 124162
    file data blocks allocated: 262144
    referenced 262144


    I'm hoping you meant "btrfs filesystem show /dev/sda" as there is no "info" token.


    btrfs filesystem show /dev/sda


    Label: 'sdadisk1' uuid: fdce5ae5-fd6d-46b9-8056-3ff15ce9fa16
    Total devices 1 FS bytes used 384.00KiB
    devid 1 size 931.51GiB used 2.04GiB path /dev/sda



    As this was a SnapRaid drive, I'm dismayed that it is only showing 2.04GiB used. There should be at a minimum 300-400 GiB used. But even if we could get to that 2.04GiB, it would be better than nothing.


    Sigh ...

  • You did not try to mount (which I asked you to do), and you did not scrub (which, admittedly, I did not ask for).


    Can you try that please?


    What do you mean by SnapRaid drive? There are two types: data and parity.
    Are we working on the parity drive or a data drive?
    In any case, we can also try the other drive (if we are now on the parity drive, then we would do a data drive next).
    You can try to mount it with the recovery option(s) and see with ls whether there is data on it.
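
    (Concretely, with the mount point used earlier in this thread, that would look something like:)

    Code
    mount -t btrfs -o recovery,nospace_cache,clear_cache /dev/sda /srv/test
    ls /srv/test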


    If you do, please show me the output of dmesg -T | tail after mounting.


    Greetings,
    Hendrik

  • Hi Hendrik,
    This is where I'm "dumb" with Linux things. When I put the original drive back in and took a look at it with btrfs, I thought it was just automatically mounted, like it was before. I'm so stupid.


    Anyway, I'll mount it and do it again.


    I have just always assumed we were working with the data drive, not the parity drive. Is there a way to tell if a drive is one or the other?


    I'll get going on the instructions ...


    So I ran: "mount -t btrfs -o recovery,nospace_cache,clear_cache /dev/sda /srv/test" and it returned to the prompt, which I would expect. I verified by running the command again, and it said "already mounted".
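
    (If you want to double-check that those options actually took effect, the same check Hendrik asks for later in this thread works here too; the line for /dev/sda should show the filesystem mounted at /srv/test with the options you passed, with "recovery" reported as "usebackuproot":)

    Code
    mount | grep /dev/sda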



    you can try the same again on the original:
    mount with the two options
    btrfs check
    btrfs info

    After re-reading this, I'm not sure I understand. Do I just run "btrfs check /dev/sda" and "btrfs info /dev/sda"? That doesn't seem right, as you indicate they are options. (Told you, I'm Linux stupid. I have to keep going back to all of the posts in our thread to figure out what to type into the commands. Yeesh. Wish I was better at this.)
    I ran "btrfs check /dev/sda" and it said "/dev/sda is currently mounted. Aborting." So I must not be looking at the right instructions.

  • Hello,


    openmediavault may automatically mount the btrfs drives, but we want to mount with the options recovery,nospace_cache,clear_cache.
    So, if OMV mounts a drive automatically, you need to unmount it and mount it manually with those options.
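
    A sketch of that unmount/remount, reusing the mount point from earlier in the thread:

    Code
    umount /dev/sda
    mount -t btrfs -o recovery,nospace_cache,clear_cache /dev/sda /srv/test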
    Then you need to do an

    Code
    ls /srv/test





    to see if the data is there. Also you need to run the dmesg command



    Code
    dmesg -T | tail

    If the data is not there, we need to see what is wrong:



    Code
    btrfs scrub start /dev/sda
    
    
    btrfs scrub status /dev/sda
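
    (Note that the scrub runs in the background, so the status command may need to be re-run until it reports that it finished; something like the following polls it automatically, with the 30-second interval being an arbitrary choice:)

    Code
    watch -n 30 btrfs scrub status /dev/sda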

    When completed


    Code
    umount /dev/sda
    
    
    btrfs check /dev/sda


    All this you can repeat on each of the broken drives (disk1, disk2, ...).
    Based on the output of ls we can determine what drive is what. To be able to associate what happened on which drive, we need


    Code
    btrfs filesystem show /dev/sda

    Because that gives us the unique identifier of the drive, whereas /dev/sda can change to /dev/sdb on the next boot. So you always need to check which drive letter is currently associated with the drive you want to check. You can see the label of the drive (disk1, disk2, ...) in the OMV web interface.
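
    (An additional way to map the UUIDs to the current /dev/sdX letters, beyond what is described above; both commands are standard and only read information:)

    Code
    ls -l /dev/disk/by-uuid/
    blkid /dev/sd?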



    So,
    disk1 may be UUID: fdce5ae5-fd6d-46b9-8056-3ff15ce9fa16
    disk2 may be UUID: xxxxxx


    and so on.


    In the SnapRaid section of the OMV web interface you should also be able to see which of disk1, disk2, disk3 is the parity disk and which are data disks.
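
    (The contents usually give it away as well; this reflects general SnapRAID conventions rather than anything specific to this setup:)

    Code
    ls -la /srv/test
    # a parity disk typically holds one large file such as snapraid.parity
    # (the exact name comes from snapraid.conf), while a data disk holds the
    # normal directory tree, possibly alongside a snapraid.content file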


    Greetings,
    Hendrik

  • Hi Hendrik,
    Here is the original disk (wonder if, in fact, this is the parity disk - I'm looking into that)


    unmounted /dev/sda


    mount -t btrfs -o recovery,nospace_cache,clear_cache /dev/sda /srv/test


    ls /srv/test (returned nothing)


    dmesg -T | tail
    [Fri Jan 17 17:17:41 2020] ata1: EH complete
    [Fri Jan 17 17:18:31 2020] logitech-hidpp-device 0003:046D:400A.0004: HID++ 2.0 device connected.
    [Sat Jan 18 11:40:29 2020] BTRFS warning (device sda): 'recovery' is deprecated, use 'usebackuproot' instead
    [Sat Jan 18 11:40:29 2020] BTRFS info (device sda): trying to use backup root at mount time
    [Sat Jan 18 11:40:29 2020] BTRFS info (device sda): disabling disk space caching
    [Sat Jan 18 11:40:29 2020] BTRFS info (device sda): force clearing of disk cache
    [Sun Jan 19 11:58:24 2020] BTRFS warning (device sda): 'recovery' is deprecated, use 'usebackuproot' instead
    [Sun Jan 19 11:58:24 2020] BTRFS info (device sda): trying to use backup root at mount time
    [Sun Jan 19 11:58:24 2020] BTRFS info (device sda): disabling disk space caching
    [Sun Jan 19 11:58:24 2020] BTRFS info (device sda): force clearing of disk cache


    btrfs scrub start /dev/sda
    scrub started on /dev/sda, fsid fdce5ae5-fd6d-46b9-8056-3ff15ce9fa16 (pid=19881)


    btrfs scrub status /dev/sda
    scrub status for fdce5ae5-fd6d-46b9-8056-3ff15ce9fa16
    scrub started at Sun Jan 19 12:03:35 2020 and finished after 00:00:00
    total bytes scrubbed: 256.00KiB with 0 errors


    umount /dev/sda


    btrfs check /dev/sda
    Checking filesystem on /dev/sda
    UUID: fdce5ae5-fd6d-46b9-8056-3ff15ce9fa16
    checking extents
    checking free space cache
    cache and super generation don't match, space cache will be invalidated
    checking fs roots
    checking csums
    checking root refs
    found 131072 bytes used err is 0
    total csum bytes: 0
    total tree bytes: 131072
    total fs tree bytes: 32768
    total extent tree bytes: 16384
    btree space waste bytes: 123986
    file data blocks allocated: 0
    referenced 0


    btrfs filesystem show /dev/sda
    Label: 'sdadisk1' uuid: fdce5ae5-fd6d-46b9-8056-3ff15ce9fa16
    Total devices 1 FS bytes used 128.00KiB
    devid 1 size 931.51GiB used 4.10GiB path /dev/sda



    Way back when we started this, I was directed to create a new boot, which I did. So SnapRaid is no longer part of this installation.

  • You had two broken drives.
    We have now worked on the original and the copy of one.


    Please connect all the drives and do a

    Code
    btrfs fi show


    You will get something like


    Code
    Label: 'sdadisk'  uuid: 303256f2-6901-4564-892e-cdca9dda50e3
            Total devices 1 FS bytes used 17.88GiB
            devid    1 size 52.16GiB used 29.05GiB path /dev/sda
    Label: 'sdbdisk'  uuid: 1212312313-6901-4564-892e-cdca9dda50e3
            Total devices 1 FS bytes used 17.88GiB
            devid    1 size 52.16GiB used 29.05GiB path /dev/sdb
    Label: 'sdcdisk'  uuid: 34ff34234-6901-4564-892e-cdca9dda50e3
            Total devices 1 FS bytes used 17.88GiB
            devid    1 size 52.16GiB used 29.05GiB path /dev/sdc


    That should help you identify the drives. But also please post the output here.
    Can you please repeat (all) the steps that we last did on the other broken drive(s)?
    When mounting, please use these options:


    Code
    mount -t btrfs -o recovery,nospace_cache,clear_cache,subvolid=0 /dev/sda /srv/test

    After mounting, in addition to what you did before please also show me the output of



    Code
    btrfs subvolume list /srv/test

    and of


    Code
    mount | grep /dev/sd


    Are you sure that you did not format the drive(s)?



    Regards,
    Hendrik

  • Hi Hendrik,


    I'll endeavor to work on all the drives. (I was away for a few days and just got back).


    Absolutely certain I did not format the drives.


    Can you please repeat (all) the steps that we last did on the other broken drive(s)?

    I'm going to need to put together a list of all the steps ... to make sure that I have it all in proper sequence (also, because of the nature of the forum postings, I have to jump back and forth to know which ones to use, then I get lost a little ... sorry). For instance, I think I understand that I'm not doing a ddrescue on any other drives at this point (??). Anyway, I'll take a shot at the proper sequence, and then list it here to get your 'thumbs up' before I proceed.


    Of course, unless you want to put that list together for me ... LOL. :D But I'll work on it.


    Steve

  • Hi Hendrik,


    I had a couple of things that took some intense focus from me, but am now looking to get back on this.


    Just a recap: I had 4 hard drives (one parity and three data) in my SnapRaid array. Two of those drives won't even spin up (those I may have to send to a data-recovery service and take my chances; I certainly can't do anything with them otherwise). Of the two that will spin, you and I have been working with only one drive (a Seagate), which so far hasn't given us much, and certainly no data. So it would seem that maybe we can get the second drive (a Toshiba) looked at.


    So, I'll put both drives back into the box and do the following on each (xxx will be sda and sdb):


    • umount /dev/xxx
    • mount -t btrfs -o recovery,nospace_cache,clear_cache,subvolid=0 /dev/xxx /srv/test
    • btrfs subvolume list /srv/test
    • mount | grep /dev/xxx
    • ls /srv/test
    • dmesg -T | tail
    • btrfs scrub start /dev/xxx
    • btrfs scrub status /dev/xxx
    • umount /dev/xxx
    • btrfs check /dev/xxx
    • btrfs filesystem show /dev/xxx
    • btrfs fi show


    Does this look right?
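
    For reference, the same sequence written out as a single sketch (assuming /srv/test exists as the mount point; the -B flag that makes the scrub wait for completion is an extra convenience, not one of the steps listed above):

    Code
    #!/bin/sh
    # Illustrative only: run the inspection sequence for one drive.
    DEV=/dev/sda        # change to /dev/sdb for the second drive
    MNT=/srv/test

    umount "$DEV" 2>/dev/null        # ignore the error if it was not mounted
    mount -t btrfs -o recovery,nospace_cache,clear_cache,subvolid=0 "$DEV" "$MNT"
    btrfs subvolume list "$MNT"
    mount | grep /dev/sd
    ls "$MNT"
    dmesg -T | tail
    btrfs scrub start -B "$DEV"      # -B waits until the scrub has finished
    btrfs scrub status "$DEV"
    umount "$DEV"
    btrfs check "$DEV"
    btrfs filesystem show "$DEV"
    btrfs fi show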


    I don't know if I mentioned this before, but maybe we should perform the ddrescue on the second drive (the Toshiba).
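
    (In case it helps, a typical GNU ddrescue invocation for cloning a failing drive to a healthy one looks roughly like this; the device names and map-file name here are placeholders, not the ones used earlier in this thread, so everything would need to be double-checked before running anything:)

    Code
    # first pass: copy the easy areas, skipping the bad ones
    ddrescue -d -f -n /dev/sdb /dev/sdX toshiba.map
    # second pass: go back and retry the bad areas a few times
    ddrescue -d -f -r3 /dev/sdb /dev/sdX toshiba.map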


    Let me know if this list looks right, and I'll proceed.


    Thanks,
    Steve

  • Hello Hendrik,

    Wanted to let you know that I haven't given up on trying to get my data back. Just went through a large wave of distractions, AND an admittedly significant wave of Linux-fatigue. I'll get back to this in the near future - because I need it. But right now am working on getting rid of a small but significant set of lingering priorities. Once I have those settled - hopefully within the next few weeks - I'll be back on this recovery.


    I think the first thing I will do is go back through all of our communications and document all the commands that were tried before, and try to make sense out of all of it and create something of a "flow".


    Hope you're doing well and not impacted by COVID-19.

    Thanks

    Steve
