Beiträge von curious1

    Hello Hendrik,

    Wanted to let you know that I haven't given up on trying to get my data back. I just went through a large wave of distractions, and an admittedly significant wave of Linux fatigue. I'll get back to this in the near future - because I need this data. But right now I'm working on clearing a small but significant set of lingering priorities. Once those are settled - hopefully within the next few weeks - I'll be back on this recovery.


    I think the first thing I will do is go back through all of our communications, document all the commands we've tried, make sense of it all, and create something of a "flow".


    Hope you're doing well and not impacted by COVID-19.

    Thanks

    Steve

    Hi Hendrik.


    I had a couple of things that took some intense focus from me, but am now looking to get back on this.


    Just a recap: I had 4 hard drives (one parity and three data) in my SnapRAID array. Two of those drives won't even spin up (those I may have to send to data recovery and take my chances - I certainly can't do anything with them otherwise). Of the two that will spin, you and I have been working with only one drive (a Seagate) that so far hasn't given us much - certainly no data. So it would seem that maybe we can get the second drive (a Toshiba) looked at.


    So, I'll put both drives back into the box and do the following on each (xxx will be sda and sdb):


    • umount /dev/xxx
    • mount -t btrfs -o recovery,nospace_cache,clear_cache,subvolid=0 /dev/xxx /srv/test
    • btrfs subvolume list /srv/test
    • mount | grep /dev/xxx
    • ls /srv/test
    • dmesg -T | tail
    • btrfs scrub start /dev/xxx
    • btrfs scrub status /dev/xxx
    • umount /dev/xxx
    • btrfs check /dev/xxx
    • btrfs filesystem show /dev/xxx
    • btrfs fi show


    Does this look right?
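    To sanity-check the ordering before touching the drives, here is the same list as a dry run that just prints each command in sequence (xxx stays a placeholder, nothing here touches a disk):

    ```shell
    # Dry run of the checklist above: prints each command in order so the
    # sequence can be eyeballed before running anything for real.
    # DEV and MNT are placeholders for the real device and mount point.
    DEV=/dev/xxx
    MNT=/srv/test
    for cmd in \
      "umount $DEV" \
      "mount -t btrfs -o recovery,nospace_cache,clear_cache,subvolid=0 $DEV $MNT" \
      "btrfs subvolume list $MNT" \
      "mount | grep $DEV" \
      "ls $MNT" \
      "dmesg -T | tail" \
      "btrfs scrub start $DEV" \
      "btrfs scrub status $DEV" \
      "umount $DEV" \
      "btrfs check $DEV" \
      "btrfs filesystem show $DEV"
    do
      echo "$cmd"
    done
    ```

    One ordering note I want to double-check: scrub wants the filesystem mounted, while btrfs check refuses to run on a mounted one, hence the umount between them.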


    I don't know if I mentioned this before, but maybe we should perform the ddrescue on the second drive (the Toshiba).
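    For when we get to the Toshiba, here is my understanding of the usual two-pass ddrescue approach (hedging: the device names and map file name below are placeholders, and I'd want your sign-off on the actual source/destination before running anything):

    ```shell
    # Prints the two-pass ddrescue plan rather than running it.
    # SRC = the failing drive, DST = a healthy same-size-or-bigger drive,
    # MAP = the log file that lets ddrescue resume where it left off.
    SRC=/dev/sdX
    DST=/dev/sdY
    MAP=toshiba.map
    echo "ddrescue -n $SRC $DST $MAP      # pass 1: grab the easy areas, skip bad spots"
    echo "ddrescue -d -r3 $SRC $DST $MAP  # pass 2: direct I/O, retry bad areas 3 times"
    ```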


    Let me know if this list looks right, and I'll proceed.


    Thanks,
    Steve

    Hi Hendrik,


    I'll endeavor to work on all the drives. (I was away for a few days and just got back).


    Absolutely certain I did not format the drives.


    Can you please repeat (all) the steps that we last did on the other broken drive(s)?

    I'm going to need to put together a list of all the steps to make sure I have them in the proper sequence (also, because of the nature of forum postings, I have to jump back and forth to figure out which ones to use, and I get a little lost ... sorry). For instance, I think I understand that I'm not doing a ddrescue on any other drives at this point (??). Anyway, I'll take a shot at the proper sequence and list it here to get your 'thumbs up' before I proceed.


    Of course, unless you want to put that list together for me ... LOL. :D But I'll work on it.


    Steve

    Hi Hendrik,
    Here is the original disk (wonder if, in fact, this is the parity disk - I'm looking into that)


    unmounted /dev/sda


    mount -t btrfs -o recovery,nospace_cache,clear_cache /dev/sda /srv/test


    ls /srv/test (returned nothing)


    dmesg -T | tail
    [Fri Jan 17 17:17:41 2020] ata1: EH complete
    [Fri Jan 17 17:18:31 2020] logitech-hidpp-device 0003:046D:400A.0004: HID++ 2.0 device connected.
    [Sat Jan 18 11:40:29 2020] BTRFS warning (device sda): 'recovery' is deprecated, use 'usebackuproot' instead
    [Sat Jan 18 11:40:29 2020] BTRFS info (device sda): trying to use backup root at mount time
    [Sat Jan 18 11:40:29 2020] BTRFS info (device sda): disabling disk space caching
    [Sat Jan 18 11:40:29 2020] BTRFS info (device sda): force clearing of disk cache
    [Sun Jan 19 11:58:24 2020] BTRFS warning (device sda): 'recovery' is deprecated, use 'usebackuproot' instead
    [Sun Jan 19 11:58:24 2020] BTRFS info (device sda): trying to use backup root at mount time
    [Sun Jan 19 11:58:24 2020] BTRFS info (device sda): disabling disk space caching
    [Sun Jan 19 11:58:24 2020] BTRFS info (device sda): force clearing of disk cache


    btrfs scrub start /dev/sda
    scrub started on /dev/sda, fsid fdce5ae5-fd6d-46b9-8056-3ff15ce9fa16 (pid=19881)


    btrfs scrub status /dev/sda
    scrub status for fdce5ae5-fd6d-46b9-8056-3ff15ce9fa16
    scrub started at Sun Jan 19 12:03:35 2020 and finished after 00:00:00
    total bytes scrubbed: 256.00KiB with 0 errors


    umount /dev/sda


    btrfs check /dev/sda
    Checking filesystem on /dev/sda
    UUID: fdce5ae5-fd6d-46b9-8056-3ff15ce9fa16
    checking extents
    checking free space cache
    cache and super generation don't match, space cache will be invalidated
    checking fs roots
    checking csums
    checking root refs
    found 131072 bytes used err is 0
    total csum bytes: 0
    total tree bytes: 131072
    total fs tree bytes: 32768
    total extent tree bytes: 16384
    btree space waste bytes: 123986
    file data blocks allocated: 0
    referenced 0


    btrfs filesystem show /dev/sda
    Label: 'sdadisk1' uuid: fdce5ae5-fd6d-46b9-8056-3ff15ce9fa16
    Total devices 1 FS bytes used 128.00KiB
    devid 1 size 931.51GiB used 4.10GiB path /dev/sda



    Way back when we started this, I was directed to create a new boot, which I did. So SnapRaid is no longer part of this installation.


    Hi Hendrik,
    This is where I'm "dumb" with Linux things. When I put the original drive back in and looked at btrfs, I thought it was just automatically in "mounted" status, like it was before. I'm so stupid.


    Anyway, I'll mount it and do it again.


    I have just always assumed we were working with the data drive, not the parity drive. Is there a way to tell if a drive is one or the other?
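    One thought on telling them apart (my own guess, pending your confirmation): a SnapRAID parity disk should carry big `snapraid.parity` files (and possibly `snapraid.content`) rather than a normal file tree, so if a member ever mounts, a quick look at its top level would settle it. A sketch:

    ```shell
    # Sketch: classify a mounted SnapRAID member disk by its contents.
    # A parity disk carries snapraid.parity (and often snapraid.content);
    # a data disk just carries the normal file tree. $1 is the mount point.
    classify_snapraid_disk() {
      dir=$1
      if ls "$dir" 2>/dev/null | grep -q '^snapraid\.parity'; then
        echo parity
      else
        echo data-or-empty
      fi
    }
    classify_snapraid_disk /srv/test
    ```

    Of course this only helps once we can actually see files on the thing.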


    I'll get going on the instructions ...


    So I ran: "mount -t btrfs -o recovery,nospace_cache,clear_cache /dev/sda /srv/test" and it returned to the prompt, which I would expect. Verified by running the command again and it said "already mounted".



    you can try the same again on the original:
    mount with the two options
    btrfs check
    btrfs info

    After re-reading this, I'm not sure I understand. Do I just run "btrfs check /dev/sda" and "btrfs info /dev/sda"? That doesn't seem right, as you indicate they are options. (Told you, I'm Linux stupid. I have to keep going back to all of the posts in our thread to figure out what to type into the commands. Yeesh. Wish I was better at this.)
    I ran "btrfs check /dev/sda" and it said "/dev/sda is currently mounted. Aborting." So I must not be looking at the right instructions.

    you can try the same again on the original:
    mount with the two options
    btrfs check
    btrfs info

    Original drive:


    btrfs check /dev/sda


    Checking filesystem on /dev/sda
    UUID: fdce5ae5-fd6d-46b9-8056-3ff15ce9fa16
    checking extents
    checking free space cache
    checking fs roots
    checking csums
    checking root refs
    found 393216 bytes used err is 0
    total csum bytes: 0
    total tree bytes: 131072
    total fs tree bytes: 32768
    total extent tree bytes: 16384
    btree space waste bytes: 124162
    file data blocks allocated: 262144
    referenced 262144


    I'm hoping you meant "btrfs filesystem show /dev/sda" as there is no "info" token.


    btrfs filesystem show /dev/sda


    Label: 'sdadisk1' uuid: fdce5ae5-fd6d-46b9-8056-3ff15ce9fa16
    Total devices 1 FS bytes used 384.00KiB
    devid 1 size 931.51GiB used 2.04GiB path /dev/sda



    As this was a SnapRaid drive, I'm dismayed that it is only showing 2.04GiB used. There should be at a minimum 300-400 GiB used. But even if we could get to that 2.04GiB, it would be better than nothing.


    Sigh ...

    btrfs check /dev/sda


    Checking filesystem on /dev/sda
    UUID: fdce5ae5-fd6d-46b9-8056-3ff15ce9fa16
    checking extents
    checking free space cache
    cache and super generation don't match, space cache will be invalidated
    checking fs roots
    checking csums
    checking root refs
    found 131072 bytes used err is 0
    total csum bytes: 0
    total tree bytes: 131072
    total fs tree bytes: 32768
    total extent tree bytes: 16384
    btree space waste bytes: 123735
    file data blocks allocated: 0
    referenced 0

    Just ran the scrub status. Don't think anything is happening.


    btrfs scrub status /srv/test


    scrub status for fdce5ae5-fd6d-46b9-8056-3ff15ce9fa16
    scrub started at Sun Jan 12 17:09:53 2020 and finished after 00:00:00
    total bytes scrubbed: 256.00KiB with 0 errors


    "ls /srv/test" produced no results

    I find that if I take a long time in writing my reply, the forum will throw such an error.


    I used to copy my post to the clipboard just before submitting it so that I could post it again if something really went wrong, but so far I have never needed to use the clipboard contents. The post always goes through.

    Yeah, that's what I have started doing as well. Hadn't figured it as a timing issue, but that does make sense. If it is a timing issue, then whoever maintains the forum should look into this and provide a different result than just throwing up an error.


    Thanks

    Many times when replying to a post, I get a pop-up titled "Error Encountered" with information that says, "The server encountered an unresolvable problem, please try again later. Exception ID: da2ec1cab3d56473594d2515a4bf43501e5e037e"


    And yet, when I reload the page, the post is usually there. Sometimes it even comes up right in Edit mode. Anybody know what's going on?

    1. umount /dev/sda entered in the commandline
    2. Perform: "mount -t btrfs -o recovery,nospace_cache,clear_cache /dev/sda /srv/test"
    3. Perform: "btrfs scrub start /srv/test"
    4. dmesg as before

    1 through 3 performed successfully.
    Scrub returned: "scrub started on /srv/test, fsid fdce5ae5-fd6d-46b9-8056-3ff15ce9fa16 (pid=11761)"


    dmesg | grep -i btrfs:
    "[ 2.780953] Btrfs loaded, crc32c=crc32c-intel
    [ 2.794045] BTRFS: device label NewDrive2 devid 1 transid 16 /dev/sdb1
    [ 2.794691] BTRFS: device fsid c81bf277-6ded-4e03-8bce-d4b25a690e27 devid 1 transid 9 /dev/sda1
    [ 11.574160] BTRFS: device label sdadisk1 devid 1 transid 11 /dev/sda
    [ 12.646572] BTRFS info (device sdb1): disk space caching is enabled
    [ 12.646573] BTRFS info (device sdb1): has skinny extents
    [ 1340.396263] BTRFS info (device sda1): disk space caching is enabled
    [ 1340.396264] BTRFS info (device sda1): has skinny extents
    [ 1340.402929] BTRFS error (device sda1): bad tree block start, want 20987904 have 0
    [ 1340.406046] BTRFS error (device sda1): bad tree block start, want 20987904 have 0
    [ 1340.406097] BTRFS error (device sda1): failed to read chunk root
    [ 1340.425705] BTRFS error (device sda1): open_ctree failed
    [ 1479.576285] BTRFS info (device sda1): disk space caching is enabled
    [ 1479.576287] BTRFS info (device sda1): has skinny extents
    [ 1479.577153] BTRFS error (device sda1): bad tree block start, want 20987904 have 0
    [ 1479.577350] BTRFS error (device sda1): bad tree block start, want 20987904 have 0
    [ 1479.577395] BTRFS error (device sda1): failed to read chunk root
    [ 1479.593620] BTRFS error (device sda1): open_ctree failed
    [76455.155515] BTRFS info (device sda): unrecognized mount option 'rootflags=recovery'
    [76455.155592] BTRFS error (device sda): open_ctree failed
    [76756.435464] BTRFS info (device sda): unrecognized mount option 'rootflags=recovery'
    [76756.435539] BTRFS error (device sda): open_ctree failed
    [76787.189105] BTRFS info (device sda): unrecognized mount option 'rootflags=recovery'
    [76787.189184] BTRFS error (device sda): open_ctree failed
    [317998.891158] BTRFS warning (device sda): 'recovery' is deprecated, use 'usebackuproot' instead
    [317998.891163] BTRFS info (device sda): trying to use backup root at mount time
    [317998.891170] BTRFS info (device sda): disabling disk space caching
    [423681.254949] BTRFS warning (device sda): 'recovery' is deprecated, use 'usebackuproot' instead
    [423681.254954] BTRFS info (device sda): trying to use backup root at mount time
    [423681.254961] BTRFS info (device sda): disabling disk space caching
    [423681.254964] BTRFS info (device sda): force clearing of disk cache"


    Runs for 24 hours or so?


    Thanks Hendrik,
    Steve


    p.s. I'm curious about that "unrecognized mount option 'rootflags=recovery'". Do we still have a bad syntax?
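    Following up on my own p.s.: from what I can tell (hedging here), `rootflags=` is a kernel boot parameter - it hands options to the root filesystem at boot time - and is not something `mount -o` understands, which would explain the "unrecognized mount option" lines. So those earlier attempts were bad syntax, while the later ones (plain `recovery`, which the kernel merely deprecates in favor of `usebackuproot`) were accepted. In other words:

    ```shell
    # Not valid as a mount option (rootflags= belongs on the kernel command line):
    #   mount -t btrfs -o rootflags=recovery,nospace_cache /dev/sdX /srv/test
    # Accepted but deprecated by current kernels:
    #   mount -t btrfs -o recovery,nospace_cache /dev/sdX /srv/test
    # Current spelling of the same thing:
    OPTS=usebackuproot,nospace_cache
    echo "mount -t btrfs -o $OPTS /dev/sdX /srv/test"
    ```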

    dmesg | grep -i btrfs

    [ 2.780953] Btrfs loaded, crc32c=crc32c-intel
    [ 2.794045] BTRFS: device label NewDrive2 devid 1 transid 16 /dev/sdb1
    [ 2.794691] BTRFS: device fsid c81bf277-6ded-4e03-8bce-d4b25a690e27 devid 1 transid 9 /dev/sda1
    [ 11.574160] BTRFS: device label sdadisk1 devid 1 transid 11 /dev/sda
    [ 12.646572] BTRFS info (device sdb1): disk space caching is enabled
    [ 12.646573] BTRFS info (device sdb1): has skinny extents
    [ 1340.396263] BTRFS info (device sda1): disk space caching is enabled
    [ 1340.396264] BTRFS info (device sda1): has skinny extents
    [ 1340.402929] BTRFS error (device sda1): bad tree block start, want 20987904 have 0
    [ 1340.406046] BTRFS error (device sda1): bad tree block start, want 20987904 have 0
    [ 1340.406097] BTRFS error (device sda1): failed to read chunk root
    [ 1340.425705] BTRFS error (device sda1): open_ctree failed
    [ 1479.576285] BTRFS info (device sda1): disk space caching is enabled
    [ 1479.576287] BTRFS info (device sda1): has skinny extents
    [ 1479.577153] BTRFS error (device sda1): bad tree block start, want 20987904 have 0
    [ 1479.577350] BTRFS error (device sda1): bad tree block start, want 20987904 have 0
    [ 1479.577395] BTRFS error (device sda1): failed to read chunk root
    [ 1479.593620] BTRFS error (device sda1): open_ctree failed
    [76455.155515] BTRFS info (device sda): unrecognized mount option 'rootflags=recovery'
    [76455.155592] BTRFS error (device sda): open_ctree failed
    [76756.435464] BTRFS info (device sda): unrecognized mount option 'rootflags=recovery'
    [76756.435539] BTRFS error (device sda): open_ctree failed
    [76787.189105] BTRFS info (device sda): unrecognized mount option 'rootflags=recovery'
    [76787.189184] BTRFS error (device sda): open_ctree failed
    [317998.891158] BTRFS warning (device sda): 'recovery' is deprecated, use 'usebackuproot' instead
    [317998.891163] BTRFS info (device sda): trying to use backup root at mount time
    [317998.891170] BTRFS info (device sda): disabling disk space caching



    Try unmounting and then try the Second command and a scrub.

    Uhhh ... I'm still not that adept. Do I unmount using the Web client? And which second command are you meaning? Should I assume the following sequence?:


    1. Unmount /dev/sda using the web client
    2. Perform: "mount -t btrfs -o recovery,nospace_cache,clear_cache /dev/sda /srv/test"
    3. Perform: "btrfs scrub start /srv/test"


    Is that what you want me to do?


    Thanks
    Steve

    mount -t btrfs -o recovery,nospace_cache /dev/sda /srv/test


    and if that does not work


    mount -t btrfs -o recovery,nospace_cache,clear_cache /dev/sda /srv/test

    The first mount command just came back to the prompt.


    The second mount command produced the following:
    "mount: /dev/sda is already mounted or /srv/test busy
    /dev/sda is already mounted on /srv/test"


    So then I went back and looked at previous instructions that didn't seem to work. I ran "mount" and got this:
    "sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
    proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
    udev on /dev type devtmpfs (rw,nosuid,relatime,size=10212176k,nr_inodes=2553044,mode=755)
    devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
    tmpfs on /run type tmpfs (rw,nosuid,noexec,relatime,size=2046264k,mode=755)
    /dev/sdc1 on / type ext4 (rw,relatime,errors=remount-ro)
    securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
    tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
    tmpfs on /run/lock type tmpfs (rw,nosuid,nodev,noexec,relatime,size=5120k)
    tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,mode=755)
    cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/lib/systemd/
    systemd-cgroups-agent,name=systemd)
    pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
    cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpu,cpuacct)
    cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
    cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset)
    cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
    cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls,net_prio)
    cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
    cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event)
    cgroup on /sys/fs/cgroup/rdma type cgroup (rw,nosuid,nodev,noexec,relatime,rdma)
    cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
    cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,pids)
    systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=31,pgrp=1,timeout=0,minproto=5,maxproto=5,dir
    ect,pipe_ino=1455)
    mqueue on /dev/mqueue type mqueue (rw,relatime)
    hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,pagesize=2M)
    debugfs on /sys/kernel/debug type debugfs (rw,relatime)
    sunrpc on /run/rpc_pipefs type rpc_pipefs (rw,relatime)
    tmpfs on /tmp type tmpfs (rw,relatime)
    /dev/sdb1 on /srv/dev-disk-by-label-NewDrive2 type btrfs (rw,relatime,space_cache,subvolid=5,subvol=/)
    /dev/sda on /srv/test type btrfs (rw,relatime,nospace_cache,subvolid=5,subvol=/)"


    So, given that the drive is mounted, I went back to what you had me try earlier after mounting the copy drive. I ran "btrfs scrub start /srv/test", which came back with:
    "scrub started on /srv/test, fsid fdce5ae5-fd6d-46b9-8056-3ff15ce9fa16 (pid=11445)"
    So it looks like we have successfully started the scrub you wanted me to do back on December 22nd. You had said I could check the progress with "btrfs scrub status /srv/test", and that it could run for quite a while, maybe 24 hours. Here is the status:


    "scrub status for fdce5ae5-fd6d-46b9-8056-3ff15ce9fa16
    scrub started at Sat Jan 11 11:51:03 2020 and finished after 00:00:00
    total bytes scrubbed: 512.00KiB with 0 errors"


    So after 10 minutes, this is the btrfs scrub status /srv/test
    "scrub status for fdce5ae5-fd6d-46b9-8056-3ff15ce9fa16
    scrub started at Sat Jan 11 11:51:03 2020 and finished after 00:00:00
    total bytes scrubbed: 512.00KiB with 0 errors"


    It would appear to me that it's finished. Or does the "finished after 00:00:00" indicate it's still working?
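    From what I've read (hedging), scrub only reads extents the filesystem believes are allocated, so on a volume that thinks it holds only a few hundred KiB it genuinely finishes in under a second - "finished after 00:00:00" means done, not still running. Pulling the numbers out of a status line like the one above:

    ```shell
    # Parse a "total bytes scrubbed: ... with N errors" line from
    # btrfs scrub status; the sample string is from the output above.
    STATUS="total bytes scrubbed: 512.00KiB with 0 errors"
    BYTES=$(echo "$STATUS" | sed 's/.*scrubbed: \([^ ]*\) with.*/\1/')
    ERRORS=$(echo "$STATUS" | sed 's/.*with \([0-9]*\) errors.*/\1/')
    echo "scrubbed=$BYTES errors=$ERRORS"
    ```

    So a tiny byte count with 0 errors is consistent with the filesystem thinking it's nearly empty, rather than with the scrub still grinding away.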


    If I do a "ls /srv/test", I get no results.


    So, I await your command :) ...


    Steve

    If you do the same with the original: Is the output the same?

    Didn't do it to the original yet. Haven't reinstalled the original into the system.


    Is 2.04GiB used less than expected?

    I would say, considerably less than expected.


    mount | grep sda

    Yep. Returned nothing.


    mount -t btrfs -o rootflags=recovery,nospace_cache /dev/sda /srv/test

    Returned:
    mount: wrong fs type, bad option, bad superblock on /dev/sda,
    missing codepage or helper program, or other error


    In some cases useful info is found in syslog - try
    dmesg | tail or so.


    mount -t btrfs -o rootflags=recovery,nospace_cache,clear_cache /dev/sda /srv/test

    mount: wrong fs type, bad option, bad superblock on /dev/sda,
    missing codepage or helper program, or other error


    In some cases useful info is found in syslog - try
    dmesg | tail or so.


    So I ran "dmesg | tail" and here are the results:


    [ 1479.577153] BTRFS error (device sda1): bad tree block start, want 20987904 have 0
    [ 1479.577350] BTRFS error (device sda1): bad tree block start, want 20987904 have 0
    [ 1479.577395] BTRFS error (device sda1): failed to read chunk root
    [ 1479.593620] BTRFS error (device sda1): open_ctree failed
    [76455.155515] BTRFS info (device sda): unrecognized mount option 'rootflags=recovery'
    [76455.155592] BTRFS error (device sda): open_ctree failed
    [76756.435464] BTRFS info (device sda): unrecognized mount option 'rootflags=recovery'
    [76756.435539] BTRFS error (device sda): open_ctree failed
    [76787.189105] BTRFS info (device sda): unrecognized mount option 'rootflags=recovery'
    [76787.189184] BTRFS error (device sda): open_ctree failed


    Should I change to the original drive and try the "btrfs filesystem show /dev/sda"?

    Hi. I'm back from my hiatus. Ready to get back into it.

    On the current (copied) drive. It must be mounted for that.


    btrfs scrub start /srv/test

    I tried that first command from option B, but it errored with "not a btrfs filesystem: /srv/test".
    Then I realized I needed to mount the "copy" drive, which I tried to do, but got a series of:


    BTRFS error (device sda1): bad tree block start, want 20987904 have 0
    BTRFS error (device sda1): failed to read chunk root
    BTRFS error (device sda1): open_ctree failed


    In the OMV File Systems page, it says that sda1 is BTRFS, but shows no Total or Available capacity.


    Here's btrfs filesystem show /dev/sda


    Label: 'sdadisk1' uuid: fdce5ae5-fd6d-46b9-8056-3ff15ce9fa16
    Total devices 1 FS bytes used 384.00KiB
    devid 1 size 931.51GiB used 2.04GiB path /dev/sda


    I'm guessing that somewhere along the line I didn't get this thing properly formatted and/or ddrescue'd. It seems like I might need to start that process over, although I can see from the "show" output that 2.04GiB is used on the disk. I know another ddrescue would stress the original drive, so I'll wait for your assessment.


    Staying tuned ...
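    p.s. While waiting, I've come across a few things people try when dmesg shows "failed to read chunk root" - jotting them down here as possibilities, not a plan, until you weigh in. The first two are read-only; the last one writes to the device, so it would be for the copy only:

    ```shell
    # Possibilities I've seen mentioned for "failed to read chunk root".
    # Printing for review, not running; the device name is a placeholder.
    DEV=/dev/sdX
    echo "btrfs inspect-internal dump-super -f $DEV   # inspect superblock + backup roots (read-only)"
    echo "btrfs restore -v $DEV /some/scratch/dir     # salvage files without mounting (read-only)"
    echo "btrfs rescue chunk-recover $DEV             # rebuild chunk tree (writes; copy only, last resort)"
    ```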