Harddrive Failure and Data Recovery

    • OMV 4.x
    • Unless this really is a SnapRAID disaster, I suggest that you re-title the first post to more properly describe what happened.
      --
      Google is your friend and Bob's your uncle!

      RAID - Its ability to disappoint is inversely proportional to the user's understanding of it.

      ASRock Rack C2550D4I C0 Stepping - 16GB ECC - Silverstone DS380
    • SnapRAID didn't damage your files or your drives. From what I have read here you have one or more physically failing hard drives.
      --
      Google is your friend and Bob's your uncle!

      RAID - Its ability to disappoint is inversely proportional to the user's understanding of it.

      ASRock Rack C2550D4I C0 Stepping - 16GB ECC - Silverstone DS380
    • henfri wrote:

      To now restore your data:

      I would not run the scrub on the damaged drive, but on the copy that you made.

      Please shut down and remove the broken drive.

      Put in another drive on which you can store the data we are going to restore. It can be a USB drive.

      Start again.

      Then, first make sure that you properly identify the drive we copied the broken drive to, in your OMV GUI (it may not be /dev/sda anymore). Let's say it is /dev/sdY.

      Properly identify the restore target (e.g. the USB drive), format it with a filesystem you can read in Windows, and mount it. Remember the mount point (e.g. /media/restoredrive/).

      Unmount the drive in the OMV GUI, in case it is mounted.

      run btrfs restore /dev/sdY /mnt/restoredrive | tee /restorelog.txt
      That will run for a while. After it has finished you can check what was restored on /mnt/restoredrive, or shut down and check the content of the drive in Windows.
      Post the restorelog.txt.

      If this does not bring success, we have two other options (btrfs scrub and btrfsck). But this one for sure does not change any data, so it is the safest option, if properly followed.
      OK. I'm back on task via your instructions. So I mounted the disk I'm going to restore to, but there is no information about the mount point other than /dev/sdb. Is there a different mount point designation, and where would I find it?

      ********************************************************
      HAH!! I went back into FileSystems and on a hunch (cause I haven't done this before in OMV) discovered that I could add a column. So I added the mountpoint column and can now proceed.

      What is a bit confusing in your step 7 instructions is "Unmount the drive in the OMV GUI, in case it is mounted". Seemed odd since the previous step said to mount it. So I'm thinking, "well of course it's mounted". Did I miss something there?

      So in my case, the instruction would read like this?:
      btrfs restore /dev/sda /srv/dev-disk-by-label-NewDrive2 | tee /restorelog.txt

      I will do that.
      Did that and it just came back to the prompt. Should it lead with the word "run"?

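      As an aside, a minimal sketch of how the copy device and the restore target's mount point can be double-checked from the shell before running the restore. The device names and the NewDrive2 label below are placeholders taken from this thread; adjust them to whatever lsblk actually reports on your system.

      # list all block devices with filesystem type, label, UUID and current mount point
      lsblk -f

      # confirm where the restore target is mounted (NewDrive2 is the label used later in this thread)
      findmnt /srv/dev-disk-by-label-NewDrive2

      # then restore from the unmounted copy onto the mounted target, logging the output
      btrfs restore /dev/sdY /srv/dev-disk-by-label-NewDrive2 | tee /restorelog.txt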

    • Hello,

      sorry about the unclear instructions.
      The drive to be restored from needs to be unmounted. That drive should not be the broken one but its copy, as we do not want to stress the broken drive any further.
      The target (onto which you restore) needs to be mounted.

      The restore should have taken hours. The "run" was an instruction to you, not to the machine ;)

      Please post the log.

      Have you overwritten/formatted any of our ddrescue attempts?

      What is shown if you run
      btrfs filesystem info /dev/sdX (with the X being the ddrescued drive)

      Greetings,
      Hendrik
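
      To illustrate the mounted/unmounted requirement above, a small sketch, assuming /dev/sdX is the ddrescued copy and NewDrive2 is the target label; findmnt exits non-zero when a device or path is not mounted.

      # the copy we restore FROM should not be mounted
      findmnt --source /dev/sdX && echo "copy is still mounted - unmount it in the OMV GUI first"

      # the target we restore TO must be mounted
      findmnt /srv/dev-disk-by-label-NewDrive2 || echo "target is not mounted - mount it first"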
    • henfri wrote:

      Have you overwritten/formatted any of our ddrescue attempts?
      No, we're still good to go.

      henfri wrote:

      btrfs filesystem info /dev/sdX (with the X being the ddrescued drive)
      Current output:
      btrfs filesystem info /dev/sda
      btrfs filesystem: unknown token 'info'
      usage: btrfs filesystem [<group>] <command> [<args>]

      btrfs filesystem df [options] <path>
          Show space usage information for a mount point
      btrfs filesystem du [options] <path> [<path>..]
          Summarize disk usage of each file.
      btrfs filesystem show [options] [<path>|<uuid>|<device>|label]
          Show the structure of a filesystem
      btrfs filesystem sync <path>
          Force a sync on a filesystem
      btrfs filesystem defragment [options] <file>|<dir> [<file>|<dir>...]
          Defragment a file or a directory
      btrfs filesystem resize [devid:][+/-]<newsize>[kKmMgGtTpPeE]|[devid:]max <path>
          Resize a filesystem
      btrfs filesystem label [<device>|<mount_point>] [<newlabel>]
          Get or change the label of a filesystem
      btrfs filesystem usage [options] <path> [<path>..]
          Show detailed information about internal filesystem usage.


      overall filesystem tasks and information

      Is this the right command?
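
      Judging from the usage text above, the subcommand that prints basic filesystem information is "show" rather than "info" (as the next posts confirm). A hedged example with a placeholder device name:

      # print label, UUID, device list and space used for the filesystem on the copy
      btrfs filesystem show /dev/sdX

      # "usage" gives a more detailed breakdown, but it needs a mounted path rather than a bare device
      btrfs filesystem usage /srv/dev-disk-by-label-NewDrive2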
    • henfri wrote:

      btrfs filesystem info /dev/sdX (with the X being the ddrescued drive)

      henfri wrote:

      show, not info
      btrfs filesystem show /dev/sda
      Label: 'sdadisk1' uuid: fdce5ae5-fd6d-46b9-8056-3ff15ce9fa16
      Total devices 1 FS bytes used 384.00KiB
      devid 1 size 931.51GiB used 2.04GiB path /dev/sda



      henfri wrote:

      run btrfs restore /dev/sdY /mnt/restoredrive | tee /restorelog.txt
      Well, made sure the new drive was mounted and ran this command:

      btrfs restore /dev/sda /srv/dev-disk-by-label-NewDrive2 | tee /restorelog.txt

      It just came back to the prompt, so nothing happened here. Is the command right? I hadn't overwritten or formatted the original ddrescue drive (the 2TB drive) or affected it in any way. Within the GUI it shows that it's unmounted and NewDrive2 is mounted. Should I try with that second ddrescue drive I did? I'm not doing anything until I hear back from you, Hendrik. :)
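
      When btrfs restore drops straight back to the prompt with an empty log, it helps to capture stderr as well and to turn on verbose output, so the log shows either per-file progress or the actual error. A sketch assuming a bash shell and reasonably recent btrfs-progs; the device and target path are the ones used above:

      # -v lists every file as it is restored; 2>&1 also captures error messages in the log
      btrfs restore -v /dev/sda /srv/dev-disk-by-label-NewDrive2 2>&1 | tee /restorelog.txt

      # exit status of the restore itself (tee is the last command in the pipe, so use PIPESTATUS)
      echo "${PIPESTATUS[0]}"

      # -D is a dry run: it only lists what would be recovered, without writing anything
      btrfs restore -v -D /dev/sda /srv/dev-disk-by-label-NewDrive2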
    • henfri wrote:

      Post the restorelog.txt
      Ran more /restorelog.txt ... returned nothing but the prompt.

      henfri wrote:

      ls /srv/dev-disk-by-label-NewDrive2
      mount /srv/dev-disk-by-label-NewDrive2
      ls /srv/dev-disk-by-label-NewDrive2 returned nothing
      mount /srv/dev-disk-by-label-NewDrive2 returned this:

      mount: /dev/sdb1 is already mounted or /srv/dev-disk-by-label-NewDrive2 busy
      /dev/sdb1 is already mounted on /srv/dev-disk-by-label-NewDrive2


      henfri wrote:

      dmesg again
      see attachment - just a massive list of mei_me listings
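
      When dmesg is flooded with unrelated mei_me lines, filtering for storage messages (and, if smartmontools is installed, checking SMART data) narrows things down. Device names below are placeholders:

      # show only recent kernel messages that look storage-related
      dmesg | grep -iE 'ata[0-9]|sd[a-z]|error|i/o' | tail -n 50

      # SMART health and error log of a suspect disk (requires the smartmontools package)
      smartctl -a /dev/sdX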
    • curious1 wrote:

      gderf wrote:

      SnapRAID didn't damage your files or your drives. From what I have read here you have one or more physically failing hard drives.
      How does "Harddrive Failure and Data Recovery" sound for a title?
      Sounds good to me. Or perhaps you can work the word "hosed" in somewhere. LOL.
      --
      Google is your friend and Bob's your uncle!

      RAID - Its ability to disappoint is inversely proportional to the user's understanding of it.

      ASRock Rack C2550D4I C0 Stepping - 16GB ECC - Silverstone DS380