Accidentally attempted to re-create the filesystem on Data Disks.. Looking for help to recover

  • Hello All,

    While I'm new to the community, I have been an OMV user for a couple of years now. I was running OMV 3.x and upgraded to 4.x a few months back.


    I wanted to upgrade the system to the latest version (6.x), and that's where the trouble started.

    I'm running OMV from a USB stick, with 2×4 TB data disks set up in RAID1.

    Since I couldn't find the procedure for upgrading from 4.x to 5.x to 6.x, I installed 6.x on a new USB stick and booted the system from it.

    It booted all okay, but it didn't detect the file system. Not knowing whether I had to create a new FS, or whether the creation step would detect the existing one, I attempted to create a new FS on the RAID1 device.

    When it started writing, I feared data loss and forcefully switched off the system. Then I plugged the old USB stick with OMV 4.x back in, and now everything is messed up. My data is no longer accessible.

    First, the RAID was showing in a re-sync (waiting) state, so I ran 'mdadm --readwrite /dev/md0', which sort of fixed the RAID error.
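
    (For reference, the commands around that step looked roughly like this; a sketch assuming the array is /dev/md0 as above:)

    Code
    cat /proc/mdstat              # a pending resync shows as resync=PENDING (auto-read-only)
    mdadm --detail /dev/md0       # full array state and member devices
    mdadm --readwrite /dev/md0    # clear auto-read-only so the resync can proceed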

    Then, for the FS, I ran fsck /dev/md0 and it fixed something (I don't know exactly what it did, though; I kept saying "yes" to all questions).
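
    (In hindsight, a read-only pass first would have shown the damage without changing anything; a sketch, assuming an ext4 filesystem on the unmounted /dev/md0:)

    Code
    fsck.ext4 -n /dev/md0     # -n: report problems, answer "no" to every fix
    # only after reviewing that output would an interactive repair make sense:
    # fsck.ext4 /dev/md0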

    Current state: it shows the disk mounted, but nothing beyond that.

    I have a feeling the data is still there on the disks (not overwritten or deleted) because of the output below, but I don't know how to fix the problem:


    Code
    root@signtrigger-nas-prod:~# df -h
    Filesystem      Size  Used Avail Use% Mounted on
    udev            7.8G     0  7.8G   0% /dev
    tmpfs           1.6G  9.2M  1.6G   1% /run
    /dev/sdc1        14G  2.2G   11G  17% /
    tmpfs           7.8G     0  7.8G   0% /dev/shm
    tmpfs           5.0M     0  5.0M   0% /run/lock
    tmpfs           7.8G     0  7.8G   0% /sys/fs/cgroup
    tmpfs           7.8G     0  7.8G   0% /tmp
    /dev/md0        3.6T  2.2T  1.5T  61% /srv/dev-disk-by-id-md-name-signtrigger-nas-prod-0



    I wanted to try the testdisk utility to fix this, but thought it was better to check first whether anyone could suggest the right way to proceed from here.

  • deep7861

    Changed the title of the thread from "Messed up a bit.. Looking for urgent help" to "Accidentally attempted to re-create the filesystem on Data Disks.. Looking for help to recover".
  • Hi, could I please get some help here? Without a fix, I'm seriously going to lose many years of critical data... I wish I could go back in time and never take that action, but that isn't possible!

  • Thanks ananas, but I have never used testdisk in the past, so I wanted some instructions on how not to mess things up further.

    What I'm thinking is: break the RAID configuration and then try to recover from one disk. I can use the other disk as the destination to write to.


    But this is just a thought for now. I could use some help with the steps for getting it done (if the thought is right).

  • Or let me share where I stopped: I tried running testdisk /dev/md0 (the RAID device). It selected the disk, and on the partition table selection screen, 'None' was selected by default. There was also a warning against selecting None. So I exited the program.


    What partition type should I select, and what does it signify? Do we select whatever was on the disk before, or what exactly does it mean?

    What I'm thinking is: break the RAID configuration and then try to recover from one disk.

    Please stop doing things you don't have a clue about, and wait for some input from someone.


    Instead of just doing things that will probably do more damage than good.


    If geaves has any idea how you can salvage your RAID, he'll share it.


    It's because of stories like yours that no one here advises users to use a RAID system.

    Or, at least, not without learning really well how to get out of trouble and having a GOOD, SOLID backup of the DATA.

    • Official Post

    If geaves has any idea how you can salvage your RAID, he'll share it.

    Thanks Soma :) I did look at this when it was posted and would approach it differently from ananas.


    The question is: why use testdisk at this moment in time? There is no confirmation that the data on the array is missing or corrupted, and for deep7861, there is a step-by-step guide on the testdisk website.


    If this was me, I would reconnect the USB with V6 on it and reboot the system; the array should come up, and so should the file system. If the array doesn't display, then a simple cat /proc/mdstat will return its state. If the array comes up but the file system doesn't, then a simple mount -a should correct that.
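
    (Written out as commands, roughly; a sketch, run as root:)

    Code
    cat /proc/mdstat     # is md0 assembled and showing [UU]?
    blkid /dev/md0       # does a filesystem signature still exist on the array?
    mount -a             # mount everything listed in /etc/fstab
    df -h                # confirm the array's mount point and used space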


    If that works, then it's a case of recreating the shares that reside on the array (OMV does not recreate these).


    Another option for simply checking whether the data is still on the array is to install Midnight Commander; the array and the data will be under /srv.
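
    (A minimal sketch of that check, assuming the mount point from the df output above:)

    Code
    apt-get install mc    # Midnight Commander
    mc /srv               # browse the array's mount point
    # or, without mc:
    ls -la /srv/dev-disk-by-id-md-name-signtrigger-nas-prod-0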

  • Thanks Soma & geaves for helping out.

    I know I messed up in an attempt to achieve something else, and that's why I'm looking for help, at least to make sure things are not messed up further and I can recover/restore the data. I do appreciate your time and help, really. :)


    Here is the output of /proc/mdstat:

    Code
    root@signtrigger-nas-prod:~# cat /proc/mdstat
    Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
    md0 : active raid1 sda1[0] sdb1[2]
          3906886464 blocks super 1.2 [2/2] [UU]
          bitmap: 0/30 pages [0KB], 65536KB chunk

    unused devices: <none>
    root@signtrigger-nas-prod:~#



    I had already run mount -a last time, and it did mount /dev/md0 under /srv/, and it shows used/available capacity matching what it was before... but the directory only contains lost+found:


    Code
    root@signtrigger-nas-prod:~# df -h
    Filesystem      Size  Used Avail Use% Mounted on
    udev            7.8G     0  7.8G   0% /dev
    tmpfs           1.6G   18M  1.6G   2% /run
    /dev/sdc1        14G  2.2G   11G  17% /
    tmpfs           7.8G     0  7.8G   0% /dev/shm
    tmpfs           5.0M     0  5.0M   0% /run/lock
    tmpfs           7.8G     0  7.8G   0% /sys/fs/cgroup
    tmpfs           7.8G  8.0K  7.8G   1% /tmp
    /dev/md0        3.6T  2.2T  1.5T  61% /srv/dev-disk-by-id-md-name-signtrigger-nas-prod-0

    root@signtrigger-nas-prod:/srv/dev-disk-by-id-md-name-signtrigger-nas-prod-0# ls -la
    total 16
    drwxr-xr-x   3 root root 4096 Nov 19 23:05 .
    drwxr-xr-x   4 root root 4096 Oct 11  2017 ..
    drwx------ 390 root root 8192 Nov 19 23:05 lost+found
    root@signtrigger-nas-prod:/srv/dev-disk-by-id-md-name-signtrigger-nas-prod-0#
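
    (For what it's worth, the link count of 390 on lost+found suggests fsck reconnected a few hundred orphaned directories there, so it's worth looking inside it before doing anything destructive:)

    Code
    ls -la /srv/dev-disk-by-id-md-name-signtrigger-nas-prod-0/lost+found | head -n 20
    du -sh /srv/dev-disk-by-id-md-name-signtrigger-nas-prod-0/lost+found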


    I'll wait for further help on this matter. Thanks in advance :)

    • Official Post

    I'll wait for further help on this matter

    :?: ananas has already stated that you need to ensure you copy recovered files to another drive, and I have pointed you to the step-by-step guide on the testdisk site. What further help do you require?

  • geaves Thanks for the guide.

    You mentioned you would approach it differently, and I shared the outputs of the commands you suggested. If testdisk is the only option from here, I'll try using it.

    The only question left is about the partition type. When I ran testdisk, it was defaulting to 'None', but 'None' also gives a warning (the instructions say to go with the default selection). So, which option is the right one to select here?


    • Official Post

    When I ran testdisk, it was defaulting to 'None', but 'None' also gives a warning

    That could suggest that testdisk was unable to determine the partition type, and the warning explains why not to select None, which makes sense. So, looking at the menu, which one do you think it should be?
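
    (For context, an md device normally carries the filesystem directly, with no partition table, so a best-guess sketch of the testdisk flow, with menu names from testdisk 7.x, would be:)

    Code
    testdisk /dev/md0
    # -> [Proceed] on the disk selection screen
    # -> partition table type: with ext4 written directly to the md device
    #    there is no partition table, so [None] is the expected choice
    # -> [Analyse] to search for the filesystem, or [Advanced] -> [Superblock]
    #    to list backup superblocks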

  • I tried using testdisk on this setup and couldn't get very far.

    To make sure I knew how it works and how it undeletes files, I tried to recover files from another USB stick and it worked.

    But on the RAID setup, these are the options showing up:



    None of the given options leads to an undelete option. 'Superblock' lists a few blocks, but that's it.


    Then I ran mke2fs as below (with -n, which only simulates and writes nothing), and it lists the backup superblocks:


    Code
    root@signtrigger-nas-prod:/# mke2fs -n /dev/md0
    mke2fs 1.43.4 (31-Jan-2017)
    Creating filesystem with 976721616 4k blocks and 244187136 inodes
    Filesystem UUID: 937674a5-c04c-496e-828b-f8b737aa37a2
    Superblock backups stored on blocks:
            32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632,
            2654208, 4096000, 7962624, 11239424, 20480000, 23887872, 71663616,
            78675968, 102400000, 214990848, 512000000, 550731776, 644972544


    From here, I'm not too sure what to do.

    (I tried fsck.ext4 -p -b 32768 -B 4096 /dev/md0 as well, with a few of the superblock numbers, but no change.)
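
    (A cautious sketch before any further write attempts: dump the superblock summary and run read-only e2fsck passes against the backups listed above:)

    Code
    dumpe2fs -h /dev/md0                   # superblock summary, if still readable
    e2fsck -n -b 32768 -B 4096 /dev/md0    # -n: check only, change nothing
    e2fsck -n -b 98304 -B 4096 /dev/md0    # try further backups from the list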


    Any further ideas on what to do?

    • Official Post

    I tried to recover files from another USB stick and it worked

    Well, at least you have confirmation that it does what it says on the tin.

    Any further ideas on what to do

    Honestly, no, but you could try a search for 'testdisk raid 1 recovery linux'; it throws up a number of threads from their forum.

    None of the given options leads to an undelete option. 'Superblock' lists a few blocks, but that's it

    :?: Using the guide from the testdisk site, did you run Analyse on the array and/or on a single drive within the array?
