Destroyed file system?

  • Hello guys,


    I would need some help regarding my filesystem on OMV.


    I'm running an HP MicroServer with two drives (sdb & sdc) in a RAID1 array (md0), while the OS runs on an SSD (sda). To copy a *.zip file from a USB stick (sdd), I played around a bit on the terminal.
    For this I mounted the stick at /mnt/USB0 and then did something stupid... I executed "cp /mnt/USB0/file.zip /dev/md0"
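
For anyone finding this thread later: the mistake is that /dev/md0 is the raw array device, so cp wrote the zip straight over the filesystem's first blocks. A minimal guard sketch; the `safe_cp` helper and the OMV mount path in the comment are hypothetical examples, not anything from this system:

```shell
# Sketch: refuse to cp onto a raw device node; always copy to a mount point.
safe_cp() {
    src=$1 dst=$2
    case $dst in
        /dev/*)
            echo "refusing to write to raw device $dst" >&2
            return 1 ;;
    esac
    cp -- "$src" "$dst"
}

# What should have been run instead (example OMV mount path, not verified):
#   safe_cp /mnt/USB0/file.zip /srv/dev-disk-by-label-NAS/
```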


    Now my filesystem is gone. The RAID itself, however, is still fine (mdadm --detail output):


    Version : 1.2
    Creation Time : Sat Feb 25 12:54:54 2017
    Raid Level : raid1
    Array Size : 3906887360 (3725.90 GiB 4000.65 GB)
    Used Dev Size : 3906887360 (3725.90 GiB 4000.65 GB)
    Raid Devices : 2
    Total Devices : 2
    Persistence : Superblock is persistent


    Update Time : Sat Jan 13 21:23:48 2018
    State : clean
    Active Devices : 2
    Working Devices : 2
    Failed Devices : 0
    Spare Devices : 0


    Name : NAS:RAID1 (local to host NAS)
    UUID : cbd6add8:ffdcfcbb:73f20e90:b88e0d73
    Events : 118


    Number Major Minor RaidDevice State
    0 8 16 0 active sync /dev/sdb
    1 8 32 1 active sync /dev/sdc


    Can anyone help me out? This is the syslog:


    Jan 13 21:20:40 NAS kernel: [597691.462973] EXT4-fs error (device md0): ext4_map_blocks:503: inode #91750401: block 367009824: comm smbd: lblock 0 mapped to illegal pblock (length 1)
    Jan 13 21:20:40 NAS kernel: [597691.623272] EXT4-fs error (device md0): ext4_map_blocks:503: inode #91750401: block 367009824: comm smbd: lblock 0 mapped to illegal pblock (length 1)
    Jan 13 21:20:46 NAS monit[3704]: 'fs_media_a783f4e2-4f3e-4437-8bdd-89f8aef078b8' space usage 100.0% matches resource limit [space usage>80.0%]
    Jan 13 21:20:49 NAS kernel: [597699.955079] EXT4-fs error (device md0): ext4_map_blocks:503: inode #91750401: block 367009824: comm smbd: lblock 0 mapped to illegal pblock (length 1)
    Jan 13 21:20:54 NAS kernel: [597705.760192] EXT4-fs error (device md0): ext4_map_blocks:503: inode #91750401: block 367009824: comm smbd: lblock 0 mapped to illegal pblock (length 1)
    Jan 13 21:21:04 NAS kernel: [597715.525762] EXT4-fs error (device md0): ext4_map_blocks:503: inode #91750401: block 367009824: comm smbd: lblock 0 mapped to illegal pblock (length 1)
    Jan 13 21:21:16 NAS monit[3704]: 'fs_media_a783f4e2-4f3e-4437-8bdd-89f8aef078b8' space usage 100.0% matches resource limit [space usage>80.0%]
    Jan 13 21:21:46 NAS monit[3704]: 'fs_media_a783f4e2-4f3e-4437-8bdd-89f8aef078b8' space usage 100.0% matches resource limit [space usage>80.0%]
    Jan 13 21:22:16 NAS monit[3704]: 'fs_media_a783f4e2-4f3e-4437-8bdd-89f8aef078b8' space usage 100.0% matches resource limit [space usage>80.0%]
    Jan 13 21:22:25 NAS kernel: [597796.393019] EXT4-fs error (device md0): ext4_map_blocks:503: inode #91750401: block 367009824: comm smbd: lblock 0 mapped to illegal pblock (length 1)
    Jan 13 21:22:25 NAS kernel: [597796.423117] EXT4-fs error (device md0): ext4_map_blocks:503: inode #91750401: block 367009824: comm smbd: lblock 0 mapped to illegal pblock (length 1)
    Jan 13 21:22:29 NAS kernel: [597800.573910] EXT4-fs error (device md0): ext4_map_blocks:503: inode #91750401: block 367009824: comm smbd: lblock 0 mapped to illegal pblock (length 1)
    Jan 13 21:22:35 NAS kernel: [597806.529447] EXT4-fs error (device md0): ext4_map_blocks:503: inode #91750401: block 367009824: comm smbd: lblock 0 mapped to illegal pblock (length 1)
    Jan 13 21:22:37 NAS kernel: [597808.705128] EXT4-fs error (device md0): ext4_map_blocks:503: inode #232652801: block 930619424: comm smbd: lblock 0 mapped to illegal pblock (length 1)
    Jan 13 21:22:37 NAS kernel: [597808.715176] EXT4-fs error (device md0): ext4_map_blocks:503: inode #232652801: block 930619424: comm smbd: lblock 0 mapped to illegal pblock (length 1)
    Jan 13 21:22:37 NAS kernel: [597808.745298] EXT4-fs error (device md0): ext4_map_blocks:503: inode #232652801: block 930619424: comm smbd: lblock 0 mapped to illegal pblock (length 1)
    Jan 13 21:22:40 NAS kernel: [597811.452240] EXT4-fs error (device md0): ext4_map_blocks:503: inode #91750401: block 367009824: comm smbd: lblock 0 mapped to illegal pblock (length 1)
    Jan 13 21:22:46 NAS monit[3704]: 'fs_media_a783f4e2-4f3e-4437-8bdd-89f8aef078b8' space usage 100.0% matches resource limit [space usage>80.0%]





    root@NAS:~# cat /proc/mdstat
    Personalities : [raid1]
    md0 : active (auto-read-only) raid1 sdb[0] sdc[1]
    3906887360 blocks super 1.2 [2/2] [UU]


    unused devices: <none>
    root@NAS:~# blkid
    /dev/sda1: UUID="76ae0e0c-6c71-4f4e-8e59-4177d7e52fb0" TYPE="ext4"
    /dev/sda5: UUID="dda92853-3586-4dd3-a535-efdb93af701c" TYPE="swap"
    /dev/sdc: UUID="cbd6add8-ffdc-fcbb-73f2-0e90b88e0d73" UUID_SUB="eb61a0b9-9e22-04ef-c54a-1114a4993c70" LABEL="NAS:RAID1" TYPE="linux_raid_member"
    /dev/sdb: UUID="cbd6add8-ffdc-fcbb-73f2-0e90b88e0d73" UUID_SUB="22686cdb-39fd-860b-b3b9-b4d16e949603" LABEL="NAS:RAID1" TYPE="linux_raid_member"
    root@NAS:~# cat /proc/mdstat
    Personalities : [raid1]
    md0 : active (auto-read-only) raid1 sdb[0] sdc[1]
    3906887360 blocks super 1.2 [2/2] [UU]



    wuestenfuchs :(

    • Official post

    I know that, but I was referring to this Fennek

    Then you should have used that as your user name :)
    Some names and abbreviations are a no-go, even if I don't know your political orientation.

  • I think most of the people who could help you don't understand why you chose this username even though you know it can be (mis?)leading.


    If you knew the horrible possible meaning of the name, why did you choose it?


    How long did the copy run? How big is the ZIP?

  • +1 for TestDisk. It has recovered data for me in the past that other tools could not. You may find (if TestDisk allows it) that it is easier to use TestDisk to copy all the data off one of the drives in your RAID - since you mirror, there is no striping. Tell TestDisk to examine one of the drives; it can show you files from previous partitions and let you copy them off. Then destroy the whole RAID and start again.



    Sent from my iPad using Tapatalk
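
One practical detail when pointing TestDisk (or a loop device) at a single member of a 1.2-superblock array: the filesystem does not start at sector 0 but at the array's Data Offset, which `mdadm --examine` reports. A sketch of turning that into a byte offset; the sample output line below is illustrative, not taken from this system:

```shell
# Illustrative "mdadm --examine /dev/sdb" line; the real value will differ.
examine_line='    Data Offset : 262144 sectors'

sectors=$(printf '%s\n' "$examine_line" | awk '/Data Offset/ {print $4}')
offset_bytes=$((sectors * 512))    # mdadm reports 512-byte sectors here
echo "$offset_bytes"               # prints 134217728 for this sample

# Then, for example, expose the filesystem read-only at that offset:
#   losetup --read-only -o "$offset_bytes" -f --show /dev/sdb
```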

  • OK, thank you! I will try to find some disks, clone them and run TestDisk.


    Just for understanding:


    The "md0" block device (device node) that I overwrote - does it contain all the information about the RAID1? It is located on sda, while my RAID consists of sdb and sdc. So when I run TestDisk (from a live CD / USB stick), will I only touch the two disks of my RAID? ?(
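
Cloning first is the right instinct: all recovery attempts should run against a copy. GNU ddrescue is the usual tool for this, since it keeps a map file and skips bad sectors; the device paths in the comment are placeholders, and the runnable part below demonstrates the idea on an ordinary file with plain dd:

```shell
# With real disks (placeholder paths), something like:
#   ddrescue -f -n /dev/sdb /dev/sdX rescue.map
#
# Demonstrated here on an ordinary file with plain dd:
src=$(mktemp)
dst=$(mktemp)
printf 'raid member image' > "$src"
dd if="$src" of="$dst" bs=4K status=none   # raw block-for-block copy
cmp -s "$src" "$dst" && echo clone-ok
```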

  • In reply to the question above:


    I don’t use mdadm, so I don’t know the complete ins and outs of it, but to my understanding the data exists in a ‘normal’ Linux filesystem format (ext4?) on each drive. md0 is a virtual device: data written to it is transferred identically to each drive (since this is RAID1). One correction, though: /dev/md0 is just a device node, not a file stored on sda. The RAID metadata (the version 1.2 superblock) lives on the member disks themselves, which is why blkid reports sdb and sdc as linux_raid_member. That also means writing to /dev/md0 wrote through to both sdb and sdc.


    So in theory:


    1. Get a USB drive of sufficient capacity to house your data (internal would also work, but USB is easier) - preferably empty, as you’d be better off formatting it as ext4 or Btrfs for optimum Linux compatibility.


    2. Run TestDisk and point it at sdb OR sdc. The contents will be the same since you had RAID1.


    3. HOPE that it sees your partition information and allows you to save files to your USB drive. Run through the files it sees and verify it’s what you expect. If so, tell TestDisk the location of the USB drive and get a coffee [or 10].


    ^^^ you may wish to check it can actually see your data prior to acquiring the extra drive.


    4. Unplug the USB drive, safe with your data, then destroy md0/sdb/sdc (format and start again).


    5. Plug USB back in and restore data.


    6. Don’t be root for copy commands in future, or don’t hit enter before the command is complete, or have backups... RAID is NOT NOT NOT a backup ;)



    Sent from my iPhone using Tapatalk
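
Since step 4 above is irreversible, it may be worth a quick integrity check of the rescued copy between steps 3 and 4. A sketch using a checksum manifest; the temp directory stands in for the USB drive's rescue folder, and the file name is just an example:

```shell
rescued=$(mktemp -d)        # stands in for the rescue dir on the USB drive
manifest=$(mktemp)
printf 'important' > "$rescued/doc.txt"

# Record checksums of everything TestDisk saved...
( cd "$rescued" && find . -type f -exec sha256sum {} + ) > "$manifest"

# ...and re-check them (and again after step 5, once the data is restored):
( cd "$rescued" && sha256sum -c --quiet "$manifest" ) && echo verify-ok
```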

  • I removed the RAID and worked with one disk. On this one I could restore the GPT, but not the filesystem. With PhotoRec I found a lot of files within a few minutes of running the program, but of course it's a mess to sort 4 TB of data ;)


    After trying some other tools and spending many hours in front of the monitor, I gave the other disk to a recovery specialist. Let's see if they can recover the filesystem.
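
On the "mess to sort 4 TB" point: PhotoRec dumps everything into flat recup_dir.N folders with generic names, but a small loop can at least regroup the carved files by extension. A sketch, where the temp directories and file names only mimic PhotoRec's default output layout:

```shell
out=$(mktemp -d)       # stands in for PhotoRec's output directory
sorted=$(mktemp -d)
mkdir -p "$out/recup_dir.1"
touch "$out/recup_dir.1/f0001234.jpg" "$out/recup_dir.1/f0005678.pdf"

# Move every carved file into a per-extension folder.
find "$out" -type f | while IFS= read -r f; do
    ext=${f##*.}
    mkdir -p "$sorted/$ext"
    mv -- "$f" "$sorted/$ext/"
done

ls "$sorted"    # one directory per extension, e.g. jpg pdf
```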
