Filesystem Missing After RAID 10 (1) Disk Failure USB3 Raid QNAP TR-004

  • Dear OMV Community


    I am a beginner when it comes to running servers and software suites like openmediavault.

    After running my Raspberry Pi 4 server with OMV (kernel 5.10.103-v7l+) for around one year, connected to my QNAP TR-004 storage over USB3 (hardware RAID 10, 4 disks of 4 TB each),

    one disk failed a few days ago. Because this was my first disk failure ever, I immediately bought a new disk and swapped it with the faulty one.


    One main step I forgot, as I realise now after reading a lot of posts here, was to do the rebuild with openmediavault. Instead I connected the storage box to my Windows device and let the QNAP software rebuild the RAID (all disks were green after the rebuild). Then I hooked it back up to the openmediavault server, and now the web GUI says I have a missing file system. I know that if I had known more about it I would certainly have followed the steps on this forum first, but

    I thought fixing the RAID problem with the software that came with the storage box was the priority.


    Therefore I now ask for your help to solve this issue, if it is still possible.


    The commands I first ran to gather diagnostic data are the following:


    Command: cat /proc/mdstat


    root@openmediavault:~# cat /proc/mdstat

    cat: /proc/mdstat: No such file or directory


    Command: lsblk


    root@openmediavault:~# lsblk

    NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT

    sda 8:0 0 465.8G 0 disk

    ├─sda1 8:1 0 256M 0 part /boot

    └─sda2 8:2 0 465.5G 0 part /

    sdb 8:16 0 7.3T 0 disk


    Command: fdisk -l | grep "Disk "


    root@openmediavault:~# fdisk -l | grep "Disk "

    The primary GPT table is corrupt, but the backup appears OK, so that will be used.

    Disk /dev/ram0: 4 MiB, 4194304 bytes, 8192 sectors

    Disk /dev/ram1: 4 MiB, 4194304 bytes, 8192 sectors

    Disk /dev/ram2: 4 MiB, 4194304 bytes, 8192 sectors

    Disk /dev/ram3: 4 MiB, 4194304 bytes, 8192 sectors

    Disk /dev/ram4: 4 MiB, 4194304 bytes, 8192 sectors

    Disk /dev/ram5: 4 MiB, 4194304 bytes, 8192 sectors

    Disk /dev/ram6: 4 MiB, 4194304 bytes, 8192 sectors

    Disk /dev/ram7: 4 MiB, 4194304 bytes, 8192 sectors

    Disk /dev/ram8: 4 MiB, 4194304 bytes, 8192 sectors

    Disk /dev/ram9: 4 MiB, 4194304 bytes, 8192 sectors

    Disk /dev/ram10: 4 MiB, 4194304 bytes, 8192 sectors

    Disk /dev/ram11: 4 MiB, 4194304 bytes, 8192 sectors

    Disk /dev/ram12: 4 MiB, 4194304 bytes, 8192 sectors

    Disk /dev/ram13: 4 MiB, 4194304 bytes, 8192 sectors

    Disk /dev/ram14: 4 MiB, 4194304 bytes, 8192 sectors

    Disk /dev/ram15: 4 MiB, 4194304 bytes, 8192 sectors

    Disk /dev/sda: 465.8 GiB, 500107862016 bytes, 976773168 sectors

    Disk model: Forty

    Disk identifier: 0x1e168b04

    Disk /dev/sdb: 7.3 TiB, 8001456963584 bytes, 15627845632 sectors

    Disk model: TR-004 DISK00

    Disk identifier: 0E5DD83B-7627-47B6-B69E-DD4C0163A147


    Command: cat /etc/mdadm/mdadm.conf


    root@openmediavault:~# cat /etc/mdadm/mdadm.conf

    # This file is auto-generated by openmediavault (https://www.openmediavault.org)

    # WARNING: Do not edit this file, your changes will get lost.


    # mdadm.conf

    #

    # Please refer to mdadm.conf(5) for information about this file.

    #


    # by default, scan all partitions (/proc/partitions) for MD superblocks.

    # alternatively, specify devices to scan, using wildcards if desired.

    # Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.

    # To avoid the auto-assembly of RAID devices a pattern that CAN'T match is

    # used if no RAID devices are configured.

    DEVICE partitions


    # auto-create devices with Debian standard permissions

    CREATE owner=root group=disk mode=0660 auto=yes


    # automatically tag new arrays as belonging to the local system

    HOMEHOST <system>


    # definitions of existing MD arrays



    What I found so far on the forum is that in most cases cat /proc/mdstat should output something

    to continue with, but for me the file does not even exist.
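
    If I understand it correctly, that may actually be expected here: the TR-004 in hardware RAID mode presents Linux with a single 7.3 TiB disk (/dev/sdb), so mdadm and /proc/mdstat are never involved. As far as I know, the following is a read-only check that should confirm there is no software RAID metadata on the device:

    # should report "No md superblock detected", because the RAID lives
    # inside the TR-004 enclosure and not in Linux mdadm
    mdadm --examine /dev/sdb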


    With fdisk -l | grep "Disk " I get the message: "The primary GPT table is corrupt, but the backup appears OK, so that will be used."


    Does this mean there is still hope for a solution?
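
    From what I have read, gdisk can rebuild a corrupt primary GPT from the backup table. A rough sketch of what I believe that would look like (I have not dared to run the write step yet):

    # read-only sanity check of the partition table first
    sgdisk --verify /dev/sdb

    # gdisk can then rebuild the primary GPT from the backup copy:
    # gdisk /dev/sdb
    #   r    (recovery and transformation menu)
    #   b    (use backup GPT header, rebuilding the main one)
    #   w    (write the repaired table to disk)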


    The screenshots I included are the one showing the missing filesystem in the OMV web GUI and the one showing my storage units.

    The RAID management screen is empty.


    If someone knows how to fix this problem and can help me through it,

    it would be very much appreciated.


    If you need more information, I can provide it.


    Kind Regards

    ShadowVault



  • KM0201

    Approved the thread.
  • Hi ananas


    I did the RAID rebuild with their software because the box had red LED lights. After rebuilding the RAID 10 overnight, all lights were green and it gave the status "RAID rebuild successful".

    The problem I am having is that openmediavault can't find the filesystem after connecting the disk back to it.


    In the picture here it says "Missing".



    Kind regards

    ShadowVault

  • Output for blkid


    /dev/sda1: UUID="1B6C-0A95" TYPE="vfat" PARTUUID="1e168b04-01"

    /dev/sda2: UUID="24b3a7d4-fd60-45a9-a8ef-9b32736d0485" TYPE="ext4" PARTUUID="1e168b04-02"

    /dev/sdb: UUID="d276c02d-2398-4ce3-9642-f37c5220b9c4" TYPE="ext2" PTUUID="0e5dd83b-7627-47b6-b69e-dd4c0163a147" PTTYPE="gpt"


    Output for lsblk


    NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT

    sda 8:0 0 465.8G 0 disk

    ├─sda1 8:1 0 256M 0 part /boot

    └─sda2 8:2 0 465.5G 0 part /

    sdb 8:16 0 7.3T 0 disk



    With cat /var/log/syslog I also notice these errors:


    Aug 12 19:09:10 openmediavault systemd[1]: var-lib-docker-containers-c95f221c6286fd2d9376d9a1eac41ea9dc2beed6aa20325af8300f6e147387f5-mounts-shm.mount: Succeeded.

    Aug 12 19:09:10 openmediavault systemd[1]: var-lib-docker-overlay2-0fae3e846c9f1756afa39a14f6be4f1977dcb59d17e019488cec41da6fcce293-merged.mount: Succeeded.

    Aug 12 19:09:33 openmediavault monit[6566]: Lookup for '/srv/dev-disk-by-uuid-b453f40b-1604-448a-9cb2-dbf4259940c8' filesystem failed -- not found in /proc/self/mounts

    Aug 12 19:09:33 openmediavault monit[6566]: Filesystem '/srv/dev-disk-by-uuid-b453f40b-1604-448a-9cb2-dbf4259940c8' not mounted

    Aug 12 19:09:33 openmediavault monit[6566]: 'filesystem_srv_dev-disk-by-uuid-b453f40b-1604-448a-9cb2-dbf4259940c8' unable to read filesystem '/srv/dev-disk-by-uuid-b453f40b-1604-448a-9cb2-dbf4259940c8' state

    Aug 12 19:09:33 openmediavault monit[6566]: 'filesystem_srv_dev-disk-by-uuid-b453f40b-1604-448a-9cb2-dbf4259940c8' trying to restart

    Aug 12 19:09:33 openmediavault monit[6566]: 'mountpoint_srv_dev-disk-by-uuid-b453f40b-1604-448a-9cb2-dbf4259940c8' status failed (1) -- /srv/dev-disk-by-uuid-b453f40b-1604-448a-9cb2-dbf4259940c8 is not a mountpoint

  • ShadowVault

    Changed the thread title from "Filesystem Missing After RAID 10 (1) Disk Failure" to "Filesystem Missing After RAID 10 (1) Disk Failure USB3 Raid TR-004".
  • The UUID has changed. In post #2 I can see UUID = b453f40b-...... (you can find the complete UUID in /etc/openmediavault/config.xml)

    Now it is "d276c02d-2398-4ce3-9642-f37c5220b9c4"


    You can use "tune2fs" to change the UUID back to the old expected value.

    Read the man page.
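
    Roughly like this, I would think (the full old UUID appears in your monit log above; double-check it against config.xml first, and make sure the filesystem is not mounted):

    # write the old UUID back into the ext superblock
    tune2fs -U b453f40b-1604-448a-9cb2-dbf4259940c8 /dev/sdb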


  • I changed the UUID back to the old one with tune2fs:


    blkid|grep UUID

    /dev/sda1: UUID="1B6C-0A95" TYPE="vfat" PARTUUID="1e168b04-01"

    /dev/sda2: UUID="24b3a7d4-fd60-45a9-a8ef-9b32736d0485" TYPE="ext4" PARTUUID="1e168b04-02"

    /dev/sdb: UUID="b453f40b-1604-448a-9cb2-dbf4259940c8" TYPE="ext2" PTUUID="0e5dd83b-7627-47b6-b69e-dd4c0163a147" PTTYPE="gpt"


    What should I do now for the next step?


  • When I try to mount it, I get the following error about cleaning:


    sudo mount /dev/sdb

    mount: /srv/dev-disk-by-uuid-b453f40b-1604-448a-9cb2-dbf4259940c8: mount(2) system call failed: Structure needs cleaning.
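
    From what I have read, "Structure needs cleaning" means the ext filesystem itself is corrupted and the kernel wants it checked. I believe a read-only check like this is safe to run first, since it does not modify the disk:

    # -n answers "no" to every question, so nothing is written
    e2fsck -n /dev/sdb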

  • The UI is now giving this error, and the filesystem UUID doesn't show anymore if you compare with the other screenshot.


  • After an OMV restart I am still getting the error: Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C.UTF-8; mount -v --source '/dev/disk/by-uuid/b453f40b-1604-448a-9cb2-dbf4259940c8' 2>&1' with exit code '32': mount: /srv/dev-disk-by-uuid-b453f40b-1604-448a-9cb2-dbf4259940c8: mount(2) system call failed: Structure needs cleaning.

  • "the picture is missing the number in the sdb (1)"

    It is indeed missing the number.


    When I execute fdisk -l, I get this:


    Disk /dev/sda: 465.8 GiB, 500107862016 bytes, 976773168 sectors

    Disk model: Forty

    Units: sectors of 1 * 512 = 512 bytes

    Sector size (logical/physical): 512 bytes / 512 bytes

    I/O size (minimum/optimal): 512 bytes / 33553920 bytes

    Disklabel type: dos

    Disk identifier: 0x1e168b04


    Device Boot Start End Sectors Size Id Type

    /dev/sda1 8192 532479 524288 256M c W95 FAT32 (LBA)

    /dev/sda2 532480 976773167 976240688 465.5G 83 Linux



    The primary GPT table is corrupt, but the backup appears OK, so that will be used.

    Disk /dev/sdb: 7.3 TiB, 8001456963584 bytes, 15627845632 sectors

    Disk model: TR-004 DISK00

    Units: sectors of 1 * 512 = 512 bytes

    Sector size (logical/physical): 512 bytes / 4096 bytes

    I/O size (minimum/optimal): 4096 bytes / 4096 bytes

    Disklabel type: gpt

    Disk identifier: 0E5DD83B-7627-47B6-B69E-DD4C0163A147


    Device Start End Sectors Size Type

    /dev/sdb1 2048 15627845598 15627843551 7.3T Linux filesystem
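
    Since fdisk shows a partition /dev/sdb1 spanning the whole disk, I am wondering whether the filesystem actually lives on the partition rather than on the bare device. I assume something like this would show it without writing anything:

    # inspect the partition instead of the whole disk (read-only)
    blkid /dev/sdb1
    e2fsck -n /dev/sdb1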

  • Is this safe? In some posts they say it could wipe all files.

    fsck /dev/sdb


    fsck from util-linux 2.33.1

    e2fsck 1.44.5 (15-Dec-2018)

    ext2fs_check_desc: Corrupt group descriptor: bad block for block bitmap

    fsck.ext4: Group descriptors look bad... trying backup blocks...

    Block bitmap for group 32640 is not in group. (block 244187136)

    Relocate<y>? yes

    Inode bitmap for group 32640 is not in group. (block 1953480443)

    Relocate<y>? yes

    Inode table for group 32640 is not in group. (block 0)

    WARNING: SEVERE DATA LOSS POSSIBLE.


    I stopped at the last line. Should I continue?
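
    One thing I am considering before answering yes to anything after a "SEVERE DATA LOSS" warning: imaging the whole array to a spare disk first with GNU ddrescue (the target path below is made up; it would need at least 8 TB of free space):

    # clone the array before letting fsck write anything; the map file
    # lets ddrescue resume if it is interrupted
    # (/mnt/backup is only a placeholder for a big enough target disk)
    ddrescue /dev/sdb /mnt/backup/tr004.img /mnt/backup/tr004.map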
