Fixed the File System (RAID1) "Missing" problem after Upgrade to 4.1.17

  • Today I did the upgrade from 3.0.99 to the latest 4.1.17. The whole process went smoothly, but my RAID1 was missing from the file systems after upgrading. After searching through the whole forum, I couldn't find a solution.
    But I finally figured it out and made it work.


    If you have the same problem: when you execute the command


    Bash
    sudo blkid -p -o full /dev/md0



    and you get
    /dev/md0: ambivalent result (probably more filesystems on the device, use wipefs(8) to see more details)
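
    An aside of my own (not part of the original steps): the "ambivalent result" means blkid found several conflicting filesystem signatures on the device and refused to pick one, which is presumably why the filesystem no longer shows up even though the array itself is healthy. You can confirm the array assembled correctly with the standard mdadm tooling:

    Bash
    # Sanity check: confirm the RAID1 assembled fine before touching
    # any signatures on it.
    cat /proc/mdstat
    sudo mdadm --detail /dev/md0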



    you should run this command (don't worry: without any options, wipefs only lists the signatures it finds and will not harm any of your data)


    Bash
    sudo wipefs /dev/md0


    If more than one offset is listed, like this:

    offset         type
    ----------------------------------------------------------------
    0x3a37977f000  zfs_member   [filesystem]
    0x438          ext4         [filesystem]


    then you may have created some other file systems on these disks at some point (e.g. ZFS).


    For me, the second entry (the one at offset 0x438, ext4) is the correct information I need, so run the command


    Bash
    sudo wipefs --offset 0x438 --force --backup /dev/md0

    This command deletes the correct (ext4) meta information and creates a backup of it (located in $HOME, which is /root when run via sudo, with the filename wipefs-md0-0x00000438.bak).
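
    A verification step of my own, not from the original post: for ext4, the erased signature should be just the two magic bytes 0xEF53, which sit 0x38 bytes into the superblock that starts at byte 1024, hence 0x400 + 0x38 = 0x438. The backup file should therefore contain exactly those bytes:

    Bash
    # Dump the saved signature; for ext4 this should print the magic
    # 0xEF53 in little-endian byte order: "53 ef".
    sudo od -A n -t x1 /root/wipefs-md0-0x00000438.bak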
    Then erase all remaining signatures by running the command


    Bash
    sudo wipefs --all --force --backup /dev/md0

    Always remember to create a backup of the erased signatures (that is what --backup is for).
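
    A quick way to see everything that was backed up (my own addition, assuming the commands were run via sudo so the backups landed in /root):

    Bash
    # Each backup file encodes the device and the byte offset the
    # signature was erased from: wipefs-<device>-0x<offset>.bak
    ls -l /root/wipefs-md0-*.bak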



    Finally, I restored the correct signature to the device with the command


    Bash
    sudo dd if=/root/wipefs-md0-0x00000438.bak of=/dev/md0 seek=$((0x00000438)) bs=1 conv=notrunc

    Note the seek offset, which can be read straight from the backup's filename.
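
    Since the offset is encoded in the filename anyway, the restore generalizes nicely. This is a hypothetical helper of my own (restore_sig and its parsing of the wipefs-<dev>-0x<offset>.bak naming scheme are assumptions, not part of the thread):

    Bash
    # Hypothetical helper: restore any wipefs backup by extracting
    # the byte offset from its filename.
    restore_sig() {
        bak=$1 dev=$2
        off=${bak##*-}      # -> "0x00000438.bak"
        off=${off%.bak}     # -> "0x00000438"
        sudo dd if="$bak" of="$dev" seek=$((off)) bs=1 conv=notrunc
    }
    # usage: restore_sig /root/wipefs-md0-0x00000438.bak /dev/md0

    Afterwards, re-running sudo blkid -p -o full /dev/md0 should report a single ext4 filesystem instead of the ambivalent result.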




    My RAID1 storage finally came back. ^^ :D

    • Official Post

    After searching through the whole forum, I couldn't find a solution.

    There are a couple of threads that had some of it but not the part about backing it up, wiping everything, and restoring the stuff you need. Did you ever try just wiping the zfs signature with: wipefs -o 0x3a37977f000
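
    For completeness, a hedged spelling-out of that suggestion (the device argument and --backup are my additions; the quoted command omits them):

    Bash
    # The single-offset variant suggested above, as a full command:
    sudo wipefs --backup -o 0x3a37977f000 /dev/md0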


  • There are a couple of threads that had some of it but not the part about backing it up, wiping everything, and restoring the stuff you need. Did you ever try just wiping the zfs signature with: wipefs -o 0x3a37977f000


    Actually, there are lots of ZFS entries.
    After running sudo wipefs --all --force --backup /dev/md0, I got:


    /dev/md0: 8 bytes were erased at offset 0x3a37977d000 (zfs_member): 0c b1 ba 00 00 00 00 00
    /dev/md0: 8 bytes were erased at offset 0x3a37977c000 (zfs_member): 0c b1 ba 00 00 00 00 00
    /dev/md0: 8 bytes were erased at offset 0x3a37977b000 (zfs_member): 0c b1 ba 00 00 00 00 00
    /dev/md0: 8 bytes were erased at offset 0x3a37977a000 (zfs_member): 0c b1 ba 00 00 00 00 00
    /dev/md0: 8 bytes were erased at offset 0x3a379779000 (zfs_member): 0c b1 ba 00 00 00 00 00
    /dev/md0: 8 bytes were erased at offset 0x3a379778000 (zfs_member): 0c b1 ba 00 00 00 00 00
    /dev/md0: 8 bytes were erased at offset 0x3a379777000 (zfs_member): 0c b1 ba 00 00 00 00 00
    /dev/md0: 8 bytes were erased at offset 0x3a379776000 (zfs_member): 0c b1 ba 00 00 00 00 00
    /dev/md0: 8 bytes were erased at offset 0x3a379775000 (zfs_member): 0c b1 ba 00 00 00 00 00
    /dev/md0: 8 bytes were erased at offset 0x3a3797bf000 (zfs_member): 0c b1 ba 00 00 00 00 00
    /dev/md0: 8 bytes were erased at offset 0x3a3797be000 (zfs_member): 0c b1 ba 00 00 00 00 00
    /dev/md0: 8 bytes were erased at offset 0x3a3797bd000 (zfs_member): 0c b1 ba 00 00 00 00 00
    /dev/md0: 8 bytes were erased at offset 0x3a3797bc000 (zfs_member): 0c b1 ba 00 00 00 00 00
    /dev/md0: 8 bytes were erased at offset 0x3a3797bb000 (zfs_member): 0c b1 ba 00 00 00 00 00
    /dev/md0: 8 bytes were erased at offset 0x3a3797ba000 (zfs_member): 0c b1 ba 00 00 00 00 00
    /dev/md0: 8 bytes were erased at offset 0x3a3797b9000 (zfs_member): 0c b1 ba 00 00 00 00 00
    /dev/md0: 8 bytes were erased at offset 0x3a3797b8000 (zfs_member): 0c b1 ba 00 00 00 00 00
    /dev/md0: 8 bytes were erased at offset 0x3a3797b7000 (zfs_member): 0c b1 ba 00 00 00 00 00
    /dev/md0: 8 bytes were erased at offset 0x3a3797b6000 (zfs_member): 0c b1 ba 00 00 00 00 00
    /dev/md0: 8 bytes were erased at offset 0x3a3797b5000 (zfs_member): 0c b1 ba 00 00 00 00 00
    /dev/md0: 8 bytes were erased at offset 0x3a3797b4000 (zfs_member): 0c b1 ba 00 00 00 00 00
    /dev/md0: 8 bytes were erased at offset 0x3a3797b1000 (zfs_member): 0c b1 ba 00 00 00 00 00
    /dev/md0: 8 bytes were erased at offset 0x3a3797b0000 (zfs_member): 0c b1 ba 00 00 00 00 00

    • Official Post

    Actually, there are lots of ZFS entries.

    We saw that in one of the threads as well. I think if you wipefs -o each of the offsets, it would fix it. You would just have to get the next offset with wipefs -n after each wipefs -o. No idea why wipefs -n doesn't list all of the ZFS offsets. Either way, I think it might work. I was just curious whether you had tried that method and it didn't work.
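
    A sketch of my own of that iterative approach (not something either poster ran): it re-scans after every wipe, since wipefs seems to report only one ZFS offset per run, and it parses the two-column offset/type listing shown earlier in the thread. Incidentally, the erased bytes 0c b1 ba 00 00 00 00 00 read little-endian are 0x00bab10c, the ZFS uberblock magic, which would explain the many identical entries.

    Bash
    # Wipe zfs_member signatures one offset at a time, re-scanning
    # after each wipe; --backup keeps a copy of every erased signature.
    # Parses the "offset type" listing that plain `wipefs` prints.
    while :; do
        off=$(sudo wipefs /dev/md0 | awk '$2 == "zfs_member" { print $1; exit }')
        [ -z "$off" ] && break   # no more ZFS signatures found
        sudo wipefs --backup --offset "$off" /dev/md0
    done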

