Fixed the File System (RAID1) "Missing" problem after Upgrade to 4.1.17

    • Today, I did the upgrade from 3.0.99 to the latest 4.1.17. The whole process went smoothly, but my RAID1 was missing from the file system after upgrading. After searching through the whole forum, I couldn't find a solution.
      But I finally figured it out and made it work.

      If you have the same problem, first execute the command

      Shell-Script

      sudo blkid -p -o full /dev/md0



      and if you get
      /dev/md0: ambivalent result (probably more filesystems on the device, use wipefs(8) to see more details)


      then you should run the following command (don't worry, wipefs without options only lists existing signatures and will not harm any of your data)

      Shell-Script

      sudo wipefs /dev/md0


      If it lists more than one offset, like this:
      offset type
      ----------------------------------------------------------------
      0x3a37977f000 zfs_member [filesystem]

      0x438 ext4 [filesystem]

      then you may have created some other file systems on these disks at some point (e.g. ZFS).
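
      If you are not sure which of the listed signatures belongs to the filesystem you actually use, one way to check (a minimal sketch, assuming the real filesystem is ext4 like mine) is to ask the ext4 tools whether they can read a superblock from the device; this is read-only and changes nothing:

      Shell-Script

      # prints the ext4 superblock header if a valid ext4 filesystem
      # is present on /dev/md0, or fails with an error if not
      sudo dumpe2fs -h /dev/md0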

      For me, the second one (the offset at 0x438) holds the correct information I need, so run the command

      Shell-Script

      sudo wipefs --offset 0x438 --force --backup /dev/md0
      This command erases the very signature we want to keep, but creates a backup of it first (saved in $HOME, which is /root here, with the filename wipefs-md0-0x00000438.bak), so we can restore it after wiping everything.
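
      You can sanity-check the backup before touching anything else. For an ext4 signature, the backup should contain just the two magic bytes 53 ef (the ext4 superblock magic 0xEF53, stored little-endian):

      Shell-Script

      # the backup holds exactly the bytes wipefs erased
      ls -l /root/wipefs-md0-0x00000438.bak
      od -A x -t x1 /root/wipefs-md0-0x00000438.bak
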
      Then, erase all remaining signatures by running the command

      Shell-Script

      sudo wipefs --all --force --backup /dev/md0
      Always remember to create a backup of the erased signatures.


      Finally, I restored the correct signature to the device with the command

      Shell-Script

      sudo dd if=/root/wipefs-md0-0x00000438.bak of=/dev/md0 seek=$((0x00000438)) bs=1 conv=notrunc
      Note the seek offset, which can be read from the name of the backup file.
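
      After the restore, you can re-run the probe from the beginning to confirm the fix; if everything worked, the ambivalent result error is gone and only the ext4 signature is reported:

      Shell-Script

      sudo blkid -p -o full /dev/md0
      # expected: a single TYPE="ext4" entry instead of the ambivalent result error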



      My RAID1 storage finally came back. ^^ :D
    • shannonc wrote:

      After searching through the whole forum, I couldn't find a solution.
      There are a couple of threads that had some of it, but not the part about backing it up, wiping everything, and restoring the part you need. Did you ever try just wiping the zfs signature with: wipefs -o 0x3a37977f000?
    • ryecoaaron wrote:

      shannonc wrote:

      After searching through the whole forum, I couldn't find a solution.
      There are a couple of threads that had some of it but not the part about backing it up, wiping everything, and restoring the stuff you need. Did you ever try just wiping the zfs signature with: wipefs -o 0x3a37977f000

      Actually, there are lots of zfs entries.
      After running sudo wipefs --all --force --backup /dev/md0, I got:

      /dev/md0: 8 bytes were erased at offset 0x3a37977d000 (zfs_member): 0c b1 ba 00 00 00 00 00
      /dev/md0: 8 bytes were erased at offset 0x3a37977c000 (zfs_member): 0c b1 ba 00 00 00 00 00
      /dev/md0: 8 bytes were erased at offset 0x3a37977b000 (zfs_member): 0c b1 ba 00 00 00 00 00
      /dev/md0: 8 bytes were erased at offset 0x3a37977a000 (zfs_member): 0c b1 ba 00 00 00 00 00
      /dev/md0: 8 bytes were erased at offset 0x3a379779000 (zfs_member): 0c b1 ba 00 00 00 00 00
      /dev/md0: 8 bytes were erased at offset 0x3a379778000 (zfs_member): 0c b1 ba 00 00 00 00 00
      /dev/md0: 8 bytes were erased at offset 0x3a379777000 (zfs_member): 0c b1 ba 00 00 00 00 00
      /dev/md0: 8 bytes were erased at offset 0x3a379776000 (zfs_member): 0c b1 ba 00 00 00 00 00
      /dev/md0: 8 bytes were erased at offset 0x3a379775000 (zfs_member): 0c b1 ba 00 00 00 00 00
      /dev/md0: 8 bytes were erased at offset 0x3a3797bf000 (zfs_member): 0c b1 ba 00 00 00 00 00
      /dev/md0: 8 bytes were erased at offset 0x3a3797be000 (zfs_member): 0c b1 ba 00 00 00 00 00
      /dev/md0: 8 bytes were erased at offset 0x3a3797bd000 (zfs_member): 0c b1 ba 00 00 00 00 00
      /dev/md0: 8 bytes were erased at offset 0x3a3797bc000 (zfs_member): 0c b1 ba 00 00 00 00 00
      /dev/md0: 8 bytes were erased at offset 0x3a3797bb000 (zfs_member): 0c b1 ba 00 00 00 00 00
      /dev/md0: 8 bytes were erased at offset 0x3a3797ba000 (zfs_member): 0c b1 ba 00 00 00 00 00
      /dev/md0: 8 bytes were erased at offset 0x3a3797b9000 (zfs_member): 0c b1 ba 00 00 00 00 00
      /dev/md0: 8 bytes were erased at offset 0x3a3797b8000 (zfs_member): 0c b1 ba 00 00 00 00 00
      /dev/md0: 8 bytes were erased at offset 0x3a3797b7000 (zfs_member): 0c b1 ba 00 00 00 00 00
      /dev/md0: 8 bytes were erased at offset 0x3a3797b6000 (zfs_member): 0c b1 ba 00 00 00 00 00
      /dev/md0: 8 bytes were erased at offset 0x3a3797b5000 (zfs_member): 0c b1 ba 00 00 00 00 00
      /dev/md0: 8 bytes were erased at offset 0x3a3797b4000 (zfs_member): 0c b1 ba 00 00 00 00 00
      /dev/md0: 8 bytes were erased at offset 0x3a3797b1000 (zfs_member): 0c b1 ba 00 00 00 00 00
      /dev/md0: 8 bytes were erased at offset 0x3a3797b0000 (zfs_member): 0c b1 ba 00 00 00 00 00
    • shannonc wrote:

      Actually, there are lots of zfs entries.
      We saw that in one of the threads as well. I think if you wipefs -o each of the offsets, it would fix it. You would just have to get the next offset with wipefs -n after each wipefs -o; no idea why wipefs -n doesn't list all of the zfs offsets at once. Either way, I think it might work. I was just curious whether you tried that method and it didn't work.
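
      For reference, a minimal sketch of that loop, assuming the first two whitespace-separated fields of the wipefs -n output are the offset and the type, as in the listing above (adjust the awk fields if your util-linux version prints a different column layout):

      Shell-Script

      # repeatedly wipe the first zfs_member signature that wipefs -n
      # (--no-act) reports, backing up each erased signature, until none remain
      while offset=$(sudo wipefs -n /dev/md0 | awk '$2 == "zfs_member" {print $1; exit}') \
            && [ -n "$offset" ]; do
          sudo wipefs --backup -o "$offset" /dev/md0
      done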