Upgrade to Arrakis "killed" my data disk...

    • OMV 4.x
    • Resolved
    • Upgrade 3.x -> 4.x


    • Upgrade to Arrakis "killed" my data disk...

      Hi,

      No, the data disk is not actually killed; the data is still there and can be mounted manually. Just as a foreword...

      The upgrade itself went fine, but strange things are happening with the data drive. It is found under Disks, fine:


      But looking at file systems shows...


      Disk usage is completely empty:

      Completely missing here:

      Source Code

      root@omv:~# blkid
      /dev/sdb1: UUID="47ab5528-bac0-4715-a145-4860c4333093" TYPE="ext4" PARTUUID="0003ab15-01"
      /dev/sdb5: UUID="d33f1c6e-d56c-4f70-9161-1fe359b59419" TYPE="swap" PARTUUID="0003ab15-05"



      Things that may be helpful:

      Source Code

      root@omv:~# cat /etc/fstab
      # /etc/fstab: static file system information.
      #
      # Use 'blkid' to print the universally unique identifier for a
      # device; this may be used with UUID= as a more robust way to name devices
      # that works even if disks are added and removed. See fstab(5).
      #
      # <file system> <mount point> <type> <options> <dump> <pass>
      # / was on /dev/sda1 during installation
      UUID=47ab5528-bac0-4715-a145-4860c4333093 / ext4 errors=remount-ro 0 1
      # swap was on /dev/sda5 during installation
      UUID=d33f1c6e-d56c-4f70-9161-1fe359b59419 none swap sw 0 0
      /dev/sdc1 /media/usb0 auto rw,user,noauto 0 0
      # >>> [openmediavault]
      # UUID=4a255c48-12f0-43da-8e6e-c0f650dfe8b7 /media/4a255c48-12f0-43da-8e6e-c0f650dfe8b7 ext4 defaults,nofail,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl 0 2
      # <<< [openmediavault]
      tmpfs /tmp tmpfs defaults 0 0


      Source Code

      root@omv:~# mount
      sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
      proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
      udev on /dev type devtmpfs (rw,nosuid,relatime,size=1470096k,nr_inodes=191142,mode=755)
      devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
      tmpfs on /run type tmpfs (rw,nosuid,noexec,relatime,size=296820k,mode=755)
      /dev/sdb1 on / type ext4 (rw,relatime,errors=remount-ro)
      securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
      tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
      tmpfs on /run/lock type tmpfs (rw,nosuid,nodev,noexec,relatime,size=5120k)
      tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,mode=755)
      cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/lib/systemd/systemd-cgroups-agent,name=systemd)
      pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
      cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpu,cpuacct)
      cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls,net_prio)
      cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
      cgroup on /sys/fs/cgroup/rdma type cgroup (rw,nosuid,nodev,noexec,relatime,rdma)
      cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset)
      cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
      cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
      cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,pids)
      cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event)
      cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
      systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=30,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=12497)
      hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,pagesize=2M)
      debugfs on /sys/kernel/debug type debugfs (rw,relatime)
      mqueue on /dev/mqueue type mqueue (rw,relatime)
      sunrpc on /run/rpc_pipefs type rpc_pipefs (rw,relatime)
      nfsd on /proc/fs/nfsd type nfsd (rw,relatime)
      tmpfs on /tmp type tmpfs (rw,relatime)

      Anything else needed?
      Can someone please help me out of this? Thanks in advance...

      Edit: And I receive mails like:

      Source Code

      Status failed Service mountpoint_media_4a255c48-12f0-43da-8e6e-c0f650dfe8b7
      Date: Tue, 16 Apr 2019 08:57:43
      Action: alert
      Host: omv
      Description: status failed (1) -- /media/4a255c48-12f0-43da-8e6e-c0f650dfe8b7 is not a mountpoint
      Your faithful employee,
      Monit
      --
      Get a Rose Tattoo...

      HP t5740 with Expansion and USB3, Inateck Case w/ 3TB WD-Green
      OMV 4.1.22-1 Arrakis i386|4.19.0-0.bpo.4-686-pae
    • This is a known issue. There seems to be some issue with RAID or filesystem metadata created on Debian 8 (maybe 7, too) that is not read correctly on Debian 9, so blkid cannot detect the RAID, including the filesystem on it. The problem is that this does not always happen, which means there is some relation to the hardware and/or the OS/kernel OMV 3 was running on when the RAID was created.

      I was not able to reproduce this behaviour on test machines and my production system.

      Other people who were affected by this copied the data to a second disk and recreated the RAID from scratch.
      Absolutely no support through PM!

      I must not fear.
      Fear is the mind-killer.
      Fear is the little-death that brings total obliteration.
      I will face my fear.
      I will permit it to pass over me and through me.
      And when it has gone past I will turn the inner eye to see its path.
      Where the fear has gone there will be nothing.
      Only I will remain.

      Litany against fear by Bene Gesserit
    • Hi Volker,

      thanks for the quick answer. Interesting, since I never created a RAID. I did some investigation and found that

      Source Code

      root@omv:~# file -s /dev/sda1
      /dev/sda1: Linux rev 1.0 ext4 filesystem data, UUID=4a255c48-12f0-43da-8e6e-c0f650dfe8b7, volume name "Gummiplatte" (extents) (large files) (huge files)

      finds all that is needed, I think...

      But

      Source Code

      tune2fs /dev/sda1 -U 4a255c48-12f0-43da-8e6e-c0f650dfe8b7

      seems to do something, but blkid does not show anything afterwards...

      I will copy the disk first and then play around; there must be an easier way, I think.
      --
      Get a Rose Tattoo...

      HP t5740 with Expansion and USB3, Inateck Case w/ 3TB WD-Green
      OMV 4.1.22-1 Arrakis i386|4.19.0-0.bpo.4-686-pae
    • New

      ananas, that looks promising (the partition is shown as a ZFS member in wipefs).
      Can't go further yet, still copying data from the drive...

      Question in advance: where and how do I download util-linux 2.32?
      Won't it interfere with the installed wipefs?

      Tia
      --
      Get a Rose Tattoo...

      HP t5740 with Expansion and USB3, Inateck Case w/ 3TB WD-Green
      OMV 4.1.22-1 Arrakis i386|4.19.0-0.bpo.4-686-pae
    • New

      If I remember correctly ...
      You need to have "build-essential" installed.

      util-linux is available here: mirrors.edge.kernel.org/pub/linux/utils/util-linux/
      Download, extract, ./configure, make but do not "make install"
      It was running from the directory where I had compiled it.
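
      In case it helps, a rough sketch of those steps; the exact tarball name and the v2.32 directory are assumptions, so check the mirror listed above for what is actually there, and this assumes build-essential plus the usual configure dependencies are installed:

      Source Code

      # fetch and unpack a newer util-linux (2.32 taken as an example)
      wget https://mirrors.edge.kernel.org/pub/linux/utils/util-linux/v2.32/util-linux-2.32.tar.xz
      tar -xf util-linux-2.32.tar.xz
      cd util-linux-2.32

      # build it, but do NOT run "make install"
      ./configure
      make

      # run the freshly built wipefs straight from the build directory
      ./wipefs -n /dev/sdX1     # /dev/sdX1 is a placeholder for the affected partition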

      But as other users have reported, the wipefs that ships with Debian seems to have done the trick too.
      My issue with the "onboard" wipefs was that it reported just ONE zfs signature, whereas the one I had compiled showed all zfs signatures.
      Maybe you can get along with version 2.29.2.
      Anyway, do "man wipefs" and read carefully!

      repeat
      1. "wipefs -n" to list signatures
      2. "wipefs -o <offset reported by wipefs in step 1> -t zfs" to get rid of ONE zfs signature
      until there are no more zfs signatures listed (see the loop sketch below)

      My filesystem got mounted after deleting the last zfs signature.
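
      Put together, the two steps above can be run as a small loop. This is only a sketch: it assumes the offset lines of "wipefs -n" start with 0x and carry "zfs" in the type column, /dev/sdX1 is a placeholder for the affected (unmounted) data partition, and it has to be run as root:

      Source Code

      dev="/dev/sdX1"   # placeholder: the affected data partition, unmounted
      # erase the zfs signatures one by one until wipefs -n no longer lists any
      while offset=$(wipefs -n "$dev" | grep ^0x | grep -i zfs | head -n 1 | awk '{print $1}') && [ -n "$offset" ]; do
          echo "erasing zfs signature at offset $offset"
          wipefs -o "$offset" -t zfs "$dev"
      done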

      Good luck,
      Thomas
    • New

      ananas, you're the man!

      After copying had finished, I unmounted the partition and tried the onboard wipefs.
      It kinda worked; it showed 30 (!) different offsets. Deleted them all and everything is back again, including disk usage.

      Thanks so much!

      Alas, this should be pinned for everyone who is in this trouble...
      --
      Get a Rose Tattoo...

      HP t5740 with Expansion and USB3, Inateck Case w/ 3TB WD-Green
      OMV 4.1.22-1 Arrakis i386|4.19.0-0.bpo.4-686-pae
    • New

      Dropkick Murphy wrote:

      this should be pinned for everyone who is in this trouble...
      I don't recommend this for most users. Not only do you install a newer version not intended for your system, but it also won't get updates. This could easily be scripted to work with the existing version of wipefs.

      Dropkick Murphy wrote:

      It kinda worked; it showed 30 (!) different offsets. Deleted them all and everything is back again, including disk usage.
      Do you have the output from when it showed the 30 offsets? That would let me write a one-liner to wipe everything instead of updating util-linux.
      omv 4.1.22 arrakis | 64 bit | 4.15 proxmox kernel | omvextrasorg 4.1.15
      omv-extras.org plugins source code and issue tracker - github

      Please read this before posting a question and this and this for docker questions.
      Please don't PM for support... Too many PMs!
    • New

      ryecoaaron wrote:

      Dropkick Murphy wrote:

      this should be pinned for everyone who is in this trouble...
      I don't recommend this for most users. Not only do you install a newer version not intended for your system, but it also won't get updates. This could easily be scripted to work with the existing version of wipefs.

      Dropkick Murphy wrote:

      It kinda worked; it showed 30 (!) different offsets. Deleted them all and everything is back again, including disk usage.
      Do you have the output from when it showed the 30 offsets? That would let me write a one-liner to wipe everything instead of updating util-linux.
      No, I didn't install anything...


      It didn't show the 30 offsets all at once. It shows them one by one. You have to copy each hex offset separately reported by

      Source Code

      wipefs -n

      into

      Source Code

      wipefs -o <offset reported by wipefs in step 1> -t zfs

      until no more ZFS offsets are shown. That's all. Nothing to install 8o
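
      For illustration, one round of that could look like this; the offset 0x76d8a0000 is made up purely as an example, use whatever your own wipefs -n reports:

      Source Code

      root@omv:~# wipefs -n /dev/sda1                      # note the offset of the zfs line
      root@omv:~# wipefs -o 0x76d8a0000 -t zfs /dev/sda1   # erase exactly that one zfs signature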
      --
      Get a Rose Tattoo...

      HP t5740 with Expansion and USB3, Inateck Case w/ 3TB WD-Green
      OMV 4.1.22-1 Arrakis i386|4.19.0-0.bpo.4-686-pae
    • New

      Dropkick Murphy wrote:

      What the hell is going on, #9 is under moderation? Why?
      Post is kind of long, quite a few quotes, and edited. The filter is touchy.

      henfri wrote:

      Interfere, even if he did not execute make install?
      Some people probably will execute make install.
      make install just moves it to a directory in your path. Doesn't mean it won't be used.
      util-linux is more than just wipefs.
      If you build it, you will probably use it more than once but never update it.
      It is untested.
      Avoids filing a bug report.
      No need since the installed wipefs will work. Just needs to be run more than once.

      One-liner to do that (run as root, not via sudo) - it will wipe every signature on the entire disk:

      disk="/dev/sdX"; while wipefs -n ${disk} | grep ^0x; do echo "wipe"; wipefs -a ${disk}; done
      omv 4.1.22 arrakis | 64 bit | 4.15 proxmox kernel | omvextrasorg 4.1.15
      omv-extras.org plugins source code and issue tracker - github

      Please read this before posting a question and this and this for docker questions.
      Please don't PM for support... Too many PMs!


    • New

      ananas wrote:

      That is no good idea or just a typo ?
      As you can see in my one-liner, it is using the -a flag. If you are just trying to clear some signatures and keep the data, the -a flag is bad.

      There is a "bug" in wipefs in util-linux 2.29 that only "sees" one signature at a time. So, wipefs -a only wipes the one it sees. If you run it again, it will find another one and will be able to wipe that. I haven't been able to create a disk with more than one signature. So, I'm not quite sure what causes this but it is fixed in 2.32+.
      omv 4.1.22 arrakis | 64 bit | 4.15 proxmox kernel | omvextrasorg 4.1.15
      omv-extras.org plugins source code and issue tracker - github

      Please read this before posting a question and this and this for docker questions.
      Please don't PM for support... Too many PMs!