ZFS directories missing after reboot

    • OMV 3.x


    • ZFS directories missing after reboot

      I rebooted my machine today. When it was back up and I SSH'd in, I was really shocked to see that most of the directories in the ZFS pool were gone. To my surprise, zpool status didn't report any errors.

      Source Code

      (general) root@vault:/# zpool status
        pool: pool
       state: ONLINE
        scan: scrub repaired 0 in 7h2m with 0 errors on Sun Sep 10 07:26:13 2017
      config:

              NAME                                           STATE  READ WRITE CKSUM
              pool                                           ONLINE    0     0     0
                raidz1-0                                     ONLINE    0     0     0
                  ata-WDC_WD20EARS-00MVWB0_WD-WCAZA5343082   ONLINE    0     0     0
                  ata-Hitachi_HDS5C3020ALA632_ML4230F33DSZXK ONLINE    0     0     0
                  ata-Hitachi_HDS5C3020ALA632_ML4230F33SEWDK ONLINE    0     0     0

      errors: No known data errors
      (general) root@vault:/#
      <EDIT: additional info>

      Pool details (zpool get all):

      NAME PROPERTY VALUE SOURCE
      pool size 5.44T -
      pool capacity 80% -
      pool altroot - default
      pool health ONLINE -
      pool guid 9235485620787079058 default
      pool version - default
      pool bootfs - default
      pool delegation on default
      pool autoreplace off default
      pool cachefile - default
      pool failmode wait default
      pool listsnapshots off default
      pool autoexpand off default
      pool dedupditto 0 default
      pool dedupratio 1.00x -
      pool free 1.07T -
      pool allocated 4.37T -
      pool readonly off -
      pool ashift 0 default
      pool comment - default
      pool expandsize - -
      pool freeing 0 default
      pool fragmentation 35% -
      pool leaked 0 default
      pool feature@async_destroy enabled local
      pool feature@empty_bpobj enabled local
      pool feature@lz4_compress active local
      pool feature@spacemap_histogram active local
      pool feature@enabled_txg active local
      pool feature@hole_birth active local
      pool feature@extensible_dataset enabled local
      pool feature@embedded_data active local
      pool feature@bookmarks enabled local
      pool feature@filesystem_limits enabled local
      pool feature@large_blocks enabled local

      Pool filesystem details (zfs get all):

      NAME PROPERTY VALUE SOURCE
      pool type filesystem -
      pool creation Mon May 8 22:38 2017 -
      pool used 2.91T -
      pool available 612G -
      pool referenced 2.91T -
      pool compressratio 1.00x -
      pool mounted no -
      pool quota none default
      pool reservation none default
      pool recordsize 128K default
      pool mountpoint /pool default
      pool sharenfs off default
      pool checksum on default
      pool compression off default
      pool atime on default
      pool devices on default
      pool exec on default
      pool setuid on default
      pool readonly off default
      pool zoned off default
      pool snapdir hidden default
      pool aclinherit restricted default
      pool canmount on default
      pool xattr on default
      pool copies 1 default
      pool version 5 -
      pool utf8only off -
      pool normalization none -
      pool casesensitivity sensitive -
      pool vscan off default
      pool nbmand off default
      pool sharesmb off default
      pool refquota none default
      pool refreservation none default
      pool primarycache all default
      pool secondarycache all default
      pool usedbysnapshots 0 -
      pool usedbydataset 2.91T -
      pool usedbychildren 26.6M -
      pool usedbyrefreservation 0 -
      pool logbias latency default
      pool dedup off default
      pool mlslabel none default
      pool sync standard default
      pool refcompressratio 1.00x -
      pool written 2.91T -
      pool logicalused 2.91T -
      pool logicalreferenced 2.91T -
      pool filesystem_limit none default
      pool snapshot_limit none default
      pool filesystem_count none default
      pool snapshot_count none default
      pool snapdev hidden default
      pool acltype off default
      pool context none default
      pool fscontext none default
      pool defcontext none default
      pool rootcontext none default
      pool relatime off default
      pool redundant_metadata all default
      pool overlay off default


      Now the web interface still reports the correct amount of data in the ZFS pool (2.91 TB used, 612 GB free), but the actual file structure appears to be empty:

      Source Code

      (general) root@vault:/# du -h /pool | tail -1
      32K /pool

      I really need some help here, please! I don't want to screw this up; obviously there's some important data on these drives.
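
      I also notice the zfs get all output above shows "mounted no" for the pool's root dataset, which I guess would explain an empty /pool even though zpool status is clean. Is something like this the right way to check and remount (just a sketch, assuming nothing was written into /pool while it was unmounted)?

      Source Code

      # Show the mount-related properties of every dataset in the pool
      zfs list -r -o name,mounted,canmount,mountpoint pool

      # Try to mount anything that isn't mounted yet; this is non-destructive and
      # should refuse to mount over a non-empty mountpoint directory
      zfs mount -a

      # Verify
      zfs mount
      df -h /pool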


    • Not sure if you're running into the same issue I've been having, but it sounds a lot like what you're seeing.

      So on my pool (DataTank) I have a dataset called "ESXI". This share is used by my ESXi homelab server (via NFS) to dump backup files etc.

      Whenever I reboot my OMV box, this particular dataset always fails to mount. The remaining four datasets in the pool mount just fine, just not this one. When I run "zpool status DataTank" everything seems "normal", but in the GUI the pool doesn't show up, and it looks as if all the files in that dataset are missing/deleted.

      What I found is that, for some reason, I have to go in and manually (via SSH) delete an empty folder at the spot where the mount usually takes place.

      As an example, my datasets look like this when I SSH into OMV:

      - /DataTank/ESXI/ESXI
      - /DataTank/MyUsers/MyUsers
      - /DataTank/TimeMachine/TimeMachine
      ...etc..

      So I have a /DataTank/ESXI/ESXI folder/mount location. The "2nd" ESXI is the sub-folder/mount point I delete. Once it's gone, ZFS will properly mount that dataset and I can then see/access the NFS share as expected.
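
      Roughly, the manual step looks like this for me (just a sketch of my workaround; DataTank/ESXI is the dataset name in my setup, and I use rmdir on purpose so nothing non-empty can be removed by accident):

      Source Code

      # Make sure the dataset really is NOT mounted before touching its mountpoint
      zfs get -H -o value mounted DataTank/ESXI

      # If that prints "no", remove the stray directory that blocks the mount.
      # rmdir only removes empty directories, so it can't delete real data.
      rmdir /DataTank/ESXI/ESXI

      # The mountpoint is empty again, so the dataset should mount normally now
      zfs mount DataTank/ESXI
      zfs list -o name,mounted,mountpoint DataTank/ESXI

      In theory the overlay property (it shows as "off" in your zfs get all output) would let ZFS mount on top of a non-empty directory, but that would only hide whatever created the stray folder rather than fix it.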

      What's really strange is that the other four datasets follow the same "pattern" with their folder structures/mount points but don't seem to have this mounting issue.

      My guess is that something is not being shut down "cleanly" on that particular mount/dataset, and there's some issue with it when booting back up.

      Not sure if this helps in any way but maybe...