Pool is DEGRADED

    • OMV 4.x
    • I would wait. What kind of setup are you running that lets you "replug" your drives?

      Are the drives in an older server chassis?

      Video Guides | New User Guide | Docker Guides | Pi-hole in Docker
      Good backup takes the "drama" out of computing.
      ____________________________________
      Primary: OMV 4.1.17, ThinkServer TS140, 12GB ECC, 16GB USB boot, 4TB+4TB zmirror, 3TB client backup.
      OMV 4.1.17, Intel Server SC5650HCBRP, 32GB ECC, 16GB USB boot, UnionFS+SNAPRAID
      Backup: OMV 4.1.9, Acer RC-111, 4GB, 32GB USB boot, 3TB+3TB zmirror, 4TB Rsync'ed disk
    • I replugged the hardware device.

      Every disk is OK now,

      but I don't see the total size...



      Source Code

      Pool status (zpool status):

        pool: Pool1
       state: ONLINE
      status: One or more devices has experienced an unrecoverable error. An
              attempt was made to correct the error. Applications are unaffected.
      action: Determine if the device needs to be replaced, and clear the errors
              using 'zpool clear' or replace the device with 'zpool replace'.
         see: http://zfsonlinux.org/msg/ZFS-8000-9P
        scan: resilvered 1.71T in 5h52m with 0 errors on Sat Jan 19 20:36:09 2019
      config:

        NAME                                          STATE     READ WRITE CKSUM
        Pool1                                         ONLINE       0     0     0
          raidz2-0                                    ONLINE       0     0     0
            ata-ST2000VN000-1HJ164_W520JHC9           ONLINE       0     0     3
            ata-ST2000VN000-1HJ164_W520JHGD           ONLINE       0     0     0
            ata-ST2000VN000-1HJ164_W520JGXY           ONLINE       0     0     0
            ata-WDC_WD20EFRX-68AX9N0_WD-WMC301013769  ONLINE       0     0     0
            ata-WDC_WD20EFRX-68AX9N0_WD-WMC301090495  ONLINE       0     0     0
            ata-WDC_WD20EFRX-68AX9N0_WD-WMC301111420  ONLINE       0     0     0

      errors: No known data errors

      Pool details (zpool get all):

      NAME   PROPERTY                       VALUE                SOURCE
      Pool1  size                           10.9T                -
      Pool1  capacity                       86%                  -
      Pool1  altroot                        -                    default
      Pool1  health                         ONLINE               -
      Pool1  guid                           6052818310017031301  -
      Pool1  version                        -                    default
      Pool1  bootfs                         -                    default
      Pool1  delegation                     on                   default
      Pool1  autoreplace                    off                  default
      Pool1  cachefile                      -                    default
      Pool1  failmode                       wait                 default
      Pool1  listsnapshots                  off                  default
      Pool1  autoexpand                     off                  default
      Pool1  dedupditto                     0                    default
      Pool1  dedupratio                     1.00x                -
      Pool1  free                           1.44T                -
      Pool1  allocated                      9.43T                -
      Pool1  readonly                       off                  -
      Pool1  ashift                         0                    default
      Pool1  comment                        -                    default
      Pool1  expandsize                     -                    -
      Pool1  freeing                        0                    -
      Pool1  fragmentation                  33%                  -
      Pool1  leaked                         0                    -
      Pool1  multihost                      off                  default
      Pool1  feature@async_destroy          enabled              local
      Pool1  feature@empty_bpobj            enabled              local
      Pool1  feature@lz4_compress           active               local
      Pool1  feature@multi_vdev_crash_dump  disabled             local
      Pool1  feature@spacemap_histogram     active               local
      Pool1  feature@enabled_txg            active               local
      Pool1  feature@hole_birth             active               local
      Pool1  feature@extensible_dataset     enabled              local
      Pool1  feature@embedded_data          active               local
      Pool1  feature@bookmarks              enabled              local
      Pool1  feature@filesystem_limits      disabled             local
      Pool1  feature@large_blocks           disabled             local
      Pool1  feature@large_dnode            disabled             local
      Pool1  feature@sha512                 disabled             local
      Pool1  feature@skein                  disabled             local
      Pool1  feature@edonr                  disabled             local
      Pool1  feature@userobj_accounting     disabled             local

      Pool filesystem details (zfs get all):

      NAME   PROPERTY              VALUE                 SOURCE
      Pool1  type                  filesystem            -
      Pool1  creation              Wed Jul 1 11:16 2015  -
      Pool1  used                  6.28T                 -
      Pool1  available             752G                  -
      Pool1  referenced            6.28T                 -
      Pool1  compressratio         1.00x                 -
      Pool1  mounted               yes                   -
      Pool1  quota                 none                  default
      Pool1  reservation           none                  default
      Pool1  recordsize            128K                  default
      Pool1  mountpoint            /mnt                  local
      Pool1  sharenfs              off                   default
      Pool1  checksum              on                    default
      Pool1  compression           off                   default
      Pool1  atime                 on                    default
      Pool1  devices               on                    default
      Pool1  exec                  on                    default
      Pool1  setuid                on                    default
      Pool1  readonly              off                   default
      Pool1  zoned                 off                   default
      Pool1  snapdir               hidden                default
      Pool1  aclinherit            restricted            default
      Pool1  createtxg             1                     -
      Pool1  canmount              on                    default
      Pool1  xattr                 on                    default
      Pool1  copies                1                     default
      Pool1  version               5                     -
      Pool1  utf8only              off                   -
      Pool1  normalization         none                  -
      Pool1  casesensitivity       sensitive             -
      Pool1  vscan                 off                   default
      Pool1  nbmand                off                   default
      Pool1  sharesmb              off                   default
      Pool1  refquota              none                  default
      Pool1  refreservation        none                  default
      Pool1  guid                  6341434178132475291   -
      Pool1  primarycache          all                   default
      Pool1  secondarycache        all                   default
      Pool1  usedbysnapshots       0B                    -
      Pool1  usedbydataset         6.28T                 -
      Pool1  usedbychildren        49.9M                 -
      Pool1  usedbyrefreservation  0B                    -
      Pool1  logbias               latency               default
      Pool1  dedup                 off                   default
      Pool1  mlslabel              none                  default
      Pool1  sync                  standard              default
      Pool1  dnodesize             legacy                default
      Pool1  refcompressratio      1.00x                 -
      Pool1  written               6.28T                 -
      Pool1  logicalused           6.28T                 -
      Pool1  logicalreferenced     6.28T                 -
      Pool1  volmode               default               default
      Pool1  filesystem_limit      none                  default
      Pool1  snapshot_limit        none                  default
      Pool1  filesystem_count      none                  default
      Pool1  snapshot_count        none                  default
      Pool1  snapdev               hidden                default
      Pool1  acltype               off                   default
      Pool1  context               none                  default
      Pool1  fscontext             none                  default
      Pool1  defcontext            none                  default
      Pool1  rootcontext           none                  default
      Pool1  relatime              off                   default
      Pool1  redundant_metadata    all                   default
      Pool1  overlay               off                   default
      Pool1  omvzfsplugin:uuid     2c202288-a1c5-4d23-a42e-deaa04f2b5b2  local

      How can I see the total size that's actually available in my pool?

      Thanks.


    • .H. wrote:

      but I don't see the total size...

      .H. wrote:

      errors: No known data errors

      Pool details (zpool get all):

      NAME   PROPERTY  VALUE  SOURCE
      Pool1  size      10.9T  -

      The total size is right there in your output: 10.9T (raw, before parity).

    • I'm running a ZFS mirror, not RAIDZ, so I don't have the parity disk subtraction. (See the sketch below.)

      Did you have more space before the disk resilver?
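
      For comparison, a hypothetical sketch (not commands from this thread; the pool and device names are examples):

      Source Code

      # Mirror: usable space is one drive's worth; the second drive is the copy
      zpool create tank mirror sda sdb                    # 2 x 4T -> ~4T usable

      # RAIDZ2: usable space is (N - 2) drives' worth; two drives go to parity
      zpool create tank raidz2 sda sdb sdc sdd sde sdf    # 6 x 2T -> ~7T usable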

    • I believe @hoppel118 is running a RAIDZ ZFS array. Maybe he'll look at this.

    • flmaxey wrote:

      I believe @hoppel118 is running a RAIDZ,
      Me too. :) I am running a RAIDZ1 array.

      The pool size reported by "zpool get all" is the raw size; the parity disk(s) are not subtracted from it.
      You can try zfs list instead. It outputs the used and available space of the pool, which should match what the ZFS plugin in OMV shows. See the sketch below.

      You can calculate it here: ZFS / RAIDZ Capacity Calculator
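
      For illustration, this is roughly how the two views differ on your pool (a sketch built from the figures in your own output; exact columns can vary with the ZFS version):

      Source Code

      # Raw pool size, parity included
      zpool list -o name,size,allocated,free,capacity,health Pool1
      NAME    SIZE  ALLOC   FREE   CAP  HEALTH
      Pool1  10.9T  9.43T  1.44T   86%  ONLINE

      # Usable space, parity excluded
      zfs list Pool1
      NAME    USED  AVAIL  REFER  MOUNTPOINT
      Pool1  6.28T   752G  6.28T  /mnt
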
      OMV 3.0.90 (Gray style)
      ASRock Rack C2550D4I - 16GB ECC - 6x WD RED 3TB (ZFS 2x3 Striped RaidZ1)- Fractal Design Node 304
    • I don't see the 10T.

      I have 6 × 2T drives.

      On RAIDZ2, 7T seemed too small to me...

      ...but when I look at the ZFS calculator,

      7T is the max size?!
      Images
      • IMG_20190120_150149__01.jpg (2,304×543)
      • Capture d’écran 2019-01-20 à 15.28.01.png (892×489)


    • You have 6 drives with roughly 1.82T each. You have to subtract 2 of those drives for parity (RAIDZ2).

      That leaves 4 drives at 1.82T each = 7.28T in raw storage. A bit more must be subtracted for ZFS overhead, checksums, etc., so 7T is about right.


      .H. wrote:

      Pool1 used 6.28T -
      Pool1 available 752G

      You appear to have 6.28T of data on the array, with 752G remaining:

      7T (capacity) - 6.28T (data) ≈ 720G (remaining empty space)
      That is very close to what you have; see the sketch below.
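
      To put the same arithmetic in command form (a quick sketch with bc; 1.82T is the approximate formatted capacity of a 2TB drive):

      Source Code

      # RAIDZ2 keeps two disks' worth of parity, so 4 of the 6 disks hold data
      echo "4 * 1.82" | bc    # 7.28T raw; roughly 7T usable after ZFS overhead
      echo "7.0 - 6.28" | bc  # .72T (~720G) free, close to the 752G zfs reports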

    • flmaxey wrote:

      I believe @hoppel118 is running a RAIDZ ZFS array. Maybe he'll look at this.


      Ok, so it’s solved now. Great, thanks @flmaxey! ;)
      ---------------------------------------------------------------------------------------------------------------
      frontend software - tvos | android tv | libreelec | win10 | kodi krypton
      frontend hardware - appletv 4k | nvidia shield tv | odroid c2 | yamaha rx-a1020 | quadral chromium style 5.1 | samsung le40-a789r2
      -------------------------------------------
      backend software - debian | openmediavault | latest backport kernel | zfs raid-z2 | docker | emby | unifi | vdr | tvheadend | fhem
      backend hardware - supermicro x11ssh-ctf | xeon E3-1240L-v5 | 64gb ecc | 8x10tb wd red | digital devices max s8
      ---------------------------------------------------------------------------------------------------------------------------------------