[HOWTO] Install ZFS-Plugin & use ZFS on OMV

    • OMV 1.0

    • ryecoaaron wrote:

      Charlo wrote:

      The problem is probably that the dpkg-dev package is not a dependency of the zfs-dpkg package in the debian repositories.
      The command fails even if the package is installed. Therefore, adding it as a dependency would not help.
      By installing only the "dpkg-dev" package beforehand, on a clean OMV install in a VM updated to the backports 4.16 kernel, I can install the "openmediavault-zfs" package without the error about the missing "dpkg-architecture".

      The installation is performed with the zfs 0.7.9 packages just released from the debian backports repositories.
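
      In short, the workaround boils down to the following (a minimal sketch; it assumes the stretch-backports repository is already enabled, and the exact invocation may differ slightly from the attached log):

      Shell-Script

      # Install dpkg-dev first so that dpkg-architecture is available, then the plugin.
      apt-get update
      apt-get install -y dpkg-dev            # provides dpkg-architecture
      apt-get install -y openmediavault-zfs  # pulls the zfs 0.7.9 modules from backports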

      The OMV-Extras.org Testing repository is disabled.
      The only additional repository I have activated is the debian contrib which does not affect the zfs installation anyway.

      Subsequent compilation errors (missing kmod spl devel) remain, but the process still finishes with a working zfs.
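
      To confirm that zfs really works despite those messages, a quick check like this is enough (standard commands, nothing specific to my setup):

      Shell-Script

      # Verify that the spl/zfs modules were actually built for the running kernel and load fine.
      dkms status                  # should list spl and zfs as installed for the current kernel
      modprobe zfs && zpool status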

      @ryecoaaron: I'm wondering when it will be possible to release version 4.0.3 of openmediavault-zfs which is now in testing.
      Thank you

      I attach the command line log.
      ZFS-install-cmd-log.txt
    • Charlo wrote:

      By installing only the "dpkg-dev" package beforehand, on a clean OMV install in a VM updated to the backports 4.16 kernel, I can install the "openmediavault-zfs" package without the error about the missing "dpkg-architecture".

      The installation is performed with the zfs 0.7.9 packages just released from the debian backports repositories.

      The OMV-Extras.org Testing repository is disabled.
      The only additional repository I have activated is the debian contrib which does not affect the zfs installation anyway.

      Subsequent compilation errors (missing kmod spl devel) remain, but the process still finishes with a working zfs.
      This is beating a dead horse. The "error" never caused any problem. So, I really wasn't concerned. You aren't seeing it now because you are using the 0.7.9 packages from backports instead of the packages I built. So, the issue was probably fixed in those packages.

      Charlo wrote:

      I'm wondering when it will be possible to release version 4.0.3 of openmediavault-zfs which is now in testing.
      Why not install it from the testing repo? If I pushed 4.0.3 to the regular repo, it would be the exact same package. That said, I will probably push 4.0.4 today (minor dependency changes to accommodate the proxmox kernel) to the regular repo.
      omv 4.1.8.2 arrakis | 64 bit | 4.15 proxmox kernel | omvextrasorg 4.1.9
      omv-extras.org plugins source code and issue tracker - github.com/OpenMediaVault-Plugin-Developers

      Please read this before posting a question.
      Please don't PM for support... Too many PMs!
    • ryecoaaron wrote:

      Why not install it from the testing repo? If I pushed 4.0.3 to the regular repo, it would be the exact same package. That said, I will probably push 4.0.4 today (minor dependency changes to accommodate the proxmox kernel) to the regular repo.
      You do a fantastic job maintaining the testing repository before releasing the final versions, but for the zfs packages I prefer to use the ones from the debian backports, which give me more peace of mind after passing through the "unstable" and "testing" phases.

      Obviously I know this requires holding the kernel and headers at the current version until new zfs versions support the next one.
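
      For example, the hold can be set like this (the package names are only illustrative for the 4.16 backports kernel; check the exact names first):

      Shell-Script

      # Illustrative only: pin the running backports kernel and headers until the
      # zfs modules from backports support a newer kernel.
      dpkg -l 'linux-image-*' 'linux-headers-*' | grep '^ii'   # find the exact package names
      apt-mark hold linux-image-4.16.0-0.bpo.1-amd64 linux-headers-4.16.0-0.bpo.1-amd64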

      --

      Regarding the new omv-extras features, in particular the one to activate and deactivate the backports, I wanted to point out that there is probably a bug in the activation function.

      I think the variable OMV_APT_USE_KERNEL_BACKPORTS="YES" that you insert in the file "/etc/default/openmediavault" is misinterpreted by the script "/usr/share/openmediavault/mkconf/apt.d/15omvextras", probably in the way the "omv_checkyesno" function is used.

      With the "YES" value, the script deletes your backports configuration from apt (I think only the "/etc/apt/preferences.d/omv-extras-org" file) rather than creating it.
      The "NO" value instead re-creates it.
      So I think the behavior is reversed.

      If I delete the variable and manually re-run "omv-mkconf apt", the configurations are recreated.
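
      To illustrate what I mean, the reversed logic would look roughly like this (only a guess at the relevant part of "15omvextras"; the helper path is the standard OMV one and the pin contents are placeholders):

      Shell-Script

      #!/bin/sh
      # Hypothetical reconstruction of the suspected bug, not the real script.
      . /usr/share/openmediavault/scripts/helper-functions

      OMV_APT_USE_KERNEL_BACKPORTS=${OMV_APT_USE_KERNEL_BACKPORTS:-"NO"}
      PREFS="/etc/apt/preferences.d/omv-extras-org"

      if omv_checkyesno "${OMV_APT_USE_KERNEL_BACKPORTS}"; then
          # Observed: the YES branch deletes the backports configuration...
          rm -f "${PREFS}"
      else
          # ...while the NO branch (re)creates it, which seems reversed.
          printf 'Package: *\nPin: release a=stretch-backports\nPin-Priority: 500\n' > "${PREFS}"
      fi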

      I do not want to go further because I think I'm off-topic in this thread, even if we could open another one.

      Sorry for my bad English.

      Thanks.
      Carlo


    • Hi guys,

      as already described, I am replacing the 8x 4TB WD Red disks in my pool with 8x 10TB WD Red disks, one after the other. Now I see some "issues":

      1. The "Storage - ZFS" section in the omv webui stops working after some hours of resilvering.

      This has nothing to do with zfs configuration changes during resilvering, as I suggested before. It is general behavior and happens every time I replace one of my disks. Not a big issue, because everything works as expected, except that section of the omv webui.

      hoppel118 wrote:

      But today I have a problem with the zfs plugin. If I go to "Storage - ZFS" I only see "loading", which ends with "communication failure". Have a look at the following screenshots:

      [screenshot: "Storage - ZFS" stuck on "loading"]

      Round about a minute later I get the following message:

      [screenshot: "communication failure" error dialog]

      If I click "ok", I see nothing:

      [screenshot: empty "Storage - ZFS" page]

      But my pool is still reachable via smb and I can see all my zfs file systems under "Storage - File Systems" in the omv webui.

      2. ZFS ZED is not activated after the update to the latest zfs plugin and zfs debian packages.

      This happened a few times in the past, but I ignored it and corrected it manually. Maybe it's possible to preserve the zed notification setting? I don't like having to check whether it is still enabled after every update of the plugin or the debian packages. I trust my config and might forget to re-enable zed notification after an update. In that case I won't get informed by email if any error happens to my pool, and maybe the worst-case scenario happens...

      [screenshot]
      3. ZED doesn't inform me that my pool is in the "DEGRADED" state.

      I replaced 4 of my 8 4TB disks with 10TB disks, and my omv machine always informs me by email about the finished resilver procedure. But it never informed me about the "DEGRADED" state of my pool, which is much more relevant to me in daily use. I have to replace the disks one by one; there are no free sata/sas ports.
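
      For #2 and #3, this is roughly what I check and re-enable by hand after an update (paths and variable names as shipped with zfs 0.7.x and the debian zfs-zed package; the values are just my example, not the plugin defaults):

      Shell-Script

      # Is the ZED daemon running at all, and which zedlets are enabled?
      # statechange-notify.sh is the one that should report DEGRADED/FAULTED vdevs.
      systemctl status zfs-zed
      ls -l /etc/zfs/zed.d/

      # Excerpt from /etc/zfs/zed.d/zed.rc (example values only)
      ZED_EMAIL_ADDR="root"            # OMV forwards root mail to the configured notification address
      ZED_EMAIL_PROG="mail"
      ZED_NOTIFY_INTERVAL_SECS=3600    # rate limit for repeated events
      ZED_NOTIFY_VERBOSE=1             # also mail on non-error events such as a finished resilver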

      @subzero79 and @ryecoaaron What do you think about this?

      Thanks and regards Hoppel
      ---------------------------------------------------------------------------------------------------------------
      frontend software - android tv | libreelec | win10 | kodi krypton
      frontend hardware - nvidia shield tv | odroid c2 | yamaha rx-a1020 | quadral chromium style 5.1 | samsung le40-a789r2 | harmony smart control
      -------------------------------------------
      backend software - debian | kernel 4.4 lts | proxmox | openmediavault | zfs raid-z2 | docker | emby | vdr | vnsi | fhem
      backend hardware - supermicro x11ssh-ctf | xeon E3-1240L-v5 | 64gb ecc | 8x4tb wd red | digital devices max s8
      ---------------------------------------------------------------------------------------------------------------------------------------
    • Hi guys,

      today I am resilvering my sixth disk. I checked the issue that the "Storage - ZFS" section in the omv webui stops working after some hours of resilvering: it's not hours, after 7 minutes of resilvering it is already unreachable.

      I use the latest openmediavault 4.1.7, backports kernel 4.16, openmediavault-zfs 4.0.3 and zfs 0.7.9-3.
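
      While the webui section is unavailable, the resilver can still be followed from the shell, for example ("tank" is a placeholder for the pool name):

      Shell-Script

      # Watch the resilver progress once a minute from the command line.
      watch -n 60 'zpool status -v tank'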


      Maybe someone can verify whether this is general behavior or whether it's related to my setup?

      Thanks and regards Hoppel
      ---------------------------------------------------------------------------------------------------------------
      frontend software - android tv | libreelec | win10 | kodi krypton
      frontend hardware - nvidia shield tv | odroid c2 | yamaha rx-a1020 | quadral chromium style 5.1 | samsung le40-a789r2 | harmony smart control
      -------------------------------------------
      backend software - debian | kernel 4.4 lts | proxmox | openmediavault | zfs raid-z2 | docker | emby | vdr | vnsi | fhem
      backend hardware - supermicro x11ssh-ctf | xeon E3-1240L-v5 | 64gb ecc | 8x4tb wd red | digital devices max s8
      ---------------------------------------------------------------------------------------------------------------------------------------
      Regarding #3: It's not really a fix for your problem, but I run this script regularly in addition to ZED (mostly because I found the script before I learned about ZED, but since ZFS is all about safekeeping, why not have redundancy in the error checkers? :) ). The script is from: calomel.org/zfs_health_check_script.html

      I have modified it to suit my needs and also to make it work properly (email), so maybe you would like to have a different version. You should change the <OMV_NAME> in the emails to your server's name. (Note: I may have forgotten to mark all my changes with the double #.)

      Shell-Script

      #!/bin/sh
      #
      # Calomel.org
      # https://calomel.org/zfs_health_check_script.html
      # FreeBSD ZFS Health Check script
      # zfs_health.sh @ Version 0.17
      # Updates by Arve: ##
      ## Removed `hostname` from start of emailSubject as OMV does this automatically
      ## Changed email from <root@<OMV_NAME>.localdomain> to <root@<OMV_NAME>.LAN> to fit OMV3 (a bug maybe? missing environmental vars? also working: root, name.nameson@domain.end)

      # Check health of ZFS volumes and drives. On any faults send email.

      # 99 problems but ZFS ain't one
      problems=0

      # Health - Check if all zfs volumes are in good condition. We are looking for
      # any keyword signifying a degraded or broken array.
      condition=$(/sbin/zpool status | egrep -i '(DEGRADED|FAULTED|OFFLINE|UNAVAIL|REMOVED|FAIL|DESTROYED|corrupt|cannot|unrecover)')
      if [ "${condition}" ]; then
          emailSubject="ZFS pool - HEALTH fault"
          problems=1
      fi

      ## Moved capacity check to bottom (don't need capacity warnings if pool errors exist...)

      # Errors - Check the columns for READ, WRITE and CKSUM (checksum) drive errors
      # on all volumes and all drives using "zpool status". If any non-zero errors
      # are reported an email will be sent out. You should then look to replace the
      # faulty drive and run "zpool scrub" on the affected volume after resilvering.
      if [ ${problems} -eq 0 ]; then
          errors=$(/sbin/zpool status | grep ONLINE | grep -v state | awk '{print $3 $4 $5}' | grep -v 000)
          if [ "${errors}" ]; then
              emailSubject="ZFS pool - Drive Errors"
              problems=1
          fi
      fi

      # Scrub Expired - Check if all volumes have been scrubbed in at least the last
      # 8 days. The general guide is to scrub volumes on desktop quality drives once
      # a week and volumes on enterprise class drives once a month. You can always
      # use cron to schedule "zpool scrub" in off hours. We scrub our volumes every
      # Sunday morning for example.
      #
      # Scrubbing traverses all the data in the pool once and verifies all blocks can
      # be read. Scrubbing proceeds as fast as the devices allow, though the
      # priority of any I/O remains below that of normal calls. This operation might
      # negatively impact performance, but the file system will remain usable and
      # responsive while scrubbing occurs. To initiate an explicit scrub, use the
      # "zpool scrub" command.
      #
      # The scrubExpire variable is in seconds. So for 8 days we calculate 8 days
      # times 24 hours times 3600 seconds to equal 691200 seconds.
      ##scrubExpire=691200
      ##
      ##if [ ${problems} -eq 0 ]; then
      ##    currentDate=$(date +%s)
      ##    zfsVolumes=$(/sbin/zpool list -H -o name)
      ##
      ##    for volume in ${zfsVolumes}
      ##    do
      ##        if [ $(/sbin/zpool status $volume | egrep -c "none requested") -ge 1 ]; then
      ##            printf "ERROR: You need to run \"zpool scrub $volume\" before this script can monitor the scrub expiration time."
      ##            break
      ##        fi
      ##        if [ $(/sbin/zpool status $volume | egrep -c "scrub in progress|resilver") -ge 1 ]; then
      ##            break
      ##        fi
      ##
      ##        ### Ubuntu with GNU supported date format
      ##        #scrubRawDate=$(/sbin/zpool status $volume | grep scrub | awk '{print $11" "$12" " $13" " $14" "$15}')
      ##        #scrubDate=$(date -d "$scrubRawDate" +%s)
      ##
      ##        ### FreeBSD with *nix supported date format
      ##        scrubRawDate=$(/sbin/zpool status $volume | grep scrub | awk '{print $15 $12 $13}')
      ##        scrubDate=$(date -j -f '%Y%b%e-%H%M%S' $scrubRawDate'-000000' +%s)
      ##
      ##        if [ $(($currentDate - $scrubDate)) -ge $scrubExpire ]; then
      ##            emailSubject="ZFS pool - Scrub Time Expired. Scrub Needed on Volume(s)"
      ##            problems=1
      ##        fi
      ##    done
      ##fi

      # Capacity - Make sure the pool capacity is below 80% for best performance. The
      # percentage really depends on how large your volume is. If you have a 128GB
      # SSD then 80% is reasonable. If you have a 60TB raid-z2 array then you can
      # probably set the warning closer to 95%.
      #
      # ZFS uses a copy-on-write scheme. The file system writes new data to
      # sequential free blocks first and when the uberblock has been updated the new
      # inode pointers become valid. This method is true only when the pool has
      # enough free sequential blocks. If the pool is at capacity and space limited,
      # ZFS will have to randomly write blocks. This means ZFS cannot create an
      # optimal set of sequential writes and write performance is severely impacted.
      maxCapacity=85

      if [ ${problems} -eq 0 ]; then
          capacity=$(/sbin/zpool list -H -o capacity | cut -d'%' -f1)
          for line in ${capacity}
          do
              if [ $line -ge $maxCapacity ]; then
                  emailSubject="ZFS pool - Capacity Exceeded"
                  zpool list | mail -s "$emailSubject" root@<OMV_NAME>.LAN  ## Added email here for capacity issues (use "zpool list" in this email)
              fi
          done
      fi

      # Email - On any problems send email with drive status information and
      # capacities including a helpful subject line. Also use logger to write the
      # email subject to the local logs. This is also the place you may want to put
      # any other notifications like playing a sound file, beeping the internal
      # speaker, paging someone or updating Nagios or even BigBrother.
      if [ "$problems" -ne 0 ]; then
          ## OMV mail
          zpool status -v | mail -s "$emailSubject" root@<OMV_NAME>.LAN
          ##printf '%s\n' "$emailSubject" "" "`/sbin/zpool list`" "" "`/sbin/zpool status`" | /usr/bin/mail -s "$emailSubject" root@localhost
          ##logger $emailSubject
      fi

      ## log
      logger "ZFS Health check completed"

      ### EOF ###
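
      I run it from cron; an example entry (the script path is just where I happened to put it, adjust as needed):

      Shell-Script

      # /etc/cron.d/zfs-health: run the health check once an hour
      0 * * * * root /usr/local/sbin/zfs_health.sh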
    • Hi guys,

      all 8 disks got resilvered over the weekend. No problems at all, ZoL worked as expected. The resilvering of the first replacement, from a 4TB WD Red to a 10TB WD Red, took about 24 hours. It sped up with every disk replacement; the last disk's resilver needed 15 hours. So the complete replacement of all 8 disks took a little more than a week.

      The pool expanded automatically when the last disk finished resilvering (autoexpand=on).
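
      For anyone following along, one replacement cycle looked roughly like this (pool and disk IDs are placeholders, not my real ones):

      Shell-Script

      # Rough sketch of a single disk replacement; repeat per disk.
      zpool set autoexpand=on tank                   # once, so the pool grows after the last disk
      zpool offline tank ata-WDC_WD40EFRX_OLDDISK    # take the old 4TB disk out of the vdev
      # ...physically swap in the 10TB disk, then:
      zpool replace tank ata-WDC_WD40EFRX_OLDDISK ata-WDC_WD100EFAX_NEWDISK
      zpool status tank                              # wait for the resilver to finish before the next disk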

      So, everything is nice and performs as expected, just a little bit faster than before. :D

      Maybe the three points I described above can be solved in future versions of the plugin, but there's no reason to hurry. ;)

      ryecoaaron wrote:

      Why not install it from the testing repo? If I pushed 4.0.3 to the regular repo, it would be the exact same package. That said, I will probably push 4.0.4 today (minor dependency changes to accommodate the proxmox kernel) to the regular repo.

      Did you push your proxmox changes already? I don't see a button to install the proxmox kernel. I am at latest versions:

      • openmediavault 4.1.8-1
      • openmediavault-omvextrasorg 4.1.8
      • openmediavault-zfs 4.0.4
      What shall we do to install the proxmox kernel correctly?


      Flaschie wrote:

      Regarding #3: It's not really a fix for your problem, but I run this script regularly in addition to ZED (mostly because I found the script before I learned about ZED, but since ZFS is all about safekeeping, why not have redundancy in the error checkers? :) ). The script is from: calomel.org/zfs_health_check_script.html

      I have modified it to suit my needs and also to make it work properly (email), so maybe you would like to have a different version. You should change the <OMV_NAME> in the emails to your server's name. (Note: I may have forgotten to mark all my changes with the double #.)


      Thanks for the script. I will have a look at it if I find the time. ;)


      Regards Hoppel
      ---------------------------------------------------------------------------------------------------------------
      frontend software - android tv | libreelec | win10 | kodi krypton
      frontend hardware - nvidia shield tv | odroid c2 | yamaha rx-a1020 | quadral chromium style 5.1 | samsung le40-a789r2 | harmony smart control
      -------------------------------------------
      backend software - debian | kernel 4.4 lts | proxmox | openmediavault | zfs raid-z2 | docker | emby | vdr | vnsi | fhem
      backend hardware - supermicro x11ssh-ctf | xeon E3-1240L-v5 | 64gb ecc | 8x4tb wd red | digital devices max s8
      ---------------------------------------------------------------------------------------------------------------------------------------
    • hoppel118 wrote:

      Did you push your proxmox changes already? I don't see a button to install the proxmox kernel. I am at latest versions:
      What shall we do to install the proxmox kernel correctly?
      Yes, I did. You probably need to clear your browser cache. After that, the instructions on the kernel tab should be self-explanatory.
      omv 4.1.8.2 arrakis | 64 bit | 4.15 proxmox kernel | omvextrasorg 4.1.9
      omv-extras.org plugins source code and issue tracker - github.com/OpenMediaVault-Plugin-Developers

      Please read this before posting a question.
      Please don't PM for support... Too many PMs!
      Yes, after clearing the browser cache, I can see the "proxmox kernel" section. Well done, thanks!
      ---------------------------------------------------------------------------------------------------------------
      frontend software - android tv | libreelec | win10 | kodi krypton
      frontend hardware - nvidia shield tv | odroid c2 | yamaha rx-a1020 | quadral chromium style 5.1 | samsung le40-a789r2 | harmony smart control
      -------------------------------------------
      backend software - debian | kernel 4.4 lts | proxmox | openmediavault | zfs raid-z2 | docker | emby | vdr | vnsi | fhem
      backend hardware - supermicro x11ssh-ctf | xeon E3-1240L-v5 | 64gb ecc | 8x4tb wd red | digital devices max s8
      ---------------------------------------------------------------------------------------------------------------------------------------