[HOWTO] Install ZFS-Plugin & use ZFS on OMV

  • znapzend

I'm doing some searching on this... Have you done a write-up at all?


I see earlier you installed directly on OMV from the CLI (RE: [HOWTO] Install ZFS-Plugin & use ZFS on OMV)


    Is that still your preferred setup or are you running from docker?

    https://github.com/oetiker/zna…E.md#running-in-container


    Thanks!

    Dell 1950 III, E5420 2.5ghz LV processors, 64gb ram

    2x M1015 / LSi 9220-8i > 2x IBM 46M0997

    2x Supermicro X5DPE-G2 3U 16 BAY servers - gutted and only used for backplane/caddies

    SuperMicro CSE-PTJBOD-CB2
    2x Supermicro PDB-PT825-8824 power distribution
    4x Supermicro PWS-920P-SQ switching power supply
    18x Seagate Barracuda ES 750GB (9BL148) 6x Samsung HD103SJ 1TB

  • I am still on this quite old OMV 3 version. znapzend is running flawlessly.

    Is that still your preferred setup or are you running from docker?

I have no experience with znapzend in a Docker installation. In the post you have linked there is another link where a precompiled znapzend package from Gregy can be downloaded. OMV 5 is based on Debian 10, and a znapzend package is also available for that.

    OMV 3.0.99 (Gray style)
    ASRock Rack C2550D4I C0-stepping - 16GB ECC - 6x WD RED 3TB (ZFS 2x3 Striped RaidZ1)- Fractal Design Node 304

  • Hi everyone,

tonight I received this notification:

What does it mean? Do I have to replace sde? How can I know which file is affected by this error?

    Intel G4400 - Asrock H170M Pro4S - 8GB ram - Be Quiet Pure Power 11 400 CM - Nanoxia Deep Silence 4 - 6TB Seagate Ironwolf - RAIDZ1 3x10TB WD - OMV 5 - Proxmox Kernel

Review the SMART info for sde and do a zpool clear to "clear" the warning. If you still want to, run a zpool scrub; but if your disk is in a prefail state this can destroy your data, so first be sure that all SMART parameters are OK.
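
In commands, those steps might look like this (a sketch only; the pool name "tank" is an assumption — substitute the name shown by zpool list — and /dev/sde is the disk from the notification):

```shell
# Assumed names: pool "tank", suspect disk /dev/sde -- adjust for your system.
smartctl -a /dev/sde     # check Reallocated/Pending/Uncorrectable sector counts first
zpool status -v tank     # -v lists any files affected by permanent errors
zpool clear tank         # reset the read/write/cksum error counters
zpool scrub tank         # only once SMART looks clean; re-reads and verifies all data
```

The scrub re-reads every block in the pool, so it is exactly the operation that will push a failing disk over the edge — hence checking SMART before, not after.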

SMART seems fine. I ran a long test this morning to be sure.

    Also I'll run a backup tomorrow just to be sure.


  • I've had scrubs where 256K was repaired, but with no errors, corrected or uncorrected, so far.

At the end of your log report, it states "no known data errors". If SMART looks OK, I wouldn't worry, but I would keep an eye on it.


I followed the link provided in your log output: http://zfsonlinux.org/msg/ZFS-8000-9P

    The following, from the link above, seems applicable:
    __________________________________________________________________________________________________

    If these errors persist over a period of time, ZFS may determine the device is faulty and mark it as such. However, these error counts may or may not indicate that the device is unusable. It depends on how the errors were caused, which the administrator can determine in advance of any ZFS diagnosis. For example, the following cases will all produce errors that do not indicate potential device failure:

    • A network attached device lost connectivity but has now recovered
    • A device suffered from a bit flip, an expected event over long periods of time
    • An administrator accidentally wrote over a portion of the disk using another program

    __________________________________________________________________________________________________

Any of the above could have happened during the scrub. The last one, an admin accidentally "writing over" part of the disk, might have been the result of something automated.

  • I am still on this quite old OMV 3 version. znapzend is running flawlessly.

I have no experience with znapzend in a Docker installation. In the post you have linked there is another link where a precompiled znapzend package from Gregy can be downloaded. OMV 5 is based on Debian 10, and a znapzend package is also available for that.

    Thanks!


I had a drive drop out (listed as UNAVAIL). I remember seeing you or others post about using serial-number IDs, and I was reading more about setting up /etc/zfs/vdev_id.conf and was wondering if anyone with larger arrays has tried the by-channel/slot method?


I would like to see the disks in the vdev as something like Ua0, Ub0 (upper JBOD, channels a and b, disk 0) and Lc3, Ld3 (lower JBOD, channels c and d, disk 3).


I've been playing with it, but I'm at a bit of a loss. I think the slot mapping is required because, when I don't specify it and use the defaults, I'm getting duplicate IDs/phys...


(note: all outputs shortened to avoid the 10k-character message length restriction)

    Code
    # ls /dev/disk/by-id/
    ata-SAMSUNG_HD103SJ_S246J9GB102368 ata-ST3750640NS_5QD4EAYM
    ata-SAMSUNG_HD103SJ_S246J9GB102368-part1 ata-ST3750640NS_5QD4EAYM-part1
    ata-SAMSUNG_HD103SJ_S246J9GB102368-part2 ata-ST3750640NS_5QD4EAYM-part2
    ...





My topology: 2x M1015 / LSI 9220-8i; both ports on each M1015 are connected to an IBM 46M0997 expander. Each expander connects a 16-slot backplane.

So, by PCI slot:


    Code
    0a:00.0 Serial Attached SCSI controller: LSI Logic / Symbios Logic SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon] (rev 03)
    0c:00.0 Serial Attached SCSI controller: LSI Logic / Symbios Logic SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon] (rev 02)




    and end devices


    Code
    # ls -l /sys/class/sas_end_device
    total 0
    lrwxrwxrwx 1 root root 0 Jun 11 15:38 end_device-2:0:0 -> ../../devices/pci0000:00/0000:00:04.0/0000:0a:00.0/host2/port-2:0/expander-2:0/port-2:0:0/end_device-2:0:0/sas_end_device/end_device-2:0:0
    lrwxrwxrwx 1 root root 0 Jun 11 15:38 end_device-2:0:1 -> ../../devices/pci0000:00/0000:00:04.0/0000:0a:00.0/host2/port-2:0/expander-2:0/port-2:0:1/end_device-2:0:1/sas_end_device/end_device-2:0:1
    lrwxrwxrwx 1 root root 0 Jun 11 15:38 end_device-3:0:0 -> ../../devices/pci0000:00/0000:00:06.0/0000:0c:00.0/host3/port-3:0/expander-3:0/port-3:0:0/end_device-3:0:0/sas_end_device/end_device-3:0:0
    lrwxrwxrwx 1 root root 0 Jun 11 15:38 end_device-3:0:1 -> ../../devices/pci0000:00/0000:00:06.0/0000:0c:00.0/host3/port-3:0/expander-3:0/port-3:0:1/end_device-3:0:1/sas_end_device/end_device-3:0:1
    ..
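
For the channel/slot naming above, an /etc/zfs/vdev_id.conf might look something like this (a sketch, not a tested config: the PCI addresses come from the lspci output above, but the sas_direct topology, phys_per_port value, and alias letters are assumptions to adjust for the actual cabling):

```shell
# /etc/zfs/vdev_id.conf -- sketch only
multipath      no
topology       sas_direct
phys_per_port  4

#        PCI_SLOT   PORT  CHANNEL-NAME
channel  0a:00.0    0     Ua
channel  0a:00.0    1     Ub
channel  0c:00.0    0     La
channel  0c:00.0    1     Lb

# Optional: remap Linux slot numbers to physical bay numbers
#        LINUX-SLOT PHYSICAL-SLOT
#slot    1          0
```

After editing, `udevadm trigger` should populate /dev/disk/by-vdev/ with names like Ua0, Ua1, ... (channel name plus slot number), which can then be used for zpool create/replace.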



    Thanks!


Have you already created a pool? What is the output of zpool status?


    I have created my pool with the /dev/disk/by-id/ method, so that the drives can be identified by their serial numbers.
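
An existing pool can also be switched to by-id names without recreating it, by re-importing it with a search path (a sketch; "tank" is an assumed pool name):

```shell
# Re-import the pool so its vdevs are recorded under their by-id names.
zpool export tank
zpool import -d /dev/disk/by-id tank
zpool status tank   # vdevs should now show as ata-<MODEL>_<SERIAL>
```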


Yes, the pool is created.


    With 32 slots I'd rather use slot targets...


  • Hi everyone,

because of some blackouts I had in the last few days, I now have some checksum errors on my ZFS raidz1.


    This is the message that I got tonight.

    What should I do now? Do I need to backup everything and recreate the pool?


  • Hi everyone,

because of some blackouts I had in the last few days, I now have some checksum errors on my ZFS raidz1.


    This is the message that I got tonight.

    What should I do now? Do I need to backup everything and recreate the pool?

Yes, probably.


First try to unplug and replug the disk and re-run a scrub; perhaps some data can be saved, but the message is clear.



Try zpool status -v to see more info: https://docs.oracle.com/cd/E19…820-2314/gbcve/index.html



More interesting links:


    https://docs.oracle.com/cd/E19…819-5461/gbctx/index.html


    https://docs.oracle.com/cd/E19…819-5461/gbcuz/index.html


    https://docs.oracle.com/cd/E19…819-5461/gbbzv/index.html

I cleared the errors and launched a new scrub. I don't get why the file is indicated as corrupted; I opened it and it worked.



  • ZFS ... took 3-4 GB of RAM always and 5-6 during some operations,

Yes, this is true. By default, ZFS uses half of the RAM for file caching. But this is no drawback in your case: since this RAM is physically available in your system, it is used by ZFS. Why shouldn't it be used if it is there?
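
If that half-of-RAM default is ever too much, the ARC can be capped with a module option (a sketch; the 2 GiB value is an arbitrary example):

```shell
# /etc/modprobe.d/zfs.conf -- cap the ZFS ARC at 2 GiB (value in bytes)
options zfs zfs_arc_max=2147483648
```

On Debian-based systems, run `update-initramfs -u` and reboot for this to take effect; the limit can also be changed at runtime by writing the value to /sys/module/zfs/parameters/zfs_arc_max.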

    with EXT4 and XFS I have always used 1-2GB of RAM and much less electricity consumption.

It is new to me that there is a direct correlation between the file system used and the power consumption. The only reason I can imagine is the regular ZFS scrub with heavy disk activity, which is performed every two weeks automatically. Depending on the amount of data, it takes several hours to complete.

This is the price for data consistency you may be willing to pay.
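
For reference, such a scheduled scrub is just a cron entry along these lines (a sketch; the pool name "tank" and the fortnightly timing are assumptions — Debian's zfsutils package ships its own schedule under /etc/cron.d/):

```shell
# /etc/cron.d/zfs-scrub -- hypothetical fortnightly scrub at 02:00
# m  h  dom   mon dow  user  command
0    2  1,15  *   *    root  /sbin/zpool scrub tank
```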


If RAM is available and unused, ZFS uses it. That's not to say ZFS "demands" RAM from the system: if it is needed for other purposes, ZFS will give RAM back. It's better to utilize unused RAM for file caching, which is exactly what ZFS does, rather than letting it sit idle.

I have an Intel Atom box with 4GB RAM that runs a 4TB ZFS mirror just fine.

Guys, from your responses I can see that ZFS is very raw, because on my system it was as I mentioned before, and in Glances I could also see many errors regarding CPU and memory which I don't see now that I have XFS and EXT4 drives.


It's good that you can use it without any issues, but in my case EXT4 is better; moreover, I noticed that with EXT4 my drive responds faster than on ZFS.

    NAS: OMV 5➕kernel Linux 5.4.44-2-pve
    (Intel i5 4570❄Gigabyte GA-H97N-WIFI❄8GB DDR3❄SSD EVO850➕WD Red 3TB➕WD Red 6TB➕WD Gold 8TB)
    Gigabit Internet➕Mikrotik hAP ac²
