ZFS Pool Size


    • ZFS Pool Size

      I'm using eight 8TB HDDs for my ZFS pool. OMV picks up these drives as 7.28TB each in capacity. If I use the RAID-Z2 configuration, shouldn't that yield me 7.28*6 = 43.68TB? OMV is only showing 39.9TB, which is a loss of over 3TB.
      Case: U-NAS NSC-810
      Motherboard: ASRock - C236 WSI Mini ITX
      CPU: Core i7-6700
      Memory: 32GB Crucial DDR4-2133
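
      Part of the gap is just units: drive vendors quote decimal terabytes (10^12 bytes), while OMV and ZFS report binary tebibytes (2^40 bytes). A quick Python sketch of the arithmetic from the post above (illustrative only; the 39.9 figure is what OMV reported):

```python
def tb_to_tib(tb: float) -> float:
    """Convert vendor decimal terabytes to binary tebibytes."""
    return tb * 10**12 / 2**40

per_drive = tb_to_tib(8)          # an "8TB" drive is ~7.28 TiB
data_drives = 8 - 2               # RAID-Z2 keeps two drives' worth of parity
naive_usable = per_drive * data_drives

print(f"{per_drive:.2f} TiB per drive")          # ~7.28
print(f"{naive_usable:.2f} TiB naive usable")    # ~43.66
print(f"{naive_usable - 39.9:.2f} TiB still unaccounted for")
```

      So the 7.28 figure is fully explained by the unit conversion; the remaining ~3.8 TiB gap is what the rest of the thread tries to account for.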
    • I hear you; I have a ZFS mirror of 2x4TB drives, which OMV picks up as 3.64TB each.

      ZFS shows my pool capacity as 3.51TB, so we're talking about roughly a 3 to 4% loss of capacity to (I'm guessing) the file system's metadata overhead. (Since I'm using a mirror, the metadata is probably equivalent to that of a single unstriped drive.)

      File checksum data and other ZFS attributes have to be stored somewhere, and one could imagine that this requirement grows in proportion to the size of the media. So again, for single drives (or mirrors), I'm assuming the 3 to 4% is being used for metadata.

      Assuming a loss of 4% to file system metadata, from unformatted capacity:
      A single drive at 7.28TB would format to ZFS at roughly 6.99TB (a 4% loss). Across 6 data drives, that accounts for about 1.75TB of the difference. With that noted, this rough figure (4%) is for a mirror or a single drive. The additional loss that you're experiencing might be explained by the extra metadata required to keep track of striping and, potentially, volume-management functions.

      Admittedly, this is speculation but it makes sense.
      Good backup takes the "drama" out of computing
      ____________________________________
      OMV 3.0.99, ThinkServer TS140, 12GB ECC, 32GB USB boot, 4TB+4TB zmirror, 3TB client backup.
      Backup Server:
      OMV 4.1.8.2-1, Acer RC-111, 4GB, 32GB USB boot, 3TB+3TB+4TB Rsync'ed disks+SNAPRAID
      2nd Data Backup
      R-PI 2B, 16GB boot, 4TB WD MyPassport
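
      For what it's worth, the overhead estimate above can be back-computed from the mirror figures in the post (3.64 TiB raw per side, 3.51 TiB usable). A hypothetical sketch, using only numbers from the post, nothing measured here:

```python
# Back out the mirror's overhead fraction, then apply it to a 7.28 TiB drive.
raw, usable = 3.64, 3.51
overhead = 1 - usable / raw               # ~3.6%
print(f"overhead ~{overhead:.1%}")

per_drive = 7.28 * (1 - overhead)         # ~7.02 TiB after the same overhead
lost_over_six = (7.28 - per_drive) * 6    # ~1.6 TiB across 6 data drives
print(f"{per_drive:.2f} TiB per drive, {lost_over_six:.2f} TiB lost over 6 data drives")
```

      On these figures, metadata overhead alone explains only about 1.6 TiB of the roughly 3.8 TiB gap, which supports the guess that striping adds further cost.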
    • flmaxey wrote:

      I hear you; I have a ZFS mirror of 2x4TB drives, which OMV picks up as 3.64TB each. [...]
      Admittedly, this is speculation but it makes sense.
      Thanks for the reply.

      And ZFS wants us to maintain 20% free space? That's gonna be tough :D
    • elastic wrote:

      And ZFS wants us to maintain 20% free space? That's gonna be tough :D
      Yeah, and from the looks of that slick-looking case you have, I'll take it you're not inclined to add a SAS card and use tin snips to run the cables out the side to an expansion chassis, right? :)

      I looked up your case. That's a nice box for a NAS.
      ______________________________________________________

      BTW, after posting, I went looking for file system overhead figures. I didn't find anything for ZFS which seemed odd, but I did find this. It's out of date but the figures are interesting.
    • flmaxey wrote:

      [...] I'll take it you're not inclined to add a SAS card and use tin snips to run the cables out the side to an expansion chassis, right? :)

      BTW, after posting, I went looking for file system overhead figures. I didn't find anything for ZFS, which seemed odd, but I did find this. It's out of date but the figures are interesting.
      Interesting find. And nah, no SAS cards :P Besides, I need to stop buying so many HDDs; my wife is going to kill me if I keep it up :P
    • Hi !

      I have 7x2TB HDDs (1.82TB real) in a ZFS RAIDZ2 pool. This gives me about 8.2TB of free space.

      This doc should help you a lot. You can see the parity cost depending on the number of disks vs. the RAIDZ level you choose, and you can then set the best block size.

      docs.google.com/spreadsheets/d…edit?pli=1#gid=2126998674

      For me, the best is a block size of 15 sectors (40% lost), but I chose 128 sectors per block (41% lost) as it suits me better.

      I tried 4096 bytes per sector, and each file cost me 2x its real size! So it is important to choose the right block size.
      Anyway, every change applies only to newly created files, so if you change the block/sector size, do it at the beginning, or you'll have to move all your files off the pool and back again.

      Just my 2 cents from my recent runs with ZFS...
      Lian Li PC-V354 with Be Quiet fans | Gigabyte GA-G33M-DS2R | Intel E8400@3.6GHz | 6GB DDR2 RAM
      1x500GB SSD for System/Backup | 7x2TB HDD with ZFS RAIDZ2 for Data/Snapshots
      Powered by OMV v4.1.7 / Kernel 4.16.x / ZFS 0.7.9

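      The spreadsheet linked above tabulates exactly this kind of parity-plus-padding cost. Here is a simplified model of RAID-Z allocation as I understand it (a sketch, not the spreadsheet's exact formula): for each block, ZFS adds parity sectors per stripe of data sectors, then rounds the allocation up to a multiple of nparity+1 so freed gaps stay usable.

```python
import math

def raidz_sectors(data_sectors: int, ndisks: int, nparity: int) -> int:
    """Sectors actually allocated for one block on a RAID-Z vdev
    (simplified model: parity per stripe plus padding)."""
    stripe_data = ndisks - nparity                          # data sectors per full stripe
    parity = math.ceil(data_sectors / stripe_data) * nparity
    total = data_sectors + parity
    pad = -total % (nparity + 1)                            # round up to multiple of nparity+1
    return total + pad

# 7-disk RAID-Z2, 128-sector block (e.g. a 512 KiB record on 4 KiB sectors):
alloc = raidz_sectors(128, ndisks=7, nparity=2)
print(f"{alloc} sectors allocated, efficiency {128 / alloc:.1%}")

# A tiny 1-sector block on the same vdev still costs 3 sectors,
# which is why very small block sizes multiply the on-disk cost.
print(raidz_sectors(1, ndisks=7, nparity=2))
```

      The exact percentages in the spreadsheet won't match this simple model everywhere (it accounts for more details), but it shows why small blocks are so expensive relative to large ones.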

    • Relatively speaking, it seems 3 to 4%, unstriped, is a "slender" amount of overhead. (Of course, in a mirror, I have a rather stiff 50% loss right at the start, so from raw capacity the total loss is more like 52%.)

      In any case, I'm using ZFS for bitrot protection, not for aggregating drives.