ZFS not writing to disk?

    • So I'm starting to move data to my OMV machine.

      I've moved about 16GB of data initially. Looking at memory usage on the box, it shows about 52% used, and that number is not going down. It doesn't appear to be writing anything to disk...

      Ideas? Or what am I missing here?
      Motherboard: SuperMicro X10SLL-F | i3-4170 @ 3.7GHz | 32GB ECC RAM
      PSU: EVGA 500W
      Case: Fractal Design Define R5
      Controller: Dell H310 flashed with IT-mode firmware
      DATA: 8x3TB
    • captainwtf wrote:

      Looking at memory usage on the box, it shows about 52% used, and that number is not going down. It doesn't appear to be writing anything to disk...
      Really vague post... when you say "memory", are you referring to RAM or to storage?
      Check the storage usage in the ZFS plugin or in the filesystems section.
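
      From the command line, the equivalent check would be something like this (standard zfs/zpool commands, nothing OMV-specific):

      Source Code

      zpool list        # pool capacity: allocated and free space
      zfs list          # per-dataset used/available/referenced space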
    • Memory use is RAM, not filesystem use. Is there a reason you don't believe the files are being written?
    • To answer both of you.

      I am referring to RAM.

      At idle the system uses 1% of RAM. I move 16GB of data, it then sits at 52% of RAM and stays there. No change; it doesn't decrease.

      Now that I've rebooted, I know the files are being written, because if they were only held in RAM they would have disappeared on reboot.
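
      If you want to actually watch the writes landing on the disks during a copy, zpool iostat will show them (a quick sketch; replace "yourpoolname" with your pool's name):

      Source Code

      zpool iostat -v yourpoolname 2    # per-vdev read/write ops and bandwidth, refreshed every 2 seconds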
    • captainwtf wrote:

      At idle the system uses 1% of RAM. I move 16GB of data, it then sits at 52% of RAM and stays there.
      Linux uses RAM whenever it is available. You really don't have to worry unless it is at 100% and swapping.

      captainwtf wrote:

      I know the files are being written, because if they were only held in RAM they would have disappeared on reboot.
      Why would it hold files in RAM? Trust me, that isn't happening.
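
      To check the "at 100% and swapping" condition, a few standard tools are enough (nothing ZFS-specific here):

      Source Code

      free -h            # watch the "available" column, not "used"
      swapon --show      # active swap devices and how much is in use
      vmstat 2           # the si/so columns show pages swapped in/out per interval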
    • I mean, what other reason would there be for my machine to go from 1% memory used to 52% after moving 16GB of files, with that value never dropping back down?

      I was used to this with Storage Spaces: while writing, data would go to memory first and then be written to disk. I didn't think that was how ZFS worked.
    • ZFS really does like memory, though. I have heard recommendations tossed around for best practices, and the word on the street is something like 1GB of RAM per TB of data. Obviously you don't have to have this much, but having extra is great for its caching system. ZFS will also try to use as much as you let it (without depleting 100%, of course).

      edit: What you are describing is perfectly normal.

      edit2: RAM will free up as other things need it.

      edit3: Wow, I just can't collect my thoughts. It's not staying in memory; as others mentioned, it is being written to disk. The ARC is what is biting into your RAM, but trust ZFS, it helps your overall performance. Here's a quick piece about the ARC: zfsbuild.com/2010/04/15/explanation-of-arc-and-l2arc/
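
      To see how much of your RAM the ARC is actually holding, ZFS on Linux exposes its stats under /proc and usually ships a summary tool (tool names vary slightly between releases, so treat this as a sketch):

      Source Code

      awk '$1 == "size" {printf "ARC size: %.1f GiB\n", $3/2^30}' /proc/spl/kstat/zfs/arcstats
      arc_summary | head -n 25    # overview: current/target ARC size and hit rates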


    • Lazurixx wrote:

      The ARC is what is biting into your RAM, but trust ZFS, it helps your overall performance.

      Thanks for the info, much appreciated.

      I knew ZFS COULD be a RAM hog, but it's been so long since I've used anything running ZFS that I hadn't really had a chance to see it.
    • captainwtf wrote:

      Looking at memory usage on the box, it shows about 52% used, and that number is not going down. It doesn't appear to be writing anything to disk...
      Note that ZFS uses adaptive caching (the ARC), meaning it keeps frequently accessed files in memory. From a performance standpoint this is a good thing, but it may also give the appearance that memory utilization is "stuck". ZFS is designed to share RAM cooperatively and will release it if the kernel needs it. But if RAM is free and available, ZFS tends to use it.

      If you're doing large data copies or rsync operations, increased memory utilization is normal. In essence, ZFS is putting RAM to good use rather than letting it sit idle. In any case, as ryecoaaron has stated, if you're not hitting 100% and swapping, there's nothing to worry about.
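
      If the "stuck" number still bothers you, ZFS on Linux lets you cap the ARC with the zfs_arc_max module parameter. A sketch capping it at 8GiB (pick a value that suits your box; run as root):

      Source Code

      echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_max                # 8 GiB, takes effect immediately
      echo "options zfs zfs_arc_max=8589934592" >> /etc/modprobe.d/zfs.conf   # persists across reboots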

    • Before you get too far into setting up your pool, note that in the Linux world, you'll need the following.
      (This can be done in the GUI but it's easier on the command line.)

      Replace "yourpoolname" with whatever you named your pool.

      Source Code

      zfs set aclinherit=passthrough yourpoolname
      zfs set acltype=posixacl yourpoolname
      zfs set xattr=sa yourpoolname
      zfs set compression=lz4 yourpoolname

      While the last line is optional, the first three are highly recommended for ZFS on Linux.
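
      To confirm the properties took, and to see whether child datasets inherit them, a quick check:

      Source Code

      zfs get aclinherit,acltype,xattr,compression yourpoolname
      zfs get -r acltype,xattr yourpoolname    # -r walks child datasets; the SOURCE column shows local vs. inherited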

    • captainwtf wrote:

      And what do these do, exactly?
      I know what the compression one does, at least.


      I already have the posixacl attribute set.
      With posixacl you have basic Linux permissions, or at least close to them. xattr is for extended file attributes. aclinherit=passthrough makes a permission-inheritance change that's not common on Solaris (where ZFS was born). The combination of the three gives you a close approximation of Linux permissions and file attributes.

      Also, it's important to note that if these properties are not set before you copy data onto your pool, ZFS will not change what has already been written; you'll have mixed attributes. (That's why I mentioned it while you're testing your pool.)
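
      One way to see this "new writes only" behavior is with compression, since compressratio reflects only blocks written after compression was enabled (there's no equally simple check for the ACL/xattr format of existing files):

      Source Code

      zfs get compression,compressratio yourpoolname    # ratio stays near 1.00x for data copied before compression was set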

      When you install server apps like Plex, they create system users that expect to read files with Linux-style permissions. If the assigned permissions and attributes don't match the Linux norms, well, odd things can happen.

      Your call.



    • Lazurixx wrote:

      I have heard recommendations tossed around for best practices, and the word on the street is something like 1GB of RAM per TB of data
      'The word on the street' is about ZFS used with deduplication. If you don't use dedup, the number above is complete BS, and that's also what I would call it when it's applied to ZFS in general. Same BS as 'ZFS can only be used with ECC memory', BTW...

      On home boxes, where you don't have 15 VMs or 200 users hammering the same storage pool, ZFS works almost as fast with almost no RAM as it does with '1GB per TB of data', since the positive side effects of caching barely apply anyway. You push a few GB onto your ZFS NAS box and, as long as there's enough RAM, it stays in memory for further accesses that may happen days or weeks later.
    • captainwtf wrote:

      I knew ZFS COULD be a RAM hog
      Yes, there were issues back in 2009 on Solaris. They were fixed long ago, and if you do NOT use deduplication there's nothing you need to worry about.
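
      If you want to double-check that dedup is off on your pool (it is off by default), one line is enough:

      Source Code

      zfs get dedup yourpoolname    # should report "off" unless you deliberately enabled it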

      ZFS on Linux has one cosmetic problem: its RAM usage is displayed differently. If you have a box with 32GB of RAM and push 16GB of data onto it, the data remains in RAM regardless of whether you use ZFS, ext4, XFS, or any other filesystem. The difference is that with all the 'normal' filesystems this RAM usage is called 'filesystem buffers/caches' and is not added to 'used memory', while ZFS's own implementation does add it to 'used memory'. So in either case all the written data stays in RAM unless the RAM is needed for something else, but with ZFS it's displayed as '52% memory used' while other filesystems would show '3% memory used'.
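
      You can see the display difference for yourself by comparing free's "used" column with the ARC size from the kstats (ZFS on Linux; on an ext4/XFS box the same cached data would show up under "buff/cache" instead):

      Source Code

      free -h                                       # on ZFS, the ARC is counted under "used"
      grep "^size" /proc/spl/kstat/zfs/arcstats     # most of that "used" memory is this ARC size (in bytes)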