ZFS not writing to disk?

  • So I'm starting to move data to my OMV machine.


I've moved about 16GB of data initially. Looking at memory usage on the box, it shows about 52% of memory used, and that number is not going down. It doesn't appear to be writing it to disk...


    Ideas? Or what am I missing here?

Motherboard: SuperMicro X10SLL-F | i3-4170 @ 3.7GHz | 32GB ECC RAM
PSU: EVGA 500W
Case: Fractal Design Define R5
Controller: Dell H310 flashed with IT-mode firmware
DATA: 8x3TB

So, I think it might be writing to disk after all... I rebooted it, and the files are still there...


    What am I missing here?


• Official Post

    So I'm starting to move data to my OMV machine.


I've moved about 16GB of data initially. Looking at memory usage on the box, it shows about 52% of memory used, and that number is not going down. It doesn't appear to be writing it to disk...


    Ideas? Or what am I missing here?

Really vague post... when you use the word memory, is it RAM or storage you're referring to?
Check the storage usage in the ZFS plugin or in the filesystems section.

• Official Post

Memory use is RAM, not filesystem use. Is there a reason you don't believe the files are being written?

    omv 7.0.5-1 sandworm | 64 bit | 6.8 proxmox kernel

    plugins :: omvextrasorg 7.0 | kvm 7.0.13 | compose 7.1.4 | k8s 7.1.0-3 | cputemp 7.0.1 | mergerfs 7.0.4


    omv-extras.org plugins source code and issue tracker - github - changelogs


    Please try ctrl-shift-R and read this before posting a question.

    Please put your OMV system details in your signature.
    Please don't PM for support... Too many PMs!

  • To answer both of you.


    I am referring to RAM.


The system at idle uses 1% of RAM. I move 16GB of data, it is then using 52% of RAM, and then it just sits there. No change. Doesn't decrease.


Now that I've rebooted it, I know that the files are being written, because if it were holding them in RAM they would've disappeared on reboot.


• Official Post

The system at idle uses 1% of RAM. I move 16GB of data, it is then using 52% of RAM, and then it just sits there. No change. Doesn't decrease.

Linux uses RAM when it is available. You really don't have to worry unless it is at 100% and swapping.


I know that the files are being written, because if it were holding them in RAM they would've disappeared on reboot.

Why would it hold files in RAM? Trust me, that isn't happening.
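
If you want to check for yourself whether you're actually swapping, a quick sketch (assuming a Debian-based OMV install; these are all standard tools):

Code
free -h          # 'available' is the number that matters, not 'used'
swapon --show    # lists active swap devices and how much is in use
vmstat 1 5       # watch the si/so columns; non-zero values mean real swapping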


I mean, what other reason would my machine have to go from 1% memory used to 52% after moving 16GB of files, with that value never dropping back down?


I was used to this with Storage Spaces: while writing data, it would go to memory first and then be written to disk, but I didn't think that was how ZFS worked.


ZFS really does like memory, though. I have heard recommendations tossed around for best practices, and the word on the street is something like 1GB of RAM per TB of data. Obviously you don't have to have this much, but having extra is great for its caching system. Also, ZFS will try to use as much as you let it (without depleting 100%, of course).


edit: Also, what you are describing is perfectly normal.


edit2: RAM will free up as other things need it.


edit3: Wow, I just can't compile my thoughts. It's not staying in memory; like others mentioned, it is being written to disk. The ARC is what is biting into your RAM, but trust ZFS, it helps your overall performance. Here's a quick piece about ARC: https://www.zfsbuild.com/2010/…anation-of-arc-and-l2arc/
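
edit4: If you want to see how much of that used RAM is actually the ARC, ZFS on Linux exposes its stats under /proc. A quick sketch, assuming the zfs module is loaded (arc_summary, if installed, gives a friendlier report):

Code
# current ARC size and its ceiling, in bytes
awk '/^size/  {print "ARC size:", $3}' /proc/spl/kstat/zfs/arcstats
awk '/^c_max/ {print "ARC max: ", $3}' /proc/spl/kstat/zfs/arcstats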


Thanks for the info, much appreciated.


I knew ZFS COULD be a RAM hog, but it's been so long since I've actually used something running ZFS that I hadn't really had an opportunity to see it.


• Official Post

I've moved about 16GB of data initially. Looking at memory usage on the box, it shows about 52% of memory used, and that number is not going down. It doesn't appear to be writing it to disk...


    Ideas? Or what am I missing here?

Note that ZFS uses adaptive caching, meaning it keeps frequently accessed files in memory. From a performance standpoint, this is a good thing, but it may also give the appearance that memory utilization is "stuck". ZFS is also designed to cooperatively share RAM and will release it if the kernel needs it. But if RAM is free and available, ZFS tends to use it.


If you're doing large data copies or rsync operations, increased memory utilization is normal. In essence, ZFS is actually utilizing RAM resources in a useful way rather than letting them sit idle. In any case, as ryeccaron has stated, if you're not hitting 100% and swapping, there's nothing to worry about.
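
If the "stuck" appearance really bothers you, the ARC ceiling can also be capped with a module parameter. A minimal sketch, assuming ZFS on Linux; the 8GiB value is just an example:

Code
# /etc/modprobe.d/zfs.conf -- cap the ARC at 8 GiB (value in bytes)
options zfs zfs_arc_max=8589934592

# or apply immediately on a running system:
echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_max

(You may also need to run update-initramfs -u for the modprobe setting to take effect at boot.) That said, on a box like yours there's usually no reason to bother.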

• Official Post

    Before you get too far into setting up your pool, note that in the Linux world, you'll need the following.
    (This can be done in the GUI but it's easier on the command line.)


    Replace "yourpoolname" with whatever you named your pool.


Code
zfs set aclinherit=passthrough yourpoolname  # pass ACLs through to newly created files/dirs
zfs set acltype=posixacl yourpoolname        # use POSIX ACLs (Linux-style permissions)
zfs set xattr=sa yourpoolname                # store extended attributes in the inode, not hidden dirs
zfs set compression=lz4 yourpoolname         # cheap, fast compression (optional)


    While the last line is optional, the first 3 are highly recommended for ZFS on Linux.
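
Once set, a quick way to confirm they took effect (same placeholder pool name as above):

Code
# show the four properties and where each value comes from (local vs. default)
zfs get aclinherit,acltype,xattr,compression yourpoolname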

And what do these do exactly?
I know what the compression will do, at least.



I already have the posixacl attribute set.



• Official Post

And what do these do exactly?
I know what the compression will do, at least.



I already have the posixacl attribute set.

You have basic Linux permissions with posixacl. (At least close.) xattr is for extended file attributes. aclinherit=passthrough ("inherit") makes a permission-inheritance change that's not common on Solaris (where ZFS was born). The combination of the three gives you a close approximation of Linux permissions and file attributes.


    Also, it's important to note that if some of these variables are not set before you copy data onto your pool, ZFS will not change what has already been written. You'll have mixed attributes. (That's why I mentioned it while you're testing your pool.)
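
If that matters for data you've already copied, the usual remedy is to set the properties first and then re-copy. A hypothetical sketch; the dataset and mount-point names are made up for illustration:

Code
# create a fresh dataset that inherits the corrected properties
zfs create yourpoolname/media-new
# re-copy so everything is written with the new attribute format
rsync -a /yourpoolname/media/ /yourpoolname/media-new/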


When you install server apps like Plex and others, these apps create system users that expect to read files in Linux format. If the assigned permissions and attributes do not equate to Linux norms, well, odd things can happen.


    Your call.

I have heard recommendations tossed around for best practices, and the word on the street is something like 1GB of RAM per TB of data

'The word on the street' is about ZFS used with deduplication. If you don't use dedup, the number above is complete BS, and that's also what I would call it when applied to ZFS in general. Same BS as 'ZFS can only be used with ECC memory', BTW...


On home boxes, where you don't have 15 VMs accessing the same storage pool or 200 users hitting the server, ZFS works almost as fast with almost no RAM as it does with '1GB per TB of data', since the positive side effects of caching don't apply anyway. You push a few GB onto your ZFS NAS box, and as long as there's enough RAM, the data remains in memory for further accesses that happen days or weeks later.

  • I knew ZFS COULD be a RAM hog

Yes, there were issues back in 2009 on Solaris. Those have long been fixed, and if you do NOT use deduplication, there's nothing you need to worry about.


ZFS on Linux has one cosmetic problem: its RAM usage is displayed differently. If you have a box with 32GB of RAM and push 16GB of data onto it, the data remains in RAM regardless of whether you use ZFS, ext4, XFS, or whatever as the filesystem. The difference is that with all the 'normal' filesystems this RAM usage is called 'filesystem buffers/caches' and is not added to 'used memory', while ZFS's own implementation adds it to 'used memory'. So in any case you end up with all the written data remaining in RAM unless the RAM is needed for something else, but with ZFS it's displayed as '52% memory used' while with other filesystems you get '3% memory used'.
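
You can see this accounting difference yourself. A rough sketch, assuming ZFS on Linux with the kstats in the usual place: subtract the ARC from 'used' and you get roughly the number a 'normal' filesystem would have shown.

Code
# ARC size in bytes, from the ZFS kstats
arc=$(awk '/^size/ {print $3}' /proc/spl/kstat/zfs/arcstats)
# compare 'used' with and without the ARC counted
free -b | awk -v arc="$arc" '/^Mem:/ {printf "used: %.0f MiB, minus ARC: %.0f MiB\n", $3/1048576, ($3-arc)/1048576}'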
