Concerns about installing Flashmemory plugin

  • I'm building an OMV server using an SSD as the boot/system drive. The SSD is an SLC industrial drive, so it should be able to take lots of writes (although the exact number is not specified). So, to keep unnecessary writes to a minimum, I obviously want to install the flashmemory plugin.


    However, I'm having second thoughts. According to this post, @ryecoaaron says that writes to the boot drive are only performed at shutdown. This concerns me, as I do not intend to shut down/boot the server at regular intervals. Thus a power failure would cause all new log entries to be lost.


    I would really like to have an option for a regular write to the system drive. I certainly do not think it defeats the purpose of the plugin. There is still a huge difference between 'constantly writing to the system drive' and 'writing, e.g., once a day'.


    Is there any chance that such a feature could be added to the plugin?
    Alternatively, is there a work-around? E.g. some script that is executed once a day to write everything back to the system drive?

  • I agree with you. I would love to have the option to choose how often writes are done and a field in the plugin to assign it. It is the reason I don't use it with my main server. I image the system drive after major changes and just plan on replacing the system SSD every 3 or 4 years. I do use the flash memory plugin with my ARM device.


    Plus 1....

  • Agreed. But unless the plugin developer is about to make that change soon, I'm going for the work-around.


    Since I'm new to OMV, could you provide instructions on how to add the cron job and the syntax/command? Can it be done directly from the WebGUI?


    Thank you.

    • Official Post

    Agreed. But unless the plugin developer is about to make that change soon, I'm going for the work-around.


    Nope, not adding it, because a scheduled task is easy. It seems like the majority of people are using the plugin to extend flash life and aren't worried about logs (that includes me). I also use battery backup, so pretty much the only way I would lose any files is hardware failure.


    As for folder2ram -syncall, it isn't the only option. I doubt you need every file it syncs. So, you could just add a scheduled task to rsync particular log files/directories to a data drive. This would fix two issues... extra writes to flash (even if your SLC SSD can handle it) and it keeps you from losing the valuable log files (even in case of flash media failure, which no one seems to think about).
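
    For example, a scheduled job along these lines would do it - just a sketch, assuming a data drive mounted at /srv/dev-disk-by-label-data with a log-backup folder already created on it (adjust the paths and the list of logs to your setup):

        # Copy a couple of selected logs to the data drive; run once a day.
        # Paths are examples only - pick the log files you actually care about.
        rsync -a /var/log/syslog* /var/log/daemon.log /srv/dev-disk-by-label-data/log-backup/

    A command like that can be pasted into a scheduled job in the WebGUI and set to run daily, so yes, it can be done directly from the WebGUI.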


  • Thank you for your comments. I will take a look at the cron / scheduler and see where that gets me.


    Seems like the majority of people are using the plugin to extend flash life and aren't worried about logs (includes me)


    Well... Perhaps a better alternative to the Flashmemory plugin would be a simple option to disable logging?


    As for folder2ram -syncall, it isn't the only option


    This option is fine and simple enough for me to try for now. I don't intend to run it more than once a day (I'm not THAT concerned about log files either, but they are nice to have). This is a huge improvement compared to doing nothing, and it brings the writes down to a level that is easily acceptable for any SSD.
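
    For the record, here is a minimal sketch of what I plan to set up for the daily run - it assumes the plugin installs the binary at /usr/sbin/folder2ram (check with 'which folder2ram'), and the cron file name is one I made up:

        # /etc/cron.d/folder2ram-daily-sync (hypothetical file name)
        # Flush the tmpfs folders back to the system drive every night at 03:00.
        0 3 * * * root /usr/sbin/folder2ram -syncall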

    • Official Post

    Well... Perhaps a better alternative to the Flashmemory plugin would be a simple option to disable logging?


    It has been asked for before. It is difficult to do, since a lot of logging is enabled/disabled by the individual packages (some of which OMV doesn't even control). You can disable statistics in the System -> Monitoring tab. This will reduce a good amount of writes.


    • Official Post

    What about moving the default log location of your important logs? Is this possible for most logs? Move them off the SSD to a rotational drive.


    You could probably use a symlink or add an fstab entry to move the specific log directory to a different drive.
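
    For example, something like this fstab bind mount should work - just a sketch, assuming a data drive mounted at /srv/dev-disk-by-label-data with a var-log directory on it that you have pre-filled with the current contents of /var/log:

        # Hypothetical /etc/fstab entry: bind-mount a directory on the data drive over /var/log
        /srv/dev-disk-by-label-data/var-log  /var/log  none  bind  0  0

    A bind mount looks like a normal directory to the logging tools, so log rotation keeps working, which a per-file symlink might not guarantee.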


  • Please note, the plugin was written for installing OMV on a USB flash drive or SD card; those have a single flash chip, or at most two.
    So even if they try hard (most controllers nowadays do wear leveling), they will still fail pretty soon under heavy write activity.


    SSDs (even normal cheap ones) have plenty of flash chips, so their wear-leveling has enough to work with.
    First/second/third-gen ones had less endurance, so it was a factor; nowadays... endurance is high enough to be irrelevant unless you are using them as a fast cache for ZFS or something.


    Industrial-grade ones even more so. SLC cells are supposed to last 10 times as long as consumer-grade MLC chips, if not more, and the average consumer SSD is not going to die from writes in less than 10 years or so (much sooner if used as a cache, of course).


    If you are worried about the writes, just go and disable the option in System --> Monitoring, as ryecoaaron said.


    It will disable the pretty usage graphs of CPU and such (which most people don't know about, need, or use); that is basically the only source of writes that should worry you. It was doing something like a few GB per day just to update fixed-size databases (basically rewriting the same files with new data). Nothing else in OMV (or Debian) combined gets anywhere near that GB-per-day of writes.


    Then don't install the Flashmemory plugin. A few logs, the package cache and the other little nuisances left are not going to kill an SSD in a million years.


    Quote

    What about moving the default log location of your important logs? Is this possible for most logs? Move them off the SSD to a rotational drive.

    It's on the list, yes. As long as it is a folder, it is doable; for individual log files... not so much, since many logfiles are gzipped every now and then and I think symlinks would break that.
    For now, the little free time I have has been spent getting systemd to like my script, since I know little about it, and OMV 3.0 is (finally) using systemd.


    I'd also like to promote another guy who basically rewrote folder2ram (the thing doing the legwork in this plugin) in Perl; his version already has all the features I wanted to add. If you want to try it out and give him some cookies... https://github.com/Reed97123/dir2ram

  • It will disable the pretty usage graphs of CPU and such (which most people don't know about, need, or use); that is basically the only source of writes that should worry you


    I do care about statistics, and those pretty graphs are actually the ones I don't want to lose in case of a power failure.


    Another question that comes to mind: I'm a total Linux newbie, but as far as I understand, folder2ram uses tmpfs, which has a fixed size. So, how big is this 'ram-drive' that is created, and what happens if the log files (and whatever else is stored there) grow bigger than that?

    • Official Post

    With tmpfs, when the memory allocated to it is full, it writes to disk (the swap file). If you fill the swap file, you might crash.


    • Official Post

    Is there a way to check how much of the allocated memory is used?


    Debian defaults to 50% of system memory. Use df -h to double check.
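
    For example, to see just the tmpfs mounts and how much of that ceiling is actually in use:

        # List tmpfs mounts with their size, usage and mount point
        df -h -t tmpfs

    The folder2ram mounts should show up there under their mount points (e.g. /var/log).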


  • Umm, tmpfs isn't your average Windows ramdisk.
    Tmpfs can grow until it reaches 50% of total RAM, but it occupies only the space actually needed by the files you put in it; it does not waste space like most ramdisks.


    Anyway, I'll see if I can get more useful things into the plugin UI, like a table with the space occupied and an auto-sync option.

  • ...SSDs (even normal cheap ones) have plenty of flash chips, so their wear-leveling has enough to work with. First/second/third-gen ones had less endurance, so it was a factor; nowadays... endurance is high enough to be irrelevant unless you are using them as a fast cache for ZFS or something...


    I purchase a lot of SSDs for various projects and try to keep up on developments. As Bobafetthotmail says, the technology has significantly improved. Most recently manufactured SSDs can write gigabytes worth of data a day for years, maybe decades, before they fail from excessive wear. I recently read about a test of 6 different (now 18-month-old) drives that were forced to write constantly till they all failed. The first went at 700 TB and the last at 2.5 PB.


    I know all OMV installs are different, but does anyone have a sense of just how much writing goes on? If it turns out that it's in the megabytes, or even a gigabyte, a day, and I use a drive with a lifespan rated in the hundreds of terabytes, then I'm thinking I won't need the flash plugin for it.
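
    In the meantime, here's a rough way to check on a given box (assuming the system drive is /dev/sda; adjust the device name): read the sectors-written counter from /proc/diskstats and compare two readings taken a day apart.

        # Field 3 is the device name, field 10 is sectors written (512-byte units) since boot.
        awk '$3 == "sda" { printf "%.0f MiB written since boot\n", $10 * 512 / 1048576 }' /proc/diskstats

    Run it twice, 24 hours apart, and the difference is roughly the daily write volume.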


    I DEFINITELY use it for the box I'm running off an SD card though! (Thanks ryecoaaron for that!)


    I agree with you. I would love to have the option to choose how often writes are done and a field in the plugin to assign it. It is the reason I don't use it with my main server. I image the system drive after major changes and just plan on replacing the system SSD every 3 or 4 years. I do use the flash memory plugin with my ARM device. Plus 1....


    So this thread got me wondering: what if, instead of planning to replace the drive or engaging in workarounds to reduce writes, you used a larger-capacity drive? One with more "room" to level the writes onto? Depending on where I shop, a 120GB drive is usually only 10-20% more than a 60GB drive, and 240GB is only a little more than 120GB. I doubt doubling capacity will double lifespan, but it should extend it quite a bit.

  • The thread where we discussed things before making the plugin is here: [Tutorial][Experimental][Third-party Plugin available]Reducing OMV's disk writes, also to install it on USB flash.
    Someone there also explains how to log the amount of writes in the first 2-3 pages.


    Quote

    So this thread got me wondering: what if, instead of planning to replace the drive or engaging in workarounds to reduce writes, you used a larger-capacity drive?

    Yes, that would work fine for most mid-to-high-end rigs. As I said, the main reason for the plugin was allowing OMV to run without wasting a SATA port and/or inside low-end devices where you have 1-2 SATA ports and a few USB ports/SD cards in total. (I'm not buying an SSD for my NSA325v2, nor is ryecoaaron buying one for his raspi/bananapi/whatever other board.)


    On most decent x86 rigs, you can probably use one of the USB 3.0 ports with a USB 3.0-to-whatever adapter (SATA, mSATA or NGFF) so you don't waste a SATA port on the SSD you keep the OS on, and do without that plugin altogether.


    Quote

    I doubt doubling capacity will double lifespan, but it should extend it quite a bit.

    Well, unless something is seriously wrong in the controller, lifespan depends directly on capacity.


    A drive twice as big should theoretically last twice as long (if we are talking only about failures related to flash writes, of course).


    Still, look up benchmarks, they aren't hard to find; last time I looked, SSDs were already lasting 50 TB or so.
