OS on Flash Memory (USB Sticks), Logs on HDD?

    • OMV 4.x
    • OS on Flash Memory (USB Sticks), Logs on HDD?

      After some thought, and after experimenting with my pfSense rig by running it from a thumb drive instead of an HDD, I am thinking of doing the same to my OMV NAS when I upgrade from 3.x to 4.x.

      I am planning to:
      1. Do a fresh install onto a 16GB stick that I have lying around
      2. Install omv-extras and then the flashmemory plugin
      3. Set cron jobs to write logs to an HDD (which will also hold files for other functions of the NAS, like VMs)
      4. Use the OMV backup plugin to write backups of the OMV OS to that HDD too

      I am wondering the following:
      1. Is this a viable setup? (As in, does this even make sense?)
      2. Should I consider using another 16GB stick too? (So that I will be using 2 sticks instead of just one; I am even wondering about RAIDing them...)
      3. How do I restore the OMV OS and logs in case things happen? (I saw the backup plugin in OMV 3 too, but am unsure how to restore from its backup)
      4. What guides should I consult to help me understand what I can do better?
      5. How much storage space do I need? (Need to know as I need to figure out if I really have to buy a 32GB stick for this instead of just reusing the 16GB sticks I have)

      Thanks!
    • darkarn wrote:

      5. How much storage space do I need?
      For your OS?
      16GB should be enough. It depends on your use case, of course. For example, if you use Plex and place its database on the OS drive, you might run out of space with a large library; this can be avoided by placing the database on another drive.
      Docker containers can also need a considerable amount of space.
      Anyway, 16GB should be a good starting point.


      darkarn wrote:

      3. Set cron jobs to write logs to a HDD
      Why do you want to do this?
      Odroid HC2 - armbian - Seagate ST4000DM004 - OMV4.x
      Asrock Q1900DC-ITX - 16GB - 2x Seagate ST3000VN000 - Intenso SSD 120GB - OMV4.x
      :!: Backup - Solutions to common problems - OMV setup videos - OMV4 Documentation - user guide :!:
    • darkarn wrote:

      16GB stick that I have lying around

      Most important task before putting anything on flash storage: test the device first. Search for f3 h2testw site:forum.openmediavault.org for details.

      The larger the pendrive, the later it will wear out. But if you use flashmemory plugin this shouldn't be an issue. OMV without any plugins that want to write to the rootfs is fine with well below 4 GB.
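      The fill-and-verify principle behind f3/H2testw can be sketched with standard tools. This is only a sketch: the default TARGET below is a stand-in directory so it runs anywhere, while a real test would point at the stick's mount point and write enough files to fill the whole device.

```shell
#!/bin/sh
# Sketch of the fill-and-verify idea behind f3/H2testw: write known data,
# read it back, compare checksums. TARGET defaults to a stand-in directory;
# on real hardware, point it at the mounted stick under test.
TARGET="${TARGET:-${TMPDIR:-/tmp}/flash-test}"
mkdir -p "$TARGET"

# Fill with pseudo-random data and record the checksum at write time.
dd if=/dev/urandom of="$TARGET/fill.bin" bs=1M count=8 2>/dev/null
WRITE_SUM=$(md5sum "$TARGET/fill.bin" | awk '{print $1}')

# On real hardware you would unmount/remount (or drop caches) here so the
# data is read from the device instead of the page cache.
sync

READ_SUM=$(md5sum "$TARGET/fill.bin" | awk '{print $1}')
if [ "$WRITE_SUM" = "$READ_SUM" ]; then
    echo "OK: data reads back intact"
else
    echo "FAIL: device returned different data (fake capacity or failing flash)"
fi
```

      The real tools automate exactly this: f3write fills the device with test files and f3read verifies them afterwards.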

      I personally run all small servers from pendrives or, better, SD cards (on the latter I can TRIM from time to time, which is really great for various reasons) and do some 'backups' from time to time by powering the device down and then cloning the boot media. In my case I clone not to another medium but to my backup server, which uses btrfs with snapshots: at the filesystem layer I always create a new, large image, but thanks to transparent filesystem compression and snapshots I effectively get compressed, incremental backups. TL;DR: use another pendrive and clone the first one from time to time; this way you can also easily test upgrades.
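      On a backup target without btrfs, the clone-the-boot-media approach can be sketched with dd and gzip. In this sketch a plain file stands in for the boot device (which would really be something like /dev/sdX, unmounted) so the commands can run anywhere:

```shell
#!/bin/sh
# Sketch of offline cloning with on-the-fly compression. On real hardware
# SRC would be the powered-down/unmounted boot device (e.g. /dev/sdX);
# a plain file stands in here so the sketch is runnable as-is.
SRC="${SRC:-${TMPDIR:-/tmp}/fake-stick.img}"
DEST="${DEST:-${TMPDIR:-/tmp}/stick-backup.img.gz}"

# Create a small stand-in "device" if it does not exist yet.
[ -f "$SRC" ] || dd if=/dev/urandom of="$SRC" bs=1M count=4 2>/dev/null

# Read the whole medium and compress on the fly.
dd if="$SRC" bs=1M 2>/dev/null | gzip -c > "$DEST"

# Verify: decompress the image and compare checksums against the source.
SRC_SUM=$(md5sum < "$SRC" | awk '{print $1}')
IMG_SUM=$(gzip -dc "$DEST" | md5sum | awk '{print $1}')
if [ "$SRC_SUM" = "$IMG_SUM" ]; then
    echo "clone verified"
else
    echo "verification failed"
fi
```

      Restoring is the reverse direction: gzip -dc stick-backup.img.gz | dd of=/dev/sdX bs=1M, with the target stick unmounted.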
    • macom wrote:

      16GB should be a good starting point.


      macom wrote:

      Why do you want to do this?

      Yes, this pendrive is mainly for OMV OS and plugins. For Plex database (if I am ever using it), Docker and VMs, they will be on a separate HDD

      I want to have logs on the HDD just in case the pendrive breaks down and I need to read the logs to show me what else has gone wrong

      tkaiser wrote:

      TL;DR: use another pendrive and clone the first from time to time, this way you can also easily test upgrades

      Good idea, thanks! As for imaging the pendrive, is there a way to do it as the OMV NAS operates? Or do I need to schedule downtime and do it using Clonezilla (or similar software)?
    • darkarn wrote:

      is there a way to do it as the OMV NAS operates?

      In my opinion, no. Disclaimer: part of my day job is helping customers recover from 'live cloning gone wrong'. It works in more than 99% of cases, but I would never do it myself (from analyzing what can go wrong -- open files and the like), so I only ever see the remaining 1% that fails horribly (mostly in ways that are not immediately visible).
    • Not sure if it is meant to be used this way. I just used the backup plugin and selected fsarchiver, and it seems to have worked. But it might be that I just have not run into its limitations yet.

      fsarchiver.org/live-backup/
    • macom wrote:

      Seems it worked

      Yep, 'seems' it worked. If problems occur you realize them way too late.

      It's really that simple:
      • open files are a problem (you can't save them in a consistent state)
      • the time such a 'backup' needs is a problem (if it takes more than 0.1 seconds, inconsistencies might be the result)
      • applications that are not aware that such a 'live backup' is about to start have no chance to save their data/environment in a consistent state
      For such a 'live backup' you need to be able to freeze the filesystem (impossible without snapshots), and you also need a mechanism to tell applications to save their data in a consistent state before the live cloning happens.

      To get the idea, just do a web search for Microsoft's Volume Shadow Copy Service. Something like this is needed, and especially applications that cooperate with the OS mechanism to bring their data into a consistent state. Otherwise results are unpredictable, and you usually realize that something went wrong way too late.

      There's a reason filesystems invented in this century provide features like snapshots: a snapshot is essential to 'freeze' the filesystem in a consistent state, which is the most basic requirement for a 'live' copy/backup/clone. And even then the problem remains of getting applications to reliably save their data (open files -- think of databases).

      Simple exercise: check open files manually:

      Source Code

      lsof / | awk 'NR==1 || $4~/[0-9][uw]/' | grep -v "^COMMAND"

      If important files are among those listed, expect a live 'backup' to fail.
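      The open-files problem described above can be demonstrated with a toy 'database' (all file names here are made up for the demo): a naive copy taken mid-transaction captures a torn, inconsistent state even though the source ends up fine.

```shell
#!/bin/sh
# Toy demonstration of why open files break naive live backups: a copy
# taken while a writer is mid-transaction captures an inconsistent state.
DB="${TMPDIR:-/tmp}/demo-db.txt"
COPY="${TMPDIR:-/tmp}/demo-db-copy.txt"

# A 'transaction' is two lines that must appear together to be valid.
echo "BEGIN balance=100" > "$DB"

# ...a naive 'live backup' runs right here, between BEGIN and COMMIT...
cp "$DB" "$COPY"

echo "COMMIT balance=50" >> "$DB"

# The source is consistent, but the copy has a BEGIN without its COMMIT.
if grep -q COMMIT "$COPY"; then
    echo "copy is consistent"
else
    echo "copy is torn: transaction without COMMIT"
fi
```

      Real databases interleave writes far more finely than this, which is why consistent backups need filesystem snapshots plus application cooperation (quiescing or dumping), not just a fast copy.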
    • tkaiser wrote:

      For such a 'live backup' you need to be able to freeze the filesystem (impossible without snapshots) and you also need a mechanism how to tell applications to save their data in consistent state prior to the live cloning happening.

      Hmm, good point; I missed the part about applications being open during the live backup. A simple OMV setup may be OK with live backups, while a more complicated one with many plugins running at once will be tougher to manage, right?

      I guess that unless there is a way to run OMV on ZFS (the way pfSense lets you install onto a ZFS filesystem during setup), it is still better to simply schedule downtime and do things the "hard" way (i.e. manual backups, plus testing of those backups)?