What features will come with OMV 7 (what do people hope for/expect)?
-
- OMV 7.x
- ojtindrum
-
-
Though I currently use a Docker container stack with tinyproxy and an OpenVPN client, I would love to see an onboard OMV plugin combining tinyproxy (or the like) with an OpenVPN (and WireGuard) option.
Something I could configure on the fly in the OMV GUI, so that any device on the LAN can just fill in the proxy info. Some things can't use a VPN directly but are fine via a proxy.
-
Please, sweet baby Jesus, add SSD caching somehow.
Like bcache, or something similar to what other NAS OSes do: an SSD or set of SSDs acts as the write cache, and the data then gets transferred (moved) to the HDDs either on the fly or at some designated time later. Preferably on the fly, though. Make the SSD size the only limitation, where:
OPTION1:
The PC writes to the SSDs, and at the same time the SSDs start transferring the data to the spinning HDDs, so as not to run out of SSD space (at least for a while), since some data is being dumped to the HDDs on the fly.
And considering the price per SSD TB is currently only around $50-$60, having 2x 1TB SSDs is within most people's budget.
Hypothetically, they could write around 500MB/s each, so around 1000MB/s combined, which approaches 10Gb Ethernet speeds. (I know there would be overhead, and if the cache is also flushing on the fly, 1000MB/s is unreasonable. But for argument's sake, say you get 1/2 to 3/4 of that: you would still be looking at 500-750MB/s, which sure beats 1GbE and the single-HDD speeds of around 130MB/s maximum that I usually see on OMV.)
And if you could add a 3rd or 4th SSD to the cache (bringing the price up to around $200 for those of you playing along with 4x 1TB SSDs), I don't see why it would be unreasonable to reach 10Gb Ethernet speeds, even with the overhead that comes with RAID, combined drive writes, offloading data, etc.
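For what it's worth, the arithmetic above holds up as a back-of-the-envelope check (remembering that network speeds are quoted in bits and drive speeds in bytes):

```shell
# 10GbE line rate in MB/s: 10,000 Mbit/s over 8 bits per byte
ten_gbe=$((10000 / 8))     # = 1250 MB/s
# Two SATA SSDs writing in parallel at roughly 500 MB/s each
two_ssd=$((2 * 500))       # = 1000 MB/s, about 80% of 10GbE line rate
# Half to three-quarters of that after overhead, as argued above
low=$((two_ssd / 2))       # = 500 MB/s
high=$((two_ssd * 3 / 4))  # = 750 MB/s
echo "10GbE=${ten_gbe}MB/s  2xSSD=${two_ssd}MB/s  realistic=${low}-${high}MB/s"
```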
OR
OPTION2:
In a similar scenario, just make it so the SSDs get written to first: say, FROM your PC's SSD TO the OMV SSDs (2-4TB, depending on the examples above). THEN, AFTER the write is completed, the OMV SSDs offload the data to your OMV spinning HDDs (which, I am assuming, would have a higher capacity than your SSD cache, lol).
I mean, scale this however you want: 1-4TB of SSDs for data hoarders like me with tens of TBs available on HDDs, or scale it down to a couple of 256GB SSDs for less than $40 for people who only have maybe 8-16TB of HDD storage total.
Either way,
Someone with 8-16TB of HDD storage likely won't be writing 500GB+ files at once,
just as data hoarders like myself likely wouldn't be writing 4TB+ files at once (as in the 4TB SSD cache example).
The math checks out in my head, lol.
Of course, this is all just hypothetical at this point, but it sure would be a nice feature, one that would put OMV 7 on or above the level of the other NAS OSes.
I'm not too worried about a read cache setup, but if that could be implemented somehow, that would be cool too.
My thought is just to have the SSD cache keep recent and commonly accessed files ready to access.
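For what it's worth, something close to OPTION 1 already exists at the block level via bcache in writeback mode; a rough CLI sketch only, with made-up device names (these commands destroy data on the named devices):

```
# SSD as the cache device, HDD as the backing device (made-up names!)
make-bcache -C /dev/sdb          # format the SSD as a cache device
make-bcache -B /dev/sdc          # format the HDD as a backing device
# attach the cache set to the backing device (UUID from bcache-super-show)
echo <cache-set-uuid> > /sys/block/bcache0/bcache/attach
echo writeback > /sys/block/bcache0/bcache/cache_mode
# the combined device appears as /dev/bcache0, ready to format and mount
```

In writeback mode, writes land on the SSD and are flushed to the HDD in the background, which is essentially the "on the fly" behavior described above, just with no OMV GUI around it.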
ALSO,
Please make the RAID interface a little more robust and user-friendly/intuitive. I feel it lacks some options that could be in it, like setting the time for scrubs, configuring hot spares, or designating a repair/parity drive so that if you do lose a drive, you have one ready to swap into its place right away. I'm not a RAID expert by any means and have barely used the feature, but from what I have seen in other NAS OSes, the RAID section in OMV just seems a little weak. I could be completely wrong, though, and all those features might already be available, or available from the CLI. But I would love to avoid the CLI where possible.
Combine an upgraded RAID feature set with SSD Cache and you have a mean combination.
That is my two cents; no need to comment back with any negative words.
I just thought I could weigh in here.
Oh, and I tried my best to keep the units correct (sorry if I missed one):
i.e. MB = megabyte, Gb = gigabit, etc.
Thanks all.
-
-
Wait for bcachefs for that.
-
Wait for bcachefs for that.
Cool, I didn't even know that was in the pipeline. I checked it out on bcachefs.org; not too many details that I can understand at my low level, but it sounds good to me.
-
Please sweet baby Jesus Add SSD Cache some how.
Not sure if it's what you're looking for, but I've implemented a patched-together sort of cache for my media using mergerfs and a nightly scheduled script.
First I create a 'mediapool' using mergerfs across all of my HDDs.
I use the 'existing path, most free space' policy to ensure they get filled evenly.
Then I create a second pool, 'mediapoolcache', using the newly created mediapool and my cache SSD.
Importantly, I make sure to put the SSD first and use the 'first found' policy.
So basically, when I write anything to mediapoolcache, it is first written to the SSD. Any access to this media goes through the mediapoolcache reference.
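For reference, the two pools described above can be expressed as mergerfs fstab entries. This is a sketch only: the branch paths and mount points are made-up placeholders, and the key parts are the create policies (epmfs = existing path, most free space; ff = first found) plus listing the SSD branch first in the cache pool:

```
# Tier 1: all HDDs pooled, filled evenly (existing path, most free space)
/srv/disk1:/srv/disk2:/srv/disk3  /srv/mediapool  fuse.mergerfs  defaults,allow_other,category.create=epmfs  0 0
# Tier 2: SSD branch first, then the pool; "first found" sends new writes to the SSD
/srv/cache:/srv/mediapool  /srv/mediapoolcache  fuse.mergerfs  defaults,allow_other,category.create=ff  0 0
```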
Then, nightly, I run a (very basic) script that pauses the containers that access the data, then moves any files with a ctime of 7+ days from the cache to the mediapool directly.
Bash: sync_cache.sh
#!/bin/bash

# Define a timestamp function
ts() {
    date +"[%Y.%m.%d %H:%M:%S]"
}

echo "-------------------------------"
echo "$(ts) pausing Plex container..."
docker pause plex >/dev/null 2>&1
echo "$(ts) Plex container paused"
echo "-------------------------------"
echo "$(ts) pausing Jellyfin container..."
docker pause jellyfin >/dev/null 2>&1
echo "$(ts) Jellyfin container paused"
echo "-------------------------------"
echo "$(ts) Cache size before cleanup:"
df -BM /srv/cache/
echo "-------------------------------"
echo "$(ts) ELIGIBLE FILES:"
find /srv/cache/ -type f -ctime +7 -printf "%f\n"
echo "-------------------------------"
find /srv/cache/ -type f -ctime +7 -printf '%P\0' | rsync -Phav --log-file=/srv/appdata/logs/sync-cache/sync_cache.log --remove-source-files --files-from=- --from0 /srv/cache/ /srv/mediapool/
echo "-------------------------------"
echo "$(ts) Cache size after cleanup:"
df -BM /srv/cache/
echo "-------------------------------"
echo "$(ts) resuming Plex..."
docker unpause plex >/dev/null 2>&1
echo "$(ts) Plex resumed"
echo "-------------------------------"
echo "$(ts) Resuming Jellyfin..."
docker unpause jellyfin >/dev/null 2>&1
echo "$(ts) Jellyfin resumed"
echo "-------------------------------"
echo "$(ts) Sync complete!"
echo "-------------------------------"

This means that any newly downloaded media is available on the SSD for 7 days, then it's moved to the mediapool.
Because this all happens within the drives defined in mediapoolcache, the operating system and the applications that access the files are none the wiser...
-
-
The ability to back up your install partition easily, vs. rebooting to use Clonezilla or GParted.
I've been making nightly live dd images of my OMV install drive since beginning with OMV eight years ago. I test the ability to restore these images frequently and have had no problems.
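For anyone wanting to script something similar, here is a minimal sketch of the dd-image-and-verify pattern. It demos against a scratch file; the real source device (e.g. /dev/sda), the backup path, and the use of gzip are assumptions, not the poster's exact setup:

```shell
# Demo of the dd-image-and-verify pattern using a scratch file as a
# stand-in for the install drive (in production, src would be /dev/sda
# or wherever OMV is installed -- adjust to your system).
src=$(mktemp)
printf 'fake disk contents' > "$src"

# Image the "drive" to a compressed file; conv=sync,noerror keeps dd
# going past read errors on real hardware (bad blocks are padded with zeros)
backup="${src}.img.gz"
dd if="$src" bs=4M conv=sync,noerror status=none | gzip > "$backup"

# Verify the image is readable gzip before trusting it
gzip -t "$backup" && echo "image OK"

# Restore is the reverse pipe (on a real system: boot rescue media first,
# then gunzip -c backup.img.gz | dd of=/dev/sda bs=4M)
gunzip -c "$backup" | dd of="${src}.restored" bs=4M status=none
gunzip -c "$backup" > "${src}.check"
cmp "${src}.check" "${src}.restored" && echo "restore verified"
```

A nightly cron or OMV scheduled job pointed at a script like this, plus a periodic test restore, is all the original poster's setup amounts to.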
-
Not sure if it's what you're looking for, but I've implemented a patched-together sort of cache for my media using mergerfs and a nightly scheduled script.
This is cool; I just wish it were a little more streamlined. And I would still have many questions for you, like how you get the file paths to line up and all that nitty-gritty stuff. I can appreciate what you have done, though. Well done. I might try it when I get a bit of free time to play with one of my systems.
-
RAID 0 support for USB 3.0 NVMe SSDs!
(Yes, I'm willing to accept that the RAID array may eventually need to be rebuilt on the Pi 4B due to USB errors.)
-
-
RAID 0 support for USB 3.0 NVMe SSDs!
(Yes, I'm willing to accept that the RAID array may eventually need to be rebuilt on the Pi 4B due to USB errors.)
I wouldn't get your hopes up for any RAID support over USB. There's a good reason that was specifically disabled.
-
Hoping for web UI support for configuring the built-in nginx reverse proxy.
-
Hoping for web UI support for configuring the built-in nginx reverse proxy.
Use swag in Docker; it's easy to deploy.
Code
version: "3"
services:
  swag:
    image: linuxserver/swag
    container_name: swag
    networks:
      my-net:
    cap_add:
      - NET_ADMIN
    environment:
      - PUID=$PUID
      - PGID=$PGID
      - TZ=$TZ
      - URL=$URL
      - SUBDOMAINS=plex # put your subdomains here, comma separated
      - VALIDATION=http
      # - DNSPLUGIN=cloudflare # optional
      # - DUCKDNSTOKEN=<token> # optional
      - EMAIL=$email # optional
      - DHLEVEL=2048 # optional
      - ONLY_SUBDOMAINS=true # optional
      # - EXTRA_DOMAINS=<extradomains> # optional
      - STAGING=false # optional
      - MAXMINDDB_LICENSE_KEY=$MAXMINDDB_LICENSE_KEY
    volumes:
      - $ConfigPath/swag:/config
    ports:
      - 453:443
      - 90:80 # optional
    restart: unless-stopped
networks:
  my-net:
    external: true
-
-
RAID 0 support for USB 3.0 NVMe SSDs!
(Yes, I'm willing to accept that the RAID array may eventually need to be rebuilt on the Pi 4B due to USB errors.)
Supported? NEVER.
Possible? Yes. But if you don't know how to do it, no one here will give you answers.
It's not advised.
-
This is something I have been thinking about for a long time. I would love to have a better way to upgrade from one release to another. I do know that the latest version of OMV will always be based on the latest version of Debian, but please think of users who are on older versions of OMV and will not update because the upgrade path is complicated and may break things.
-
This is something I have been thinking about for a long time. I would love to have a better way to upgrade from one release to another. I do know that the latest version of OMV will always be based on the latest version of Debian, but please think of users who are on older versions of OMV and will not update because the upgrade path is complicated and may break things.
Upgrading OMV is generally fairly painless.
-
-
please think of users who are on older versions of OMV and will not update because the upgrade path is complicated and may break things.
OMV doesn't have enough developers to support more than one Debian release per OMV release. But the real problem is that Debian itself goes EOL, and people should not stay on unsupported versions of Debian.
-
2 Factor Authentication would be amazing
-
2 Factor Authentication would be amazing
This seems like a lot of effort for something that shouldn't be on the internet anyway. Since you are running swag, why not just put it in front of the web interface?
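As a sketch of that approach: swag picks up per-service nginx configs dropped into /config/nginx/proxy-confs/. A minimal one for the OMV web UI might look like this, where the hostname, port, and subdomain are made-up placeholders to adjust for your network:

```nginx
server {
    listen 443 ssl;
    server_name omv.*;

    # TLS and proxy settings shipped with the swag image
    include /config/nginx/ssl.conf;

    location / {
        # hypothetical upstream: point at your OMV host and web UI port
        proxy_pass http://omv.home.lan:80;
        include /config/nginx/proxy.conf;
    }
}
```

Combined with an auth layer in swag (basic auth or an SSO proxy), this gets you a second login in front of OMV without OMV itself needing 2FA support.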
-
-
Hi @ryecoaaron, I understand your point of view; I was just thinking it would be cool to have it included in OMV.
-
2 Factor Authentication would be amazing
Agreed.