To what extent is more RAM useful?

  • Greetings!


    Looking to build out a new OMV 6 system and I was wondering to what extent high capacity RAM will help my system, if any.


    NAS usecase:

    -- Primarily used for Plex media. Large block, sequential reads; writing 10GB-150GB per day

    -- Secondarily, this system will provide some backup storage for Proxmox VMs and a couple of linux computers

    -- I will do some limited photo storage/sharing with this as well


    Build (don't laugh):

    -- Supermicro X10 motherboard

    -- Xeon E5-2660 v3 10 core

    -- 256GB RAM DDR-2400

    -- OS will go on one or two NVMe drives, haven't decided just yet; I generally install Linux to mdraid raid1 arrays when I have the drives, and I do in this instance

    -- 9300-16i HBA

    -- 20TB Exos x20 (0007D) HDDs; will start with 5 devices in a raid 6 (3D+2P) and grow one drive at a time up to 15 max (13D+2P); I am, generally, comfortable with growing mdraid arrays
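    For reference, growing an mdraid raid 6 one disk at a time is a two-step add/grow. A minimal sketch, assuming /dev/md0 is the array and /dev/sdf is the new disk (device names are illustrative, not from the post):

```shell
# Assumed names: /dev/md0 is the raid6 array, /dev/sdf is the new 20TB disk.
# Add the new disk as a spare, then reshape the array onto it (5 -> 6 devices).
mdadm --add /dev/md0 /dev/sdf
mdadm --grow /dev/md0 --raid-devices=6 --backup-file=/root/md0-grow.bak

# The reshape runs in the background; watch progress with:
cat /proc/mdstat

# Once the reshape finishes, grow the filesystem on top (ext4 example):
resize2fs /dev/md0
```

    The array stays online and serving I/O during the reshape, though throughput drops while it runs.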


    I know it's overkill but it's just what I have available. This system, minus the 20TB HDDs, is what I'm using for my current Proxmox server, but I'm replacing that with an EPYC system. So the way-too-much RAM is just kind of coming along with it. I figure it's better than it sitting on a shelf somewhere. Will it help at all for anything relating to mdraid or other OMV processing?


    Also, knowing that this is the hardware I have, I'm open to any other thoughts you may have. Please feel free to go off-topic (topic = RAM) if you have something I may not have thought about. I know rebuild/grow times will be long, but I don't care as long as my large block sequential workloads still work fine during these processes. Thanks!


    Ken

  • ryecoaaron

    It is not really overkill, I have similar system[s]. I mean if it is a NAS, the bigger the storage the better. As far as RAM, I am betting you could do anything you want with 32G [some may say 16]. As far as needing 256 (I have 384), I will probably never tap into that, though it is highly useful if you're running VMs, and I mean a lot of them.


    Go Big or Go Home, I am tired of the small shit.

    It is not really overkill, I have similar system[s]. I mean if it is a NAS, the bigger the storage the better. As far as RAM, I am betting you could do anything you want with 32G [some may say 16]. As far as needing 256 (I have 384), I will probably never tap into that, though it is highly useful if you're running VMs, and I mean a lot of them.


    Go Big or Go Home, I am tired of the small shit.

    I thought the same, but the cost of running it here in the UK would hit my bills. I've tried to go as small as I can. I'm running 64GB and only using 10GB; I might get rid of some more.

    Dell 3050 Micro, i5-6500T, 8GB Ram

    Plugins - compose, cputemp, omv-extras, sharerootfs.

    Drives - 512gb SSD Boot, 1tb nvme Data, 16TB (8tbx 2 merg) Media,

    Docker - dozzle, netdata, nginx-proxy-manager, plex, prowlarr, qbittorrentvpn, radarr, sonarr, watchtower.

    • Official post

    If the RAM is already in the box, since it is essentially free, I'd leave it in the box. A commercial server platform is not going to be power efficient in the majority of cases, so removing extra RAM wouldn't help very much.

    On the other hand, you might consider disabling swap. With 256GB of RAM, I can't imagine a home scenario where you would need swap.
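    Disabling swap persistently on Debian/OMV is usually two steps. A sketch, assuming swap is configured via /etc/fstab rather than a zram device or systemd swap unit:

```shell
# Turn off all active swap immediately.
swapoff -a

# Comment out any swap entries so they don't come back on reboot
# (keeps a backup of fstab as /etc/fstab.bak first).
sed -i.bak '/\bswap\b/s/^/#/' /etc/fstab

# Verify nothing is swapped in anymore (no output = no active swap).
swapon --show
```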

  • Great comments, all around. Thanks! I hadn't thought of swap, to be honest. I'll certainly gather my resources to support that activity.


    And I get the power vs. location issue. I'm in the US so we don't have to worry as much about the cost of power. I know in the EU/Europe it's much more of a concern. My biggest concern over power costs is around air conditioning. I'm in Texas so the summer A/C use drives up the bill. My computers contribute very little, relatively speaking.

    Well, I'm working on talking myself out of software raid. I have a raid controller and expanders on hand, so I'm thinking of going with a hardware raid 6 that can be expanded on the controller, and then using btrfs single (btrfs for snapshots) in OMV for my filesystem. Hardware raid 6 will give me consistency checking/correction whereas mdraid will not.
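    For the snapshot side, btrfs single on top of one hardware-raid virtual disk keeps that part simple. A sketch, where /dev/sda and the mount points are assumed names:

```shell
# Assumed: /dev/sda is the hardware raid6 virtual disk, mounted at /srv/media.
# Single data profile (the controller handles redundancy), duplicated metadata.
mkfs.btrfs -d single -m dup /dev/sda
mount /dev/sda /srv/media
mkdir -p /srv/media/.snapshots

# Read-only snapshot, e.g. before a big reorganisation.
btrfs subvolume snapshot -r /srv/media /srv/media/.snapshots/media-before-reorg
```

    Note that btrfs on a single device can detect corruption via checksums but cannot repair it with `-d single`; repair would come from the controller's consistency checks or from backups.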


    Thoughts?


    ...now what am I going to do with all that CPU and RAM? I guess I could run OMV in a VM and pass through the raid card. Then I would have more hypervisor space to work with. That's another layer to update though. I'm going to be working with close to 90TB of data right away - and growing - so whatever I go with needs to stick.


    Thanks!

    • Official post

    Thoughts?

    If you plan to expand to 15 drives and want raid, I would definitely use hardware raid. I got rid of raid years ago at home.

    now what am I going to do with all that CPU and RAM? I guess I could run OMV in a VM and pass through the raid card. Then I would have more hypervisor space to work with. That's another layer to update though. I'm going to be working with close to 90TB of data right away - and growing - so whatever I go with needs to stick.

    Run OMV on the system (not as a VM) and install the kvm plugin (I do). No passthrough needed for storage.

    omv 7.1.0-2 sandworm | 64 bit | 6.8 proxmox kernel

    plugins :: omvextrasorg 7.0 | kvm 7.0.13 | compose 7.2 | k8s 7.1.0-3 | cputemp 7.0.1 | mergerfs 7.0.5 | scripts 7.0.7


    omv-extras.org plugins source code and issue tracker - github - changelogs


    Please try ctrl-shift-R and read this before posting a question.

    Please put your OMV system details in your signature.
    Please don't PM for support... Too many PMs!

    • Official post

    My biggest concern over power costs is around air conditioning. I'm in Texas so the summer A/C use drives up the bill. My computers contribute very little, relatively speaking.

    So tell me, if you're willing :) , what is your summer electric bill?

    Last summer I hit $400 one month. This year was better as we made a few changes that improved efficiency. I think we hovered around $350 a couple of months. In particular, I'm quite impressed with the efficiency of two mini-split ACs I put in to take over for older, different units. I've been familiar with mini-splits through my travels over the last couple of decades, but they haven't been popular here in the US until recently. I'm glad they are catching on... huge money-saver, imo.

  • If you plan to expand to 15 drives and want raid, I would definitely use hardware raid. I got rid of raid years ago at home.

    I've been an enterprise storage professional since 2003, so I'm right at home with hardware raid. To be fair, I usually worked on FC arrays on the vendor side of things, but LSI-variant raid cards are quite comfortable territory for me. And I just hate the idea of ZFS/others preventing me from adding one drive at a time as I need storage. The vdev expansion limitations are really pushing me away so far. I'm curious what you use, having moved away from raid. I'll admit to assuming it is ZFS, but I guess it's safest to just ask. :)


    My workloads are predominantly large-block, sequential in nature, and I learned long ago that any ol' potato storage device will stream a 4k movie at 20-40Mbps easily enough; so raid 6 won't be a hindrance. And any recoveries needed down the line will be sequentially read/written as well. Ending up with a 13+2 raid 6 will suit me just fine. Then I'll start another array as needed.


    Run OMV on the system (not as a VM) and install the kvm plugin (I do). No passthrough needed for storage.


    And as for OMV as a hypervisor, well, that could work. I run everything on Proxmox right now, so I'll have to see if I can recover/convert/copy/whatever my Proxmox backups to KVM if the need arises, and vice versa. Since OMV is just Debian under the covers, couldn't I just install Proxmox alongside OMV? Proxmox supports installs on Debian, but I wonder what the devs here think of adding Proxmox to OMV... Could be interesting!

    • Official post

    I've learned long ago that any ol' potato storage device will stream a 4k movie at 20-40Mbps easily enough

    That approach could fail if you have more than one client.

  • That approach could fail if you have more than one client.

    Yeah, I just meant performance-wise. I meant to imply that performance won't be much of a concern for my large-block sequential workloads on a big hardware raid 6. I may have up to 4 clients watching at a time, for what it's worth. We're talking 20-ish MBps here... not too bad.
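    The arithmetic behind that 20-ish MBps figure, as a quick sanity check (four streams at the worst-case 40 Mbit/s from earlier in the thread):

```shell
# 4 concurrent streams × 40 Mbit/s each = 160 Mbit/s; divide by 8 for MB/s.
clients=4
mbit_per_stream=40
echo "$(( clients * mbit_per_stream / 8 )) MB/s"   # → 20 MB/s
```

    That is a small fraction of what even a single modern HDD sustains on sequential reads, let alone a wide raid 6.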

  • Last summer I hit $400 one month

    so about £315 here in the UK. My bills are about £300-440 per month, every month, even while cutting down on everything. AC? Ha, not a chance.


    I used to pay £60-80 a month.


    • Official post

    I'm curious what you use, having moved away from raid. I'll admit to assuming it is ZFS, but I guess its safest to just ask.

    Nope, no ZFS here. Just individual ext4 disks. I do pool them with mergerfs, but I don't need to. Every client in my house (mostly Kodi or Linux) can handle multiple shares.
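    A mergerfs pool over individual ext4 disks is typically a single mount. A sketch, where the branch paths and option choices are illustrative, not from the post:

```shell
# Assumed layout: disks mounted at /srv/disk1 and /srv/disk2, pooled at /srv/pool.
# fstab entry (one line):
# /srv/disk1:/srv/disk2  /srv/pool  fuse.mergerfs  category.create=mfs,moveonenospc=true  0 0

# Or mount ad hoc for testing:
mkdir -p /srv/pool
mergerfs -o category.create=mfs,moveonenospc=true /srv/disk1:/srv/disk2 /srv/pool
```

    `category.create=mfs` sends new files to the branch with the most free space; each disk remains an ordinary ext4 filesystem that can be read on its own if the pool is gone.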

    I run everything on Proxmox right now, so I'll have to see if I can recover/convert/copy/whatever my proxmox backups to KVM if the need arises and vice versa. Since OMV is just debian under the covers, couldn't I just install Proxmox alongside OMV? Proxmox supports installs on debian, but I wonder what the devs here think of adding Proxmox to OMV... Could be interesting!

    Proxmox and OMV with kvm plugin are both KVM. There is no need to convert anything from proxmox other than the VM config. The kernel plugin even allows you to install the proxmox kernel.


    While it can be done, I recommend against installing Proxmox on OMV because both OMV and Proxmox will try to maintain conflicting configs. I am well aware that Proxmox has more features (I've been running it since 3.x), but I use the kvm plugin myself and wonder why you need Proxmox?


  • Nope. no zfs here. Just individual ext4 disks. I do pool them with mergerfs but I don't need to. Every client in my house (mostly kodi or Linux) can handle multiple shares.

    Nice. Yeah, any single disk can read at more than acceptable speeds for many workloads. I can see the appeal of the simpler design.


    Proxmox and OMV with kvm plugin are both KVM. There is no need to convert anything from proxmox other than the VM config. The kernel plugin even allows you to install the proxmox kernel.


    While it can be done, I recommend against installing proxmox on OMV because both OMV and proxmox will try maintaining conflicting configs. I am well aware the Proxmox has more features (been running it since 3.) but I use the kvm plugin myself and wonder why you need Proxmox?

    I use Proxmox Backup Server to do incremental-forever backups of my Proxmox VMs. It stores the incrementals as blobs and fidx files, and I don't know how to mangle them back into a raw VM disk. See attached. I would love to have another Proxmox server to recover to, instead of plain KVM where I have to figure out how to get a VM disk out of those blobs.
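    For what it's worth, `proxmox-backup-client` can generally pull a raw disk image back out of a PBS fidx archive without a Proxmox host. A sketch; the repository string, snapshot path, and archive name below are assumptions about the setup, not values from the post:

```shell
# Assumed: PBS reachable at pbs.example.lan, datastore "backups", VM id 100.
export PBS_REPOSITORY='root@pam@pbs.example.lan:backups'

# List available snapshots to find the exact snapshot path.
proxmox-backup-client snapshot list

# Restore one disk archive (stored server-side as drive-scsi0.img.fidx)
# to a plain raw image that KVM/qemu can boot directly.
proxmox-backup-client restore "vm/100/2024-01-01T00:00:00Z" drive-scsi0.img /tmp/drive-scsi0.raw
```

    The resulting raw file can then be attached to a VM under the kvm plugin, or converted with `qemu-img convert` if another format is wanted.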



    Nice. My main server is a Supermicro X9 w/ dual Xeons and 32GB of RAM. I would like to get some more RAM so I don't have to worry about overprovisioning in Proxmox ;)

    So I've been playing around with the LVM plugin in OMV in a VM. I've been expanding and adding virtual disks, then expanding and adding LVM PVs and LVs. Everything works great from the UI, and I can immediately expand my btrfs filesystem once the LV has more space. It's amazingly smooth and quick to do these things from the UI, and all of it can be done without going offline. This basically simulates expanding hardware raids (the PV in my tests) online so I can non-disruptively grow my filesystem. I can then add more raids (PVs) and do this several times over the life of my filesystem.
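    The UI workflow above maps to a short LVM sequence underneath. A sketch, where the device, VG, and LV names are assumed for illustration:

```shell
# Assumed: a new hardware raid exposed as /dev/sdb, existing VG "vg_data",
# LV "lv_media" with btrfs on top, mounted at /srv/media.
pvcreate /dev/sdb                          # initialise the new array as a PV
vgextend vg_data /dev/sdb                  # add it to the volume group
lvextend -l +100%FREE /dev/vg_data/lv_media  # grow the LV into the new space

# btrfs is resized online via its mount point, not the block device:
btrfs filesystem resize max /srv/media
```

    Every step is online; the filesystem stays mounted and serving throughout.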


    With these new tests I'm really leaning toward the hardware raid 6 option now. I'll start with a 15-bay chassis and a 6+2 with one hot spare, then expand up to 12+2 with the hot spare. This will be with the Seagate Exos X20 SATA drives. I'm thinking long term that I will go with 24-bay Supermicro JBOD enclosures (maybe retail, but likely the used market... as I can get them) and just run SAS drives. I think SAS drives will serve me better in the long run with these larger raid 6 arrays I'm planning to use. They can be dual-ported too! Eventually, once I have the data on SAS, I can repurpose the SATA array for backup, maybe as a raid 5 or so. I will have backup to some Synology arrays until I get the data off SATA and onto SAS, so I WILL be backing up in the meantime. :)


    I'm curious if the devs here would consider adding vgexport/vgimport workflows to the UI for LVM. My thought is that if we ever run into a scenario where in-place upgrades can't be done for some future version, we can at least export the VG and then import it again once the OS/software is replaced. I can do this from the command line easily enough, but I'm not sure how much we can get away with outside of the OMV UI and still have seamless management from within the OMV UI. I know I can do things such as mdraid operations from the command line, as an example, and OMV sees it all just fine, but will that always be the case?
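    The command-line version of that export/import workflow is short. A sketch, where "vg_data" is an assumed VG name:

```shell
# On the old system: deactivate the LVs and export the VG before moving disks.
vgchange -an vg_data
vgexport vg_data

# On the new system: rescan for PVs, import, and reactivate.
pvscan
vgimport vg_data
vgchange -ay vg_data
```

    The export marks the VG so the old host won't auto-activate it, which is the "graceful" part; the on-disk data is untouched either way.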


    Anyway, I'm just thinking "out loud" as I go. I think this plan is near final/complete but I always appreciate more information so keep sharing if you have anything. Thanks!

    • Official post

    If you don't use partitions on your raid array (omv doesn't), you don't need LVM to expand the filesystem.
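    The no-LVM path is then just two steps per growth: once the controller has expanded the virtual disk, rescan and resize. A sketch, with sdb and /srv/media as assumed names:

```shell
# Tell the kernel the controller's virtual disk grew (sdb is assumed).
echo 1 > /sys/class/block/sdb/device/rescan

# Grow btrfs online to fill the new space (resize targets the mount point).
btrfs filesystem resize max /srv/media
```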


    As for vgimport/export, why do you need that? As someone who moves VGs from one VM to another with automation at work hundreds of times per day, you don't need either.


  • If you don't use partitions on your raid array (omv doesn't), you don't need LVM to expand the filesystem.

    I will be adding more arrays to the LV over time. I only wish to grow any single array to so many drives and then I'll start another array to continue expanding capacity. Each array will represent a new PV in LVM.

    As for vgimport/export, why do you need that? As someone who moves VGs from one VM to another with automation at work hundreds of times per day, you don't need either.

    Just for the sake of being graceful. It's not a big deal... it just feels safer to deactivate and export the VG prior to intentionally moving the LVs from one server to another.
