Help Me Please! Proxmox and OMV RAID

  • Hi, I'm a new user, and sorry for my English.
    I have a question. I am installing the Proxmox virtualization server on my ASRock J5005 motherboard with 32 GB of RAM. My machine has an SSD where I install Proxmox and the virtual machines, including OMV. My question is: is it better to install OMV in a virtual machine, or is there an LXC version? I also have four hard drives of 8 TB each; which RAID configuration is better: a ZFS RAID in Proxmox accessed from OMV, or a RAID (BTRFS) in OMV directly? When answering, consider that I would prefer BTRFS, which is lighter. Does anyone know if a BTRFS RAID is possible in Proxmox? 8| Thanks

    • Official Post

    My question is: is it better to install OMV in a virtual machine, or is there an LXC version?

    KVM. LXC won't expose any block devices, which makes it unusable for this.


    The rest of the question depends on your needs. Passing the disks through directly won't give you SMART monitoring or display in OMV, for example, so you have to handle that in Proxmox. One more thing: I don't see the benefit of building a ZFS RAID in Proxmox, handing the zvols to OMV, and then putting BTRFS on top of those; the snapshot and rollback options are already available in Proxmox.
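    For context, attaching a physical disk to a KVM guest in Proxmox is done with `qm set`; a minimal sketch, where the VM ID (100) and the disk ID are placeholders for illustration:

    ```shell
    # Attach a whole physical disk to a KVM guest (VM ID 100 is hypothetical).
    # Using a /dev/disk/by-id/ path keeps the mapping stable across reboots.
    qm set 100 -scsi1 /dev/disk/by-id/ata-EXAMPLE_SERIAL

    # The guest only sees a virtual SCSI device, which is why SMART
    # monitoring has to stay on the Proxmox host rather than in OMV.
    ```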

    • Official Post

    One thing is I don't see the benefit of doing a ZFS RAID in Proxmox, then using the zvols for OMV and putting BTRFS on those; the snapshot and rollback options are already available in Proxmox.

    I agree.


    Given the symbolic links that are used in snapshots, running a CoW filesystem inside of another CoW filesystem (which is snapshot capable) may be asking for trouble. There's a limit to the levels of symbolic links allowed, where a second set of symbolic links and deeply nested files may push beyond the limit.


    If ZFS is used in Proxmox, it might be wise to use a simple filesystem in the VM (EXT4 or XFS).

  • Thanks for the reply crashtest and subzero79 .


    Ah, OK. So using OMV in LXC I won't be able to access the hard drives, right? OK, so I'll go with KVM.


    For the other question, maybe I didn't explain myself well.


    There are three options:
    (First option)
    Install Proxmox on the SSD (512 GB) with two partitions: install Proxmox on the first partition and the KVM guests on the second. No RAID on any other drive. In the OMV KVM guest I create a BTRFS RAID 6 on the 4x8TB drives.


    (Second option)
    Install Proxmox on the SSD (512 GB) with two partitions: install Proxmox on the first partition and the KVM guests on the second. I create a ZFS RAIDz2 on the 4x8TB drives and use that space in OMV with ext4.


    (Third option)
    Install Proxmox on the SSD (60 GB) with only one partition. I create a ZFS RAIDz2 on the 4x8TB drives and install the KVM guests there, and of course my OMV data, with ext4.
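    A quick capacity sanity check for the layouts above: both BTRFS RAID 6 and ZFS RAIDz2 are dual-parity, so they spend two disks' worth of space on parity.

    ```shell
    # 4 drives of 8 TB with dual parity: usable space is (4 - 2) * 8 = 16 TB
    # (raw capacity, before filesystem overhead).
    DISKS=4
    SIZE_TB=8
    echo "usable: $(( (DISKS - 2) * SIZE_TB )) TB"
    ```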



    My question comes from the fact that I would rather not use ZFS because of its resource requirements (1 GB of RAM for each TB of hard disk). If I choose the first option, using BTRFS for the RAID, I would have no RAM problems.


    Right now I have an ASRock J5005 motherboard with 32 GB of non-ECC RAM.
    Sorry for my English. :rolleyes:

    • Official Post

    Sorry for the delayed response.

    Sorry for my English.

    Your English is better than my Google translations. :)
    __________________________________________________


    The first thing to note is that nesting one CoW filesystem (BTRFS) inside another (ZFS) might work, but I have doubts. Performance would likely be poor, and there's a limit to the levels of symlinks allowed, resulting from snapshots. A setup like that might run into both.
    With ZFS running at the top (Proxmox), guests should be set up with a simple filesystem (like EXT4 or XFS). Since Proxmox has nice features for guest cloning, snapshots, backup and restoration, that's where I'd be taking care of those issues.


    Second, the following is just an opinion.

    (Second option)
    Install Proxmox on the SSD (512 GB) with two partitions: install Proxmox on the first partition and the KVM guests on the second. I create a ZFS RAIDz2 on the 4x8TB drives and use that space in OMV with ext4.

    I believe the above (option 2) makes the most sense, but I might adjust the partitions: 100 to 120 GB for the first partition and the rest for the second. Why? Proxmox is similar to OMV in that it doesn't need a huge boot drive. (For a short time, I tested Proxmox using a 32 GB thumb drive as a boot drive. This is not recommended, but the thing to note is that 32 GB worked fine in limited testing.) 100 to 120 GB will allow for a huge swap file, which might be 64 GB in your case, with plenty of room to run Proxmox.


    The remaining 400 GB, more or less, could be used as an EXT4-formatted utility partition/drive. I've always used this approach. Recently, I found that Docker containers and ZFS may not get along. Using this partition to give Docker containers a home, to store ISO image files for creating KVM client guests, and for other utility uses makes sense to me.
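    As a rough sketch of that split (the device name /dev/sda and the exact sizes are assumptions, and the Proxmox installer normally lays out its own scheme, so treat this only as an illustration):

    ```shell
    # Hypothetical layout for a 512 GB SSD:
    #   partition 1: ~120 GB for the Proxmox system (boot + swap)
    #   partition 2: the remaining ~400 GB as an EXT4 utility partition
    sgdisk --new=1:0:+120G /dev/sda   # first partition, 120 GB
    sgdisk --new=2:0:0     /dev/sda   # second partition, rest of the disk
    mkfs.ext4 -L utility /dev/sda2    # format the utility partition as EXT4
    ```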
    _________________________________________________


    There may be better ways to set this up. Perhaps someone else will chime in with better ideas.


    Hope this helps.

  • Given the symbolic links that are used in snapshots, running a CoW filesystem inside of another CoW filesystem (which is snapshot capable) may be asking for trouble. There's a limit to the levels of symbolic links allowed, where a second set of symbolic links and deeply nested files may push beyond the limit

    The usual insane @crashtest BS (as almost always, unfortunately). You don't have the slightest idea what you're babbling about. Snapshots in ZFS and btrfs are not based on 'symbolic links'; they are a result of both filesystems being designed as CoW (copy on write). Most probably you are confusing this with the way rsnapshot tries to implement snapshotting: there, hardlinks are used regardless of the underlying filesystem (hardlinks, not 'symbolic links').


    The first thing to note is that nesting one COW filesystem (BTRFS), inside another (ZFS) might work, but I have doubts

    Why should doubts of someone who doesn't even understand the meaning of the words he's throwing around matter?


    Performance would likely be poor,

    Complete and utter BS, as almost always. The 'thread' over there that you polluted as @flmaxey with all your 'doubts' showed a simple performance comparison of ext4 vs. btrfs running off ZFS shared over NFS: https://github.com/openmediava…01#issuecomment-468270197

    The 'ok-ish setup using SMB with 10GbE' shows 370/495 MB/s write/read for btrfs and only 290/475 MB/s for ext4 (measured over the network using SMB, with the storage coming from a ZFS filer exported via NFS).


    and there's a limit to the levels of symlinks allowed, resulting from snapshots

    BS. No symlinks are involved in either ZFS or btrfs snapshots. When will you start to learn at least the basics instead of flooding this forum with idiotic theories (the result of lacking knowledge and methodology)?

    My request is due to the fact that I would not use zfs because of the resources (1 GB of RAM for each TB of Hard Disk)

    There is no such thing as '1 GB of RAM for each TB of hard disk'. That's an urban myth. You only need huge amounts of RAM with ZFS if you enable deduplication and have tons of files in your zpools.


    This is one of our ZFS Linux filers with 16 x 6 TB disks and 64 GB of RAM (dedup in use):



    RAM usage with ZoL 0.7.5 is well below 40 GB, and 30 GB of that is used by the ARC (adaptive replacement cache):


    Code
    root@datengrab:~# zpool list
    NAME         SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
    riesenpool    87T  16.4T  70.6T         -    11%    18%  4.97x  ONLINE  -
    root@datengrab:~# arcstat 
        time  read  miss  miss%  dmis  dm%  pmis  pm%  mmis  mm%  arcsz     c  
    15:38:35     0     0      0     0    0     0    0     0    0    30G   30G  
    root@datengrab:~# modinfo zfs | grep version
    version:        0.7.5-1ubuntu16.4

    This install would happily run the almost 90 TB RAIDz3, with dedup activated, on just 8 GB of RAM.
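    If RAM usage is still a worry, the ZFS-on-Linux ARC can also be capped explicitly via a module parameter; a sketch, with the 8 GiB limit being just an example value:

    ```shell
    # Cap the ZFS ARC at 8 GiB (8 * 1024^3 = 8589934592 bytes).
    # Persisted in a modprobe config; takes effect after a reboot
    # (or after reloading the zfs module).
    echo "options zfs zfs_arc_max=8589934592" > /etc/modprobe.d/zfs.conf
    update-initramfs -u   # on Debian/Proxmox, so the setting applies at boot
    ```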

    • Official Post

    OK, axxwipe, I see you're at it again. So what's all this 'snapshot/symbolic links' stuff about on GitHub? Here's another. Oh, that couldn't be true, huh? Everyone must be wrong, right? There's no problem there, no symlinks involved in snapshots; everyone must be 'imagining' the symbolic link error.


    Now let's talk about COMMON SENSE, which you don't appear to have; we're also talking about home users and avoiding potential issues, which is easy with some common sense. Again, this isn't about you, the little graphs you create when you're bored, or your crappy little monotonous job in some dark little corner of a Munich data center. It's about home users. Adding unneeded layers of complexity makes no COMMON SENSE at all but, as it appears, you can't grasp that concept.


    And your other "valuable information" from a data center has next to nothing to do with setting up Proxmox. It's just blathering BS from someone with an impulsive need to lecture who doesn't know how. (In the one item where you addressed the user: with 32 GB of RAM, the user will be fine. Why even mention it? Because it's the COD - you have to.)


    Why can't you simply tell the user how you would set up Proxmox? That is what this thread is about, versus breaking into a thread and picking apart what I had to say about it. I'm starting to think you might have some sort of sick, odd attraction to me - you know, like how little girls hit the boys they want attention from on the playground. I hate to break it to you, but, bottom line, I'm not interested. Even if I leaned that way, I'm not into pansy boys.


    Why can't you just grow up? And if you can't do that, simply "F" off.

    • Official Post

    While it's becoming tiresome to do so, @blobblio , I'll offer an apology for our forum "moderator" idiot. If you want, we can take this thread into a PM. It's called a "conversation" and it's available in the upper left of the web page.


    My Sincere Regrets and,
    Regards.

  • So what's all this 'snapshot/symbolic links' stuff about on GitHub? Here's another. Oh, that couldn't be true, huh? Everyone must be wrong, right? There's no problem there, no symlinks involved in snapshots; everyone must be 'imagining' the symbolic link error.

    Since I still believe that human beings are able to stop their idiotic behavior and to learn, I'll explain the basics to you one last time (as I have done over and over in recent years): what you are referring to is a symptom. One userland tool, called ls, reports an error message in certain situations. We're talking about symptoms. Quoting your very own link: 'with this version I can no longer reproduce the "Too many symbolic links" error when I attempt to "ls" the ".zfs/snapshots" directory with an absolute path'. By your limited understanding, this would mean that this user, now that he has gotten rid of the confusing ls error message (the symptom), is using a ZFS implementation where snapshots are no longer based on symbolic links.


    Again: ZFS and btrfs do not use any sort of links for their snapshots. That is one of the natural benefits of both being CoW filesystems.


    Your incompetence is frightening. You spread BS all over the place and make this forum a mess. And the worst part: you don't even realize it, since you feel encouraged to continue your idiotic behavior :(


    Now head off to agitate more users via personal conversations... @blobblio , have fun with this guy. Anyway, your 'issue with ZFS' isn't one, as outlined above. Listen to the advice you got from @subzero79 (who, like anyone else in their right mind, avoids threads once 'team incompetence' has arrived).

    • Official Post

    You spread BS all over the place and make this forum a mess

    This caught my eye, and it is NOT true.
    Users post to this forum, forum contributors answer, and then you break into a thread and make a mess of it with opinion-related nonsense. In many cases you force threads into PMs because, with all your explanations and splitting of atoms, threads become too dxmned confusing. You do this. No one else does. The recent history of your posts (from today back to just a week ago) is more than enough to verify it. We're talking about 5 to 10 of your recent posts, for crying out loud.
    You do this to me all the time, and to other contributors like @geaves on a regular basis, making a virtual forum career out of attacking contributors and injecting your opinions. That's the mess you're talking about.


    Let's see if you can get this: YOU (no one else here) are the actual HUB of dysfunction on this forum. And YOU are the HUB of dysfunction on the Armbian forum as well. (One gets an interesting perspective by taking the time to go through the whole Armbian thread. You might have given some thought to the moderator's posted link to the advice at the end.) Do you see the commonality here? The HUB, the source of all the mess, as you put it, is you, and you simply refuse to recognize it.
    _________________________________________________________


    You know, seriously, I'm a really patient man. But I'm not going to read yet another tangent, spinoff, rationalization, explanation, or tkaiser soliloquy with links where "you think" you gain some sort of authority by linking to someone else's work or opinions. It's just a dxmn shame. If you could control your impulses, learn to deal with people, and had just a modicum of respect for others ("manners", as another user put it), it might be different.


    Now, please, think of others for a change. Take one (a handful?) of these and give some thought to things that actually matter - not these perceived fantasies of self-righteousness.



    Done here, thread unwatched.
    As noted before, feel free to go on and on. Go ahead, get it out of your system - another "purge" seems to be required.

  • I am sorry that I was the cause of a dispute between people; I just wanted some advice. In any case, I have decided on the second option: two partitions on the SSD, 120 GB (ext4 -> Proxmox) and 400 GB (ext4 -> KVM), and then 4x10TB (ZFS -> OMV). Thanks for everything.
