OMV in LXC container (Proxmox), creating a dummy drive

  • Hi there,


    I have a virtualization server based on Proxmox and I want to add NAS capabilities to it. I created a dedicated LXC container running Debian 8.6 (jessie) and installed OMV 3.0.57 (Erasmus) from the public APT sources.
    The installation threw errors in two or three of the install scripts that run at the end of the installation process. I took a look at these scripts: they were trying to alter the boot configuration (which I don't have, since this is a container that uses the host's kernel and boot configuration) and to enumerate the connected drives with blkid (which also failed, since there are no block devices associated with the container).
    I skipped these steps by putting 'echo 0;' entries in those scripts, and the installation finished.
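
    In case someone wants to reproduce this, the workaround was roughly the following; this is only a sketch from memory, the exact script and lines differ between OMV versions, and I'm assuming dpkg's usual maintainer-script location here:

    <pre># dpkg keeps the package maintainer scripts under /var/lib/dpkg/info/ (assumed location)
    nano /var/lib/dpkg/info/openmediavault.postinst

    # Inside the script, replace the failing steps (the blkid call and the boot
    # configuration update) with a harmless no-op such as:
    #     echo 0;

    # Then let dpkg finish configuring the half-installed package:
    dpkg --configure -a</pre>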


    I can use the web interface, start services, create certificates, users, groups and so on. Basically everything works except creating shares. This is true for all types (FTP, NFS, rsync, SMB/CIFS ...) and they all fail for the same reason: there are no disks, no drives, no filesystems.


    What I need is a 'dummy' drive (which can be / or 'root') so that I can create shared folders and select my root FS in the dialog. I cannot select or create a drive since blkid does not return anything, so I guess I need to do this manually. If this were possible, I would expect to be able to create network shares in the services section referencing this shared folder.
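
    To illustrate what OMV sees inside the container (hypothetical hostname, just for illustration): blkid prints nothing and there are no block device nodes at all, so the filesystem list in the web interface stays empty:

    <pre>root@omv-ct:~# blkid
    root@omv-ct:~# ls /dev/sd* /dev/vd* 2>/dev/null
    root@omv-ct:~#</pre>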


    I'm aware that my use case is very different from what OMV expects (which is a bare-metal installation). But my virtualization server is focused on hosting VMs for different needs, and I would like OMV to manage my NAS services since I like the web interface (very slim, extensible and easy to use).
    This use case may become more and more common; I've seen others asking about similar scenarios.


    Any help would be greatly appreciated!


    Andreas

    • Official post

    Use a real VM, not a container. OMV has issues with a few things in containers, as you have found. Why do you need a container?

    omv 7.0-32 sandworm | 64 bit | 6.5 proxmox kernel

    plugins :: omvextrasorg 7.0 | kvm 7.0.9 | compose 7.0.9 | cputemp 7.0 | mergerfs 7.0.3


    omv-extras.org plugins source code and issue tracker - github


    Please try ctrl-shift-R and read this before posting a question.

    Please put your OMV system details in your signature.
    Please don't PM for support... Too many PMs!

  • Hi ryecoaaron,


    thank you for your response.


    I think using a KVM VM is not a viable option for me. The host uses ZFS and its features are used extensively. Using a zvol or, even worse, a qcow2 disk image and formatting it with something like ext4 in the VM would mean a huge performance hit, especially since the system will use rsync to do file-based backups of two file servers. Rsync is a lot faster now with the separate metadata storage that ZFS offers. Another thing is that I don't want to dedicate to this VM, right from the start, all the resources that it'll need later on. With LXC I can safely grow the filesystem while the system is running. The storage capacity of the NAS should be 24 TB and if it needs to be extended there is no way of doing this safely without a full backup.
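
    (As a side note on the growing part: for a Proxmox LXC container on a ZFS subvolume this is a single online command, since it effectively only adjusts the dataset quota. A sketch with a made-up container ID:)

    <pre># Grow the running container's root filesystem by 4 TB (hypothetical CT ID 101)
    pct resize 101 rootfs +4T</pre>
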
    I don't want to go into all the details here, but you're right: the problem I ran into would be solved with a real VM. However the priority is more on reliability, performance and extensibility in this case. For a small NAS box with less load this would be fine, I guess.


    If I can afford the time I may dig into this on my own, since I don't think it would be very difficult to replace the blkid calls with something that tricks OMV into letting me select a folder in the "shared directories" wizard. I only need NFS, SMB/CIFS and rsync.
    If it turns out that it doesn't work, or if it is too complicated, I may need to find another web-based admin panel to configure the NAS services (maybe I'll give Ajenti a try, or stick with Webmin).


    Any ideas are still welcome!


    Andreas

    • Official post

    However the priority is more on reliability, performance and extensibility in this case.

    Reliability? A VM is no less reliable than a container. The performance is pretty close. Is your hardware spec'd that close that it can't handle the little bit of overhead from a VM?
    apiening wrote:

    With LXC I can safely grow the filesystem while the system is running. The storage capacity of the NAS should be 24 TB and if it needs to be extended there is no way of doing this safely without a full backup.


    This just isn't true. If the guest uses LVM, it is very easy and safe to grow a filesystem. I use VMs at work (1000s) and at home (10s) and I honestly don't see why you can't use a VM. We have to grow the filesystems on live production systems all the time.


  • Reliability? A VM is no less reliable than a container. The performance is pretty close. Is your hardware spec'd that close that it can't handle the little bit of overhead from a VM?

    OK, reliability was not the right term. I didn't want to bash full virtualization VMs at all; I have been using them for a long time and there's no way around them in some cases. And you're right that for most scenarios the performance hit is not that big. But when it comes to storage IO, even with virtio etc., the performance difference is quite noticeable. And in my specific case it is huge! We decided to go with an SSD-based metadata cache on ZFS because the overall speedup when syncing several TB is quite dramatic: it took about 30% of the time compared to XFS. This is of course not VM vs. container related, but I can't make use of the ZFS features like that if I use a ZVol in a KVM guest and format it with XFS. This is really not an option.
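
    For context, the metadata cache I'm referring to is simply an L2ARC device restricted to metadata; a minimal sketch of such a setup, with placeholder pool and device names:

    <pre># Add an SSD as an L2ARC cache device to the pool (placeholder names)
    zpool add tank cache /dev/disk/by-id/ata-EXAMPLE-SSD

    # Keep only metadata on the cache device so rsync's tree walks hit the SSD
    zfs set secondarycache=metadata tank</pre>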



    This just isn't true. If the guest uses LVM, it is very easy and safe to grow a filesystem. I use VMs at work (1000s) and at home (10s) and I honestly don't see why you can't use a VM. We have to grow the filesystems on live production systems all the time.

    To make sure we're talking about the same thing: if the disk gets too small, I need to shut down the KVM guest and grow the block device, right? Then I need to boot up the machine (probably with a boot CD if the root volume is affected) and grow the LVM2 volume and then the filesystem (let's assume XFS).
    Is there another way I don't know of? Because this FS grow operation is not what you want to do on a production system without a full backup, right? Well, this may depend on how much risk you want to take, but the docs clearly tell you not to do this without a backup because the process can fail and may cause data loss. LVM is not CoW and the operations are not reversible; this can't be safe.


    Please don't get me wrong: for a NAS system that is used to store documents and media files, your solution would be fine. But there is a simple reason why we decided to use ZFS with this specific configuration: compression, deduplication, checksums and metadata cache (speed). The filesystem is already there (on the host).
    All I need is a web GUI which gives my customer the option to add users, groups, shares etc. without using the terminal. OMV is much more than that, but I hope to be able to use it for my needs anyway.

    • Official post

    but I can't make use of the ZFS features like that if I use a ZVol in a KVM guest and format it with XFS. This is really not an option.

    Sounds like you need a separate SAN and then use zfs on the guest.


    To make sure we're talking about the same thing: if the disk gets too small, I need to shut down the KVM guest and grow the block device, right? Then I need to boot up the machine (probably with a boot CD if the root volume is affected) and grow the LVM2 volume and then the filesystem (let's assume XFS).
    Is there another way I don't know of? Because this FS grow operation is not what you want to do on a production system without a full backup, right? Well, this may depend on how much risk you want to take, but the docs clearly tell you not to do this without a backup because the process can fail and may cause data loss. LVM is not CoW and the operations are not reversible; this can't be safe.

    No. You do not need to shut down the VM or boot off another CD. And there is no risk. I do this on some of our most sensitive systems without thinking twice. An LVM volume group is made up of physical volumes. The logical volume is added to the volume group, but LVM determines where it is located. If you want to expand the filesystem, you add another hard drive (a virtual hard drive in the KVM world) as a new physical volume to the volume group. At this point, the LV which contains the filesystem doesn't even know it was added. But the volume group has more space, which allows the logical volume to be expanded. So, I use lvextend with the -r flag (resize the filesystem) to make the filesystem larger, all while it is being used and online. Once it is expanded, LVM knows it can start to write to the new space if needed. LVM has been around a long time and this method is very tested and safe. LVM generally isn't striped. It is pooled, so adding a drive is safe.
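
    In command form the whole online grow is just this (device, volume group and LV names are examples):

    <pre># After attaching a new virtual disk to the VM, inside the guest:
    pvcreate /dev/vdb                        # initialize the new disk as a physical volume
    vgextend vg0 /dev/vdb                    # add it to the existing volume group
    lvextend -r -l +100%FREE /dev/vg0/data   # grow the LV and resize the filesystem, online</pre>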


    Please don't get me wrong: for a NAS system that is used to store documents and media files, your solution would be fine. But there is a simple reason why we decided to use ZFS with this specific configuration: compression, deduplication, checksums and metadata cache (speed). The filesystem is already there (on the host).

    Once again, nothing I recommended would make any difference to what type of files are stored on it, and these methods are common in enterprise environments. The reasons you are using ZFS are the same reasons you would need a SAN. The SAN could even use ZFS for its filesystem.


    What is odd is that you only have 24 TB of storage and need this kind of speed. What kind of files are you working with? What kind of drives make up the 24 TB?


  • Sounds like you need a separate SAN and then use zfs on the guest.

    One of the reasons I like ZFS is that I get a lot of the enterprise features a SAN would offer without having to spend tens of thousands on highly specific hardware.
    So no, I don't need a SAN and I don't need any additional hardware. I just need a web-based interface to manage users, groups, shares etc. for my file services on one of my VMs. My hardware and software stack is performing perfectly fine, as it was chosen with care to fit the specific needs.


    No. You do not need to shut down the VM or boot off another CD. And there is no risk. I do this on some of our most sensitive systems without thinking twice. An LVM volume group is made up of physical volumes. The logical volume is added to the volume group, but LVM determines where it is located. If you want to expand the filesystem, you add another hard drive (a virtual hard drive in the KVM world) as a new physical volume to the volume group. At this point, the LV which contains the filesystem doesn't even know it was added. But the volume group has more space, which allows the logical volume to be expanded. So, I use lvextend with the -r flag (resize the filesystem) to make the filesystem larger, all while it is being used and online. Once it is expanded, LVM knows it can start to write to the new space if needed. LVM has been around a long time and this method is very tested and safe. LVM generally isn't striped. It is pooled, so adding a drive is safe.

    Oh I see, that's of course a possible way. Thanks for the explanation.
    Nevertheless, LVM is an additional piece of an absurdly complicated storage stack we had for a very long time: RAID for our disks, LVM to get around storage limitations (variable partitioning/sizing), plus filesystems with plenty of options to choose from.
    A few years ago I remember defending this with arguments like "this stack is proven to be stable in the enterprise and it works fine". A Solaris admin showed me ZFS and told me that once I tried ZFS I would never go back. I'm so glad he did, and he was so right. Replacing the whole storage stack with ZFS is so much easier to maintain, and it is more flexible and safe while adding features that are simply not possible with legacy storage stacks.
    If you are used to ZFS you probably know what I'm talking about. If not: feel free to give it a try; it'll probably change your mind.


    Once again, nothing I recommended would make any difference to what type of files are stored on it, and these methods are common in enterprise environments. The reasons you are using ZFS are the same reasons you would need a SAN. The SAN could even use ZFS for its filesystem.

    I'm sorry, but three times higher speed while doing large file repository synchronization is what you call "not a difference"? I'm replacing a system that has been built just as you propose, to get rid of these limitations while staying within budget. I still have the old NAS running and did a comparison between the two, and the benchmark was pretty impressive, as I said.
    It is just absurd to claim that there is no difference when the numbers already prove otherwise.
    It's of course your choice to buy highly expensive SANs or use LVM or whatever you want, but telling someone else who found a great working solution for his needs that he should stick with what he had before the improvements doesn't make much sense to me.


    What is odd is that you only have 24 TB of storage and need this kind of speed. What kind of files are you working with? What kind of drives make up the 24 TB?

    The NAS is one of several VMs that should be running on this one host. The capacity we have predicted over a longer period of time is roughly 24 TB including a buffer; that's the storage need for this system. I don't want to 'waste' storage space by assigning it to this VM if it is never needed. It could be better used elsewhere.
    The system will replace our long-term archive for expertise documents. These are mostly office files, images, CAD files etc., but a lot of them, and the archive is continuously growing. The reason why I want the synchronization to be as efficient as possible is that it shouldn't harm the other VMs in terms of IO performance more than necessary.

    • Official post

    Sorry, but I am going to have to be done with this conversation. Everything I say is either angering you or wrong in your opinion. One point before I go:


    You keep referring to a SAN as expensive. OMV installed on a physical box running ZFS with iSCSI and/or NFS is the SAN I had in mind, not some big commercial unit from EMC or IBM or whatever.


    I know you only want to provide a solution and help me, but your proposal goes in quite the opposite direction of what I want. I really didn't want to anger you with my reaction; it is just that you completely misunderstood my point. I'm not a native speaker and have no experience with OMV, and I tried to make my needs clear in my first post, which somehow failed. Sorry for that.


    You have to admit that I never asked for a proposal on how to design my storage stack. In fact, it is already in production and there is no chance that anything will be changed on that side. Adding additional hardware or software with additional complexity and failure sources is totally out of the question, no matter whether it is expensive or not.


    Regarding your suggestion with the SAN: it would in fact be an option to install OMV in a KVM VM with just a small root FS, export a folder from the host via NFS, mount it from within the KVM guest and use this as the storage pool. It would then use the virtualized network and not be limited by a wired network, but I can't predict the impact on IO. This is of course not a true SAN, but it would behave similarly. Just an idea that came to my mind; I have no idea if this is a good approach.
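
    Just to sketch what I mean (paths and addresses are made up): the host would export a ZFS-backed folder over NFS and the OMV guest would mount it as its storage:

    <pre># On the Proxmox host: export the folder to the guest network (example path and subnet)
    echo '/tank/nas 192.168.1.0/24(rw,no_subtree_check,no_root_squash)' >> /etc/exports
    exportfs -ra

    # Inside the OMV guest: mount it (example host address and mount point)
    mount -t nfs 192.168.1.10:/tank/nas /srv/nas</pre>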


    Anyway, if someone has a hint on how to create a "shared folder" pointing at my FS, please let me know!

    • Official post

    You have to admit that I never asked for a proposal on how to design my storage stack.

    My bad for giving you an alternative when the OMV option you were trying would never work. I like to give other options instead of just saying OMV won't work for your idea and you need something else. I fully understand where you are trying to go. I deal with hundreds of production systems daily. If your production system is working, don't change it.


    Adding additional hardware or software with additional complexity and failure sources is totally out of the question, no matter whether it is expensive or not.

    Adding complexity and "failure sources" is necessary to eliminate a single point of failure, which is what you have now. You only have one "failure source" now. What do you do when it fails?


    I am doing something similar: Ubuntu LXD/LXC on ZFS with OMV on Debian Wheezy. The install on Wheezy went fine AFAIK, but the OMV service is unable to find any disk drives to use.


    <pre>root@ayana-angel:~# lxc list
    +--------+---------+----------------------+---------------------------------------------+------------+-----------+
    |  NAME  |  STATE  |         IPV4         |                    IPV6                     |    TYPE    | SNAPSHOTS |
    +--------+---------+----------------------+---------------------------------------------+------------+-----------+
    | bind0  | RUNNING | 192.168.1.95 (eth0)  | 2601:cd:c180:3a00:216:3eff:fe09:c4ee (eth0) | PERSISTENT | 0         |
    |        |         |                      | fc00::216:3eff:fe09:c4ee (eth0)             |            |           |
    +--------+---------+----------------------+---------------------------------------------+------------+-----------+
    | bind1  | RUNNING | 192.168.1.97 (eth0)  | 2601:cd:c180:3a00:216:3eff:fe10:1f66 (eth0) | PERSISTENT | 0         |
    |        |         |                      | fc00::216:3eff:fe10:1f66 (eth0)             |            |           |
    +--------+---------+----------------------+---------------------------------------------+------------+-----------+
    | chef0  | RUNNING | 192.168.1.103 (eth0) | 2601:cd:c180:3a00:216:3eff:fe5f:dca8 (eth0) | PERSISTENT | 0         |
    |        |         |                      | fc00::216:3eff:fe5f:dca8 (eth0)             |            |           |
    +--------+---------+----------------------+---------------------------------------------+------------+-----------+
    | ldap0  | STOPPED |                      |                                             | PERSISTENT | 0         |
    +--------+---------+----------------------+---------------------------------------------+------------+-----------+
    | mysql0 | RUNNING | 192.168.1.98 (eth0)  | 2601:cd:c180:3a00:216:3eff:fee8:8158 (eth0) | PERSISTENT | 0         |
    |        |         |                      | fc00::216:3eff:fee8:8158 (eth0)             |            |           |
    +--------+---------+----------------------+---------------------------------------------+------------+-----------+
    | nas0   | RUNNING | 192.168.1.91 (eth0)  | 2601:cd:c180:3a00:216:3eff:fe7a:fc7d (eth0) | PERSISTENT | 1         |
    |        |         |                      | fc00::216:3eff:fe7a:fc7d (eth0)             |            |           |
    +--------+---------+----------------------+---------------------------------------------+------------+-----------+
    | nas1   | RUNNING |                      |                                             | PERSISTENT | 0         |
    +--------+---------+----------------------+---------------------------------------------+------------+-----------+</pre>


    The problem is that OMV cannot just take what is available and work with it. OMV wants its own disks, but has no understanding of where it is being run or of the circumstances under which it is entitled to be so demanding. As someone who has been building data centers for decades, I only just found out that this OMV project even exists. OMV should try to be at least half as respectful as I am when it starts complaining and join a forum, then wait for days to get access.
    To work around the toddler mentality of wanting to own everything that OMV exhibits, I am attempting to connect a ZFS filesystem mount point to the container explicitly. This exists at /opt/shares on the host system. If this does not work, all the time and effort you have poured into this wonderful and fully featured system that I am definitely attracted to is useless to me, and I will do it by hand using Chef templates, Juju Charms, and the rest.
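
    Concretely, I am trying to pass the host path through as a disk device on the container, something along these lines (the device name and target path are my own choices):

    <pre># Bind the host's ZFS mountpoint /opt/shares into the nas0 container at /shares
    lxc config device add nas0 shares disk source=/opt/shares path=/shares</pre>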


    I will not share a file as a drive. That is garbage for more reasons than one. If OMV wants to continue to exist, you could easily phase out the FreeNAS project, which has far inferior code, by supporting this setup.


    The functionality that I am describing is only for us geeks at the moment, but soon (very soon) your project will be judged by your new users simply as: works, or doesn't work. They will be a huge group that you can take advantage of, or you will just die off. Personally, I love what you have done, or I wouldn't have joined. Containers are huge, and your support for them does not exist right now.


    I will post a HOWTO if I can get your software to accept my unlimited ZFS disk space without problems.



    Thanks.

    • Official post

    OMV should try to be at least half as respectful as I am when it starts complaining and join a forum, then wait for days to get access.


    The forum went into manual mode due to heavy spamming. As mods we have to approve all new users. Unfortunately the approval privilege only works for two or three mods (it doesn't work for me); there is an issue with the forum software IMO, but the admin hasn't been able to fix it. You just need to ask what happened, instead of demanding respect. All of us here are volunteers and we try to help as much as we can, depending on our availability.
    As for LXC: join the queue of people complaining about the disks. The first one I recall was a user on OpenVZ, then Docker, now LXC. OMV is designed to be used as a bare-metal server, or at least in a VM. I would only use OMV in containers to test/develop plugins or to test/develop OMV itself.

    I just have to add my 2 cents to subzero79's post. For essentially 100% free software,
    the support is superb in comparison to many projects I have come across over the years.
    Considering that a lot of the help comes from just a couple of people involved in development and the bulk of the rest comes from other users, the forum is, and I'll say it again, one of the best I have seen. Not perfect, no, but very, very good.


    You had to wait a couple of days to get access, big deal. I had to wait 3 days at the StarWind forum to get access. I had to wait a week(!) at the Proxmox forum to get access or even a response from the mods.
    3 days for the Korora project. The list goes on and on.
    I can't remember which one it was now, but I had to bombard the admin of one site 5 times(!!) in 2 weeks(!!!) simply to change my password, because the password reset option on the website does not work: the confirmation email never comes.


    So it could be worse.


    Now, complaining that something does not work in a product that has not been designed to be used the way you want to use it is silly.


    OMV is first and foremost a standalone NAS designed to run on bare metal. You can virtualize it, but you need to follow some rules to do this properly and pain-free.
    Can you actually run FreeNAS in a container, LXC or any other? I think not.
    Why would you assume you can do it with OMV?



    Now something for the OP: not to knock OMV or anything, but if you want to use LXC for a NAS,
    why not check out the TurnKey File Server? It is available as a container directly from Proxmox:
    just go to the container downloads and pick the TurnKey file server template.


    It may not be as NAS-oriented as OMV and I am not sure if you can run additional things like an Emby server on it, but it has everything a good file server should: Samba and NFS support,
    SambaDAV and WebDAV support, a nice web UI, a very nice web-based file manager, and a web-based console. Just load the container, bind-mount the disks/folders into it and you are good.
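
    On Proxmox, the bind mount itself is a single pct call on the host; a sketch with a made-up container ID and paths:

    <pre># Bind-mount a host directory (e.g. a ZFS dataset mountpoint) into CT 101
    pct set 101 -mp0 /tank/data,mp=/srv/data</pre>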


    In addition, if you do want to use some of the functionality OMV provides, load OMV in a VM, add the remote folder plugin, attach the shares from the file server and run whatever you want like that.


    The file server container will manage your data store and shares, and OMV will provide the functionality you want from it, sans the storage management.
    Basically speaking, Proxmox has a lot of the things I want from OMV built in,
    like hardware monitoring and reporting,
    email alerts on issues and even SMART right out of the box.
    Add the LXC file server to it, and it has full file server capability and disk management,
    with a nice web UI to do all of that.
    You can even install Webmin directly on the host and get a lot of extra things that Proxmox does not have, like a nice file manager and an extra hardware monitoring interface; Webmin uses the same services that Proxmox does to display the info. You can even forgo the file server container, if you know what you are doing, and load all of the services on the host directly using Webmin and its plugins.
    It has a Samba management plugin, an NFS management plugin and a WebDAV management plugin,
    and none of this interferes with Proxmox at all.
    You do need to be careful not to touch things that are managed by Proxmox, like networking and a few other areas, but other than that...

    omv 3.0.56 erasmus | 64 bit | 4.7 backport kernel
    SM-SC846(24 bay)| H8DME-2 |2x AMD Opteron Hex Core 2431 @ 2.4Ghz |49GB RAM
    PSU: Silencer 760 Watt ATX Power Supply
    IPMI |3xSAT2-MV8 PCI-X |4 NIC : 2x Realteck + 1 Intel Pro Dual port PCI-e card
    OS on 2×120 SSD in RAID-1 |
    DATA: 3x3T| 4x2T | 2x1T
