Posts by apiening

    I know you only want to provide a solution and help me, but your proposal goes in quite the opposite direction of what I want. I really didn't want to anger you with my reaction; it is just that you completely misunderstood my point. I'm not a native speaker and have no experience with OMV, and I tried to make my need clear in my first post, which somehow failed. Sorry for that.


    You have to admit that I never asked for a proposal on how to design my storage stack. In fact it is already in production, and there is no chance that anything will be changed on that side. Adding hardware or software that brings additional complexity and failure sources is totally out of the question, no matter whether it is expensive or not.


    Regarding your suggestion with the SAN: it would in fact be an option to install OMV in a KVM VM with just a small root FS, export a folder from the host via NFS, mount it inside the KVM guest and use that as the storage pool. It would then run over the virtualized network rather than being limited by a wired network, but I can't predict the impact on IO. This is of course not a true SAN, but it would behave similarly. Just an idea that came to my mind; I have no idea whether this is a good approach.
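    Very roughly, what I have in mind would look something like this (paths, addresses and the network range are only placeholders, I have not tried this yet):

        # On the Proxmox host: export the ZFS dataset over NFS
        echo '/tank/nas 10.0.0.0/24(rw,sync,no_subtree_check,no_root_squash)' >> /etc/exports
        exportfs -ra

        # Inside the OMV guest: mount the export and use it as the storage pool
        apt-get install -y nfs-common
        mkdir -p /srv/nas
        mount -t nfs 10.0.0.1:/tank/nas /srv/nas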


    Anyway, if someone has a hint on how to create a "shared folder" pointing at my FS, please let me know!

    Sounds like you need a separate SAN and then use zfs on the guest.

    One of the reasons I like ZFS is that I get a lot of the enterprise features a SAN would offer without having to spend tens of thousands on highly specific hardware.
    So no, I don't need a SAN and I don't need any additional hardware. I just need a web-based interface to manage users, groups, shares etc. for the file services on one of my VMs. My hardware and software stack performs perfectly fine; it was chosen with care to fit our specific needs.


    No. You do not need to shut down the VM or boot off another CD. And there is no risk. I do this on some of our most sensitive systems without thinking twice. An LVM volume group is made up of physical volumes. The logical volume is added to the volume group, but LVM determines where it is located. If you want to expand the filesystem, you add another hard drive (a virtual hard drive in the KVM world) as a new physical volume to the volume group. At this point, the LV which contains the filesystem doesn't even know it was added. But the volume group has more space, which allows the logical volume to be expanded. So I use lvextend with the -r flag (resize the filesystem) to make the filesystem larger, all while it is in use and online. Once it is expanded, LVM knows it can start to write to the new space if needed. LVM has been around a long time and this method is very well tested and safe. LVM generally isn't striped; it is pooled, so adding a drive is safe.
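    In commands, that workflow looks roughly like this (device and volume group names are only examples; /dev/vdb is the newly attached virtual disk):

        pvcreate /dev/vdb                          # initialize the new virtual disk as a physical volume
        vgextend vg0 /dev/vdb                      # add it to the existing volume group
        lvextend -r -l +100%FREE /dev/vg0/data     # grow the LV and resize the filesystem online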

    Oh I see, that's of course a possible way. Thanks for the explanation.
    Nevertheless, LVM is one more piece of the absurdly complicated storage stack we had for a very long time: RAID for the disks, LVM to get around storage limitations (variable partitioning/sizing), plus filesystems with plenty of options to choose from.
    A few years ago I remember defending this with arguments like "This stack is proven to be stable in the enterprise and it works fine". Then a Solaris admin showed me ZFS and told me that once I tried ZFS I would never go back. I'm so glad he did, and he was right. Replacing the whole storage stack with ZFS makes it much easier to maintain, and it is more flexible and safer while adding features that are simply not possible with legacy storage stacks.
    If you are used to ZFS you probably know what I'm talking about. If not: feel free to give it a try; it will probably change your mind.
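    Just to illustrate what "replacing the whole stack" means in practice, a minimal sketch (disk and dataset names are made up): one pool provides the redundancy, and datasets replace the partitioning and resizing games:

        zpool create tank raidz2 sda sdb sdc sdd sde sdf    # redundancy without a separate RAID layer
        zfs create -o compression=lz4 tank/archive          # per-dataset options instead of fixed partitions
        zfs set quota=24T tank/archive                      # sizing is just a property, no LVM resize needed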


    Once again, nothing I recommended would make a difference to what type of files are stored on it, and these methods are common in the enterprise environment. The reasons you are using zfs are the same reasons you would need a SAN. The SAN could even use zfs for its filesystem.

    I'm sorry, but three times the speed when synchronizing a large file repository is what you call "not a difference"? I'm replacing a system that was built just as you propose, precisely to get rid of these limitations while staying within budget. I still have the old NAS running, compared the two, and the benchmark results were pretty impressive, as I already told you.
    It is simply absurd to claim that there is no difference when the numbers already prove otherwise.
    It is of course your choice to buy highly expensive SANs, use LVM or whatever you want, but telling someone who has found a solution that works great for his needs that he should stick with what he had before the improvements doesn't make much sense to me.


    What is odd is that you only have 24 TB of storage and need this kind of speed. What kind of files are you working with? What kind of drives make up the 24 TB?

    The NAS is one of several VMs that are supposed to run on this one host. The capacity we have projected over a longer period is roughly 24 TB including a buffer; that's the storage need for this system. I don't want to 'waste' storage space by assigning it to this VM if it is never needed; it could be put to better use elsewhere.
    The system will replace our long-term archive for expertise documents. These are mostly office files, images, CAD files etc., but a lot of them, and the archive is continuously growing. The reason I want the synchronization to be as efficient as possible is that it shouldn't hurt the other VMs in terms of IO performance more than necessary.

    Reliability? A VM is no less reliable than a container. The performance is pretty close. Is your hardware spec'd that close that it can't handle the little bit of overhead from a VM?

    OK, reliability was not the right term. I did not want to bash full-virtualization VMs at all; I have been using them for a long time and there is no way around them in some cases. And you're right that for most scenarios the performance hit is not that big. But when it comes to storage IO, even with virtio etc., the difference is quite noticeable, and in my specific case it is huge. We decided to go with an SSD-based metadata cache on ZFS because the overall speedup when syncing several TB is dramatic: the sync took about 30% of the time it took on XFS. This is of course not a VM vs. container issue as such, but I can't make use of ZFS features like that if I use a zvol in a KVM guest and format it with XFS. That is really not an option.
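    In case anyone wonders what I mean by the SSD-based metadata cache, it is along these lines (pool, dataset and device names are just examples; which variant applies depends on the ZFS version in use):

        # L2ARC on an SSD, restricted to caching metadata
        zpool add tank cache nvme0n1
        zfs set secondarycache=metadata tank/archive

        # or, on newer ZFS releases, a dedicated 'special' vdev holding the metadata
        zpool add tank special mirror nvme0n1 nvme1n1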



    This just isn't true. If the guest uses LVM, it is very easy and safe to grow a filesystem. I use VMs at work (1000s) and at home (10s) and I honestly don't see why you can't use a VM. We have to grow filesystems on live production systems all the time.

    To make sure we're talking about the same thing: if the disk gets too small, I need to shut down the KVM guest and grow the block device, right? Then I need to boot up the machine (probably with a boot CD if the root volume is affected), grow the LVM2 volume and then the filesystem (let's assume XFS).
    Is there another way I don't know about? Because this kind of FS grow operation is not something you want to do on a production system without a full backup, right? Well, this may depend on how much risk you are willing to take, but the docs clearly tell you not to do this without a backup because the process can fail and may cause data loss. LVM is not COW and the operations are not reversible, so this can't be safe.


    Please don't get me wrong: for a NAS system that is used to store documents and media files, your solution would be fine. But there is a simple reason why we decided to use ZFS with this specific configuration: compression, deduplication, checksums and the metadata cache (speed). The filesystem already exists (on the host).
    All I need is a web GUI that gives my customer the option to add users, groups, shares etc. without using the terminal. OMV is much more than that, but I hope to be able to use it for my needs anyway.

    Hi ryecoaaron,


    thank you for your response.


    I think using a KVM VM is not a viable option for me. The host uses ZFS and its features are used extensively. Using a zvol, or even worse a qcow2 disk image, and formatting it with something like ext4 in the VM would mean a huge performance hit, especially as the system will use rsync to do file-based backups of two file servers. Rsync is a lot faster now with the separate metadata storage that ZFS offers. Another point is that I don't want to dedicate all the resources this VM will need later on right from the start. With LXC I can safely grow the filesystem while the system is running (see the sketch below); the storage capacity of the NAS is supposed to reach 24 TB, and with a KVM disk there would be no way of extending it safely without a full backup.
    I don't want to get into all the details here, but you're right: the problem I ran into would be solved with a real VM. However, the priority in this case is on reliability, performance and extensibility. For a small NAS box with less load this would be fine, I guess.
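    Roughly, growing while running on a Proxmox host with a ZFS-backed container comes down to something like this (container ID, dataset name and sizes are only examples):

        # grow the container's root filesystem online from the Proxmox host
        pct resize 101 rootfs +4T

        # or, if the data lives on a plain host dataset mounted into the container,
        # simply raise the quota on the host
        zfs set quota=24T tank/nas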


    If I can afford the time, I may dig into this on my own, since I don't think it would be very difficult to replace the blkid calls with something that cheats OMV into letting me select a folder in the "shared directories" wizard (a rough sketch of the idea follows below). I only need NFS, SMB/CIFS and rsync.
    If it turns out that it does not work or is too complicated, I may need to find another web-based admin panel to configure the NAS services (maybe I'll give Ajenti a try, or stick with Webmin).
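    The rough, completely untested idea would be to divert blkid and answer with a synthetic entry for the container's root, so OMV has something to offer in the dialog (the UUID and type below are made up, and OMV may well validate the entry elsewhere, so this is only the direction I would explore):

        # divert the real blkid and replace it with a tiny wrapper
        dpkg-divert --rename --add /sbin/blkid
        echo '#!/bin/sh' > /sbin/blkid
        echo 'echo "/dev/root: UUID=\"00000000-0000-0000-0000-000000000001\" TYPE=\"ext4\""' >> /sbin/blkid
        chmod +x /sbin/blkid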


    Any ideas are still welcome!


    Andreas

    Hi there,


    I have a virtualization server based on Proxmox and I want to add NAS capabilities to it. I created a dedicated LXC container with Debian 8.6 (jessie) and installed OMV 3.0.57 (Erasmus) from the public APT sources.
    The installation gave me errors in two or three install scripts that are executed at the end of the installation process. I took a look at these scripts: they were trying to alter the boot configuration (which I don't have, since this is a container-virtualized system using the host kernel and boot configuration) and to detect the connected drives with blkid (which also failed, since there are no block-level devices associated with the container).
    I skipped these steps by replacing them with 'echo 0;' entries in the scripts, and the installation finished.
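    For anyone who wants to reproduce this: the scripts in question are the package maintainer scripts under /var/lib/dpkg/info/ (openmediavault.postinst among them, if I remember correctly). After neutralizing the failing boot-loader and blkid calls there, the configuration can be finished with dpkg:

        # inspect the maintainer scripts that failed
        ls /var/lib/dpkg/info/openmediavault.*

        # after editing out or replacing the failing lines, let dpkg finish the job
        dpkg --configure -a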


    I can use the web interface, start services, create certificates, users, groups and so on. Basically everything works except creating shares. This is true for all types (FTP, NFS, rsync, SMB/CIFS ...), and they all fail for the same reason: there are no disks, no drives, no filesystems.


    What I need is a 'dummy' drive (which can be / or 'root') so that I can create shared folders and select my root FS in the dialog. I cannot select or create a drive since blkid does not return anything, so I guess I need to do this manually. If this were possible, I would expect to be able to create network shares in the services section referencing this shared folder.
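    One direction I have been thinking about (completely untested, and loop devices may not even be available inside an unprivileged container) is a small loop-backed filesystem, just so blkid has something to report:

        # create a small image file, attach it as a loop device and put a filesystem on it
        fallocate -l 1G /root/dummy.img
        losetup /dev/loop0 /root/dummy.img
        mkfs.ext4 /dev/loop0
        mkdir -p /srv/dummy
        mount /dev/loop0 /srv/dummy    # blkid should now list /dev/loop0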


    I'm aware that my use case is very different from what OMV expects (a bare-metal installation). But my virtualization server is focused on hosting VMs for different needs, and I would like OMV to manage my NAS services since I like the web interface (very slim, extensible and easy to use).
    This use case may become more and more common; I've seen others asking for this in similar scenarios.


    Any help would be greatly appreciated!


    Andreas