Posts by Belokan

    Hello,


    I've finished a storage migration on my 2 OMV5 instances (VMs under Proxmox with pass-through disks) from various RAID flavors on several 2TB HDDs to a few unmirrored SSDs.


    Method used:


    0.- Update OMV to latest

    1.- Attach SSD to the OMV VM

    2.- Wipe the SSD

    3.- Encrypt the SSD (this is an extra step)

    4.- Create a single filesystem on the SSD

    5.- Initial copy (using rsync CLI) of the content of shared folders from the different FS (RAIDs) to the new FS (SSD) (pass#1; see the sketch after this list)

    6.- Stop all clients, and reboot OMV5 VM

    7.- Update copy (using rsync CLI) of the content of shared folders from different FS (RAIDs) to the new FS (SSD) (pass#2)

    8.- Move the Shared Folders using the GUI from old FS to the new one.

    9.- Stop/Start services (CIFS/NFS/RSYNC) or reboot

    10.- Restart clients

    11.- Delete & umount old FS, delete/wipe old disks & detach HDD from the VM.


    Everything went smoothly, with very little downtime for the clients (steps 6 - 10), but each time I moved a shared folder that had an NFS share configured (all went fine for CIFS/rsync/FTP-only shared folders), NFS was not reconfigured correctly. In fact, the bind mounts under /export in /etc/fstab were not rewritten with the new FS mount point after the move + config update. I did not move all the shared folders at once but one after the other, applying the config each time.
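
    For reference, the kind of entry that should have been rewritten is the OMV-generated bind mount in /etc/fstab, which looks something like this (paths and share name are illustrative):

    Code
    # bind mount generated by OMV for an NFS-shared folder; after the move it still pointed at the old FS
    /srv/dev-disk-by-uuid-<new-fs-uuid>/share1    /export/share1    none    bind,nofail    0 0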


    As I'm no Salt expert, I ended up removing each NFS share, checking that the bind mount was removed from fstab, and then recreating the share -> valid fstab config.


    So I guess there's a bug/missing action when moving a shared folder that has an NFS share configured from one FS to another in the GUI.


    Cheers

    Hello macom,


    Yes I did try already but:


    Error

    id: The value "OMV.data.Store.ImplicitModel-ext-8263-1" does not match exactly one schema of [{"type":"string","format":"fsuuid"},{"type":"string","format":"devicefile"}].


    I can't figure out where this comes from:

    Code
    <fsname>/srv/dev-disk-by-uuid-76f46ef6-15f3-4a18-b8d4-5f9c791975bd/nfsproxmox</fsname>
    <dir>/srv/dev-disk-by-uuid-76f46ef6-15f3-4a18-b8d4-5f9c791975bd</dir>
    <type>ext4</type>

    Because nfsproxmox was indeed a FS before the migration, located on a single HDD and named (if I remember correctly) /srv/dev-disk-by-label-nfsproxmox.


    As I haven't played around editing fstab or config.xml, except to try to remove the <mntent> block, is it possible that the "search & replace" procedure the GUI uses to modify config.xml when moving a shared folder from one FS to another hits a bug when the origin FS name and the shared folder name are identical?
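
    To cross-check, the filesystem and shared-folder entries OMV keeps in its database can be dumped and compared with /etc/fstab; a sketch, assuming the stock OMV5 CLI tools:

    Code
    # list the mount point entries OMV knows about (fsname/dir pairs, as stored in config.xml)
    omv-confdbadm read conf.system.filesystem.mountpoint | python3 -m json.tool
    # list the shared folder definitions and the mntentref they point to
    omv-confdbadm read conf.system.sharedfolder | python3 -m json.tool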

    Hello guys,


    I've completed the storage migration of one of my OMV5 instances. I moved from 4 HDDs (2x in R1, 2x basic) and 3 filesystems to a single encrypted SSD.

    I moved one FS after the other with the following procedure:


    1.- Rsync copy of shared folders from HDD to SSD (pass#1)

    2.- Stop clients (NFS & CIFS)

    3.- Rsync copy of shared folders from HDD to SSD (pass#2)

    4.- Move shared folders to the new FS using the GUI

    5.- Restart the clients and test

    6.- Umount/Delete old FS & wipe old disks


    Everything went fine (well, not really with NFS, but at least with CIFS) for the first 2 FS. But the same procedure with the last FS ended up with a missing FS in the GUI.

    Everything but this missing FS works fine after a reboot, but I'd like to get rid of it. I've tried to remove the <mntent> block (14 to 23) in config.xml, which was not present in /etc/fstab, but it was a mess after the reboot: it ended up with 2 missing FS, including the valid one this time.


    I can't find what is wrong. I guess something obvious is right in front of my eyes, but I need someone to point out the issue ;)





    Thanks a lot in advance !


    Olivier

    Extra question:


    On the 1st OMV I'm migrating right now, the "physical" disks are as follows:


    /dev/vda (boot)

    /dev/vd{b,c,d} = HDD

    /dev/vde = SSD


    I've encrypted /dev/vde (LUKS plugin), created a new FS and, after rsync copy, I'll move the shared folders to the new FS.

    What will happen when I detach the HDDs from the VM? I guess the SSD will become /dev/vdb. Will the FS & shared folders be fine with that? I've also created an entry like "vde-crypt UUID /key/file" in /etc/crypttab; I guess I'll have to adapt it to the new /dev/vdX.
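
    For what it's worth, as long as the crypttab entry references the LUKS container by UUID rather than by /dev/vdX, the renumbering should not matter; only the first field (the mapper name) is arbitrary. A sketch, with placeholder UUID and key path:

    Code
    # /etc/crypttab — target name, source device by UUID, key file, options
    vde-crypt    UUID=<luks-container-uuid>    /key/file    luks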


    Is it possible/better to use a udev rule to force the SSD to stay at /dev/vde? If yes, what kind of rule should I write?


    Thanks !

    OMV "out of the box" is a NAS frontend where DSM tends to become a full OS including/adding more and more functionalities.

    The purpose of a NAS is to host and serve files to clients, not to administer/manage the files themselves.


    If you want such functionality, you should find a web-based file manager; there are plenty of them around. Just be aware that most of them, once installed, will listen on the default HTTP port (80), the same one the OMV GUI uses. So switch OMV to HTTPS first, or make sure you can configure the chosen file manager to listen on a different port.

    Hello,


    I'm doing a storage migration on my 2 OMV5 instances (VMs under Proxmox with pass-through disks) from various RAID flavors on several 2TB HDDs to a few unmirrored "big" SSDs.


    Here's the plan:


    1.- Attach SSD to the VM

    2.- Wipe the SSD

    3.- Encrypt the SSD (this is an extra step; I've successfully tested the config in a test instance, unlocking with a key file located on an NFS share).

    4.- Create a single filesystem on the SSD

    5.- Stop all clients, and reboot OMV5 VM

    6.- Copy, using the rsync CLI, the content of shared folders from the different FS (RAIDs) to the new FS (SSD)

    7.- Move the Shared Folders using the GUI

    8.- Stop/Start services (CIFS/NFS/RSYNC) or reboot

    9.- Restart clients

    10.- Delete & umount old FS, delete/wipe old disks & detach HDD from the VM.


    I've already tested steps #5 to #9 because I needed to free some SATA connections on the host for the SSD. Everything went smoothly with CIFS, but it was not successful for NFS: /export/share1name and /export/share2name were empty. No error message, nothing visibly wrong, but I ended up deleting the NFS shares (GUI) and recreating them exactly the same to make them work again. /etc/exports was just fine, the nfs-server service was not complaining, and clients were able to mount the (empty) shares under /export ...
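
    For the record, the checks that show the symptom (the bind mounts simply not being active) are along these lines, using standard tools (the share name is illustrative):

    Code
    findmnt /export/share1name     # no output = the bind mount is not active
    grep /export /etc/fstab        # the OMV-generated bind entries and the FS they point to
    mount -a                       # mounts anything listed in fstab that is not yet mounted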


    Any idea on what I missed ?


    Thanks a lot in advance


    Olivier

    Hello,


    I'm back with a Xeon and with my 4 SATA disks directly attached to the VM.
    As the R5 (on the PVE node) was hosting both BOOT and DATA disks, I've moved the BOOT disk to CEPH and copied as much data as possible from the DATA qcow2 to the R5 before destroying the DATA disk (which was already using about 60% of the R5).


    I'm OK with performance now, as it's far above the simple 1Gbit/s link that serves the OMV instance.


    Question: I moved to a Xeon because VT-d was not available on my i3, but I've seen some threads on the PM forum where people were able to run VMs with SATA pass-through without VT-d ... Any idea what the status is on that? Does it just not work without VT-d, or is it more "virtualized", or ???


    Thanks !

    I may have found a Xeon to replace the i3 in my Gen8. I'll gain VT-d, and then I'll be able to pass through the R5 disks "as is" directly to the VM, right?

    Here is a test from the host's boot disk (SSD) to the host's R5 (4x2TB SATA):


    root@pve3:/tmp# rsync --progress 2gb.file /var/lib/vz/test/
    2gb.file
    2,007,521,280 100% 198.95MB/s 0:00:09 (xfr#1, to-chk=0/1)


    This is just after a reboot, and "2gb.file" is a random Linux ISO copied/renamed for the test, so no caching involved ... And the performance looks pretty good to me for an entry-level server.
    So the problem is definitely not the host's I/O. But as I've tested several VMs (not only the OMV one) with the same issue, there's only Proxmox left in the middle, right?


    Have a nice day.

    Hi,


    I've created a dedicated test VM, to avoid stopping/starting my OMV, and provisioned it with different disk formats (raw, qcow2 and vmdk). vmdk aside (very bad performance), there's no big difference between raw and qcow2.
    I've removed write barriers (barrier=0) in the host's fstab for the local storage, and performance seems a bit more stable.
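
    The barrier change is just a mount option on the host's local storage; something like this (device and other options are illustrative):

    Code
    # /etc/fstab on the PVE host — local storage with write barriers disabled
    /dev/md127    /var/lib/vz    ext4    defaults,barrier=0    0    2

    (Disabling barriers trades crash consistency for a bit of throughput, so this is only for testing.)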


    I'm doing some tests tuning the vm.dirty* ratios, but I'm not sure if I should modify the host, the guest or both ... We'll see.
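
    The knobs being tested, for reference (the values are just examples, not recommendations):

    Code
    sysctl vm.dirty_ratio vm.dirty_background_ratio   # show current values
    sysctl -w vm.dirty_background_ratio=5             # start background writeback earlier
    sysctl -w vm.dirty_ratio=10                       # block writers sooner, keeping less dirty data in RAM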


    I can't see why a software R5 on the host (which performs perfectly well so far) has such a big impact on the guests ... Is that why mdadm is not supported by Proxmox?

    Hello,


    I've installed qemu-guest-agent and enabled it in the VM options. I've changed the vCPU type from default to Host, without much change ...
    So far I haven't been able to "break" the R5 in order to test on a single disk, but the benchmarks made on the host itself showed that performance was more than satisfying.


    Olivier


    EDIT: Should I get different performance by replacing qcow2 with RAW?
    EDIT2: I've just mounted one of the VM's NFS shares on the host and ran a tar backup to the R5; the OMV GUI showed about 100% CPU usage during the process (4 vCPUs). Is that normal?
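
    The EDIT2 test was basically the following (hostname and paths are illustrative):

    Code
    mount -t nfs vmomv1:/export/share1 /mnt/test           # mount the VM's NFS export on the PVE host
    tar -cf /var/lib/vz/test/backup.tar -C /mnt/test .     # stream a tar of it onto the host's R5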

    Hi,


    I'll copy/paste a topic I've opened on the Proxmox forum regarding an I/O issue. Maybe you'll be able to help!


    -------------------


    Hello,


    I have a small 3-node cluster at home running PVE 4.4.12. It's based on 2x NUCs with Core i5 and 16GB plus, formerly, a virtual node (a VirtualBox instance on a NAS) used just for quorum purposes.


    As I've planned to replace one of my NAS units (Syno) with an HP µserver Gen8, I've installed PVE on it too, in order to get rid of the virtual node and virtualize the NAS instead.


    The HP boots from a dedicated SSD, and I've configured an R5 based on 4x2TB which is mounted under /var/lib/vz and acts as local storage.


    When I "bench" the local storage, using rsync --progress or scp, I'm able to write from local SDD to R5 at an average of ~250MB/s and from a remote client at an average of ~110MB/s (limited by the 1GB connection).


    I've created an OpenMediaVault3 VM locally with a 16GB Virtio/qcow2 disk for the OS and an extra 4.5TB Virtio/qcow2 disk for the DATA:


    bootdisk: virtio0
    cores: 4
    ide2: none,media=cdrom
    memory: 4096
    name: vmomv1
    net0: virtio=7E:6F:E9:E8:3B:D0,bridge=vmbr0
    net1: virtio=B6:F1:4E:D1:A7:61,bridge=vmbr1
    numa: 0
    onboot: 1
    ostype: l26
    scsihw: virtio-scsi-pci
    smbios1: uuid=4f89b895-e7a0-46ee-a95f-6a441a116191
    sockets: 1
    virtio0: local:114/vm-114-disk-1.qcow2,size=16G
    virtio1: local:114/vm-114-disk-2.qcow2,backup=0,size=4608G


    When I write files to this VM (I've made tests with a basic Jessie VM too), whether using SMB, NFS or scp/rsync, throughput starts around 80MB/s for a few seconds, then drops to a few MB/s, sometimes stalls, then climbs back to 50MB/s, and so on ... The average is about 15MB/s for my tests, based on a single 2GB file.


    During that period, Proxmox shows an IO delay of about 1 to 10% in the GUI. If someone could help me explain/tweak/analyze this behavior, I'd be really grateful!


    PS: I've tried adding RAM and vCPUs to the guest, and tried several kernels (3.16/4.8/4.9), disk emulations (ide/scsi/virtio) and caching options, with no luck ...
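
    As an example, one of the caching variants tried looks like this as the disk line in the VM config (/etc/pve/qemu-server/114.conf); cache=writeback is just one of the values tested:

    Code
    virtio1: local:114/vm-114-disk-2.qcow2,backup=0,size=4608G,cache=writeback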


    -----------------


    Thanks in advance for your help



    Olivier

    Same here; it was the case in my previous physical installation and now in my virtualized OMV3 (3.0.65).
    /etc/exports is populated and the bind entry for /export/new-share is added to fstab, but the mount /export/new-share command is not run while the configuration is being saved.
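
    The manual workaround in the meantime is simply to run the missing command by hand (share name is illustrative):

    Code
    mount /export/new-share    # fstab already has the bind entry, it just never gets mounted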


    Extra: a check should be done when extra options contain spaces before and/or after commas. It would be better to get a warning while adding the share than an error while applying the new configuration. Maybe the "Save" button in "Add share" should just strip spaces from the options string ...

    Hello Aaron,


    I've reinstalled my server as a Proxmox node and made an OMV3 VM.
    So far everything's OK, but I have 2 questions, more related to PM than to OMV; but as this is kind of a cross thread, maybe you'll help anyway ;-)


    1.- My VM is based on 2x qcow2 disks, vda 16GB and vdb 5TB. How can I back up only vda (the OS disk) under PM? All my VMs were based on a single vdisk so far ...
    Found the "no backup" option at the vdisk level ...


    2.- As R5 is managed by PM now, what kind of monitoring should I use for both SMART and MD health ?
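
    To be clear, what I have in mind are host-side checks along these lines (assuming smartmontools and mdadm are installed on the PVE node); I'm wondering what to use to run/alert on them regularly:

    Code
    smartctl -H /dev/sda                      # per-disk SMART health summary
    cat /proc/mdstat                          # quick md array status
    mdadm --detail /dev/md127 | grep State    # detailed array state (clean/degraded/...)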


    Thanks in advance,


    Olivier

    I don't think you can pass through a drive without VT-d. If you did have VT-d, you could pass through each drive and the array would assemble in the guest. I ran my server in Proxmox like that for a long time. Mounting the array in Proxmox wouldn't allow you to keep your data, but it would solve question #2.

    It looks like I always miss something to make this configuration viable ...
    I've double-checked, and my processor doesn't support VT-d, so no pass-through for the OMV VM. Next.


    I could install mdadm directly on Proxmox, but the wiki says mdraid is not supported for any version of Proxmox VE ... OK, let's install it anyway. It will probably discover my R5 out of the box and I'll be able to mount it under /var/lib/vz in order to have 5+TB of local storage. The data is still present in the md, but I can't just present a host directory to the guest, right?
    So let's get rid of the data; I'll just have to restart the rsync tasks from the Syno, not a big deal.
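
    A minimal sketch of that "install it anyway" path, assuming the existing array is detected as-is (device names from my setup, treat them as illustrative):

    Code
    apt-get install mdadm          # the PVE wiki says mdraid is unsupported, installing anyway
    mdadm --assemble --scan        # pick up the existing R5 (md127)
    mount /dev/md127 /var/lib/vz   # use it as the local storage directory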


    Should I keep this single md and use it to host qcow2 files? Should I create several vDisks for the OMV VM? Like a 32GB one for the OS and a 5TB one for the data? Same for the "iSCSI VM"?


    Advice welcome !


    Thanks,


    Olivier

    Hello there,


    As I have a question regarding OMV virtualization with Proxmox, I'll take my chance here instead of starting a new topic :-)


    Here is my current environment @home:


    2x Proxmox physical servers (i5/16GB NUCs) hosting several HA VMs (DNS, DHCP, Firewall, Apache, Gateway, remote DSL/4G access, DL tools, etc, ...)
    2x Synology NAS (412+/415+)
    2x Proxmox virtual servers (VBox), one on each NAS


    All 4 Proxmox nodes are in the same cluster and each has a quorum vote. Only the physical hosts run VMs (HA group).
    Apart from VBox, the NAS units serve iSCSI/NFS/CIFS and xFTP/rsync (+ some Syno tools).


    I've planned to replace the oldest Syno with an HP µserver Gen8 running OMV3. So far it is installed (i3 3240/8GB) and most data has been migrated, including the VBox instance. I had to play around with kernels, using Aaron's VBox packages, and regarding iSCSI I had to configure it outside of OMV (ietadm), as File IO LUNs are not available with the openmediavault-iscsi plugin.


    In another topic I created about iSCSI, Aaron pointed out that I could virtualize OMV and the iSCSI target. I can see some advantages in this solution:


    1.- If I install Proxmox on the Gen8, I can get rid of the 2x VBox instances, as I'll have a 3rd physical node to add to the cluster.
    2.- I'm no longer dependent on VBox (and iSCSI) for the kernel I use in OMV, which makes upgrades safer.
    3.- The remaining Syno will get rid of its last "community" package and won't suffer upgrade outages either.


    Now the estimated drawbacks:


    1.- I've created a single R5 using 4x2TB whole disks on the Gen8:


    root@omv1:~# cat /proc/mdstat
    Personalities : [raid6] [raid5] [raid4]
    md127 : active raid5 sda[0] sdd[3] sdc[2] sdb[1]
    5860150272 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]
    bitmap: 2/15 pages [8KB], 65536KB chunk


    I'd like, if possible, to keep the data already transferred... The Gen8 CPU supports VT-x but not VT-d; could I assign the 4 disks to the OMV VM and recover the md? Or should I mount the md under Proxmox directly and ???


    2.- If the answer to #1 is yes, I still won't have storage left to create the boot disk for the OMV VM, except on Proxmox's boot SSD, and that means it won't be mirrored. Same for the iSCSI VM, which will need its own storage to create the LUNs ...


    Maybe the easiest solution (as all the data is still available on the "to be replaced" NAS) is to restart from scratch. So how do I manage those 4 disks in order to get the following (a rough sketch follows the list):


    1.- Dedicated 100GB R5 for VM boot disks
    2.- Dedicated 500GB R5 for iSCSI LUNs
    3.- Dedicated R5 with the remaining space for OMV usage
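
    One way I can think of to get that split, sketched under the assumption that each disk gets three identical partitions and one md is built per partition set (with 4-disk RAID5 the usable size is 3x the partition size, so the sizes below are only rough):

    Code
    # repeat the partitioning on each of the 4 disks (sda..sdd); sizes are approximate
    sgdisk -n1:0:+34G -n2:0:+167G -n3:0:0 /dev/sda
    # build one RAID5 per partition set
    mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sd[abcd]1   # ~100GB usable, VM boot disks
    mdadm --create /dev/md1 --level=5 --raid-devices=4 /dev/sd[abcd]2   # ~500GB usable, iSCSI LUNs
    mdadm --create /dev/md2 --level=5 --raid-devices=4 /dev/sd[abcd]3   # remaining space, OMV usage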


    Thanks a lot in advance for your time and advice !


    Olivier

    Hello Aaron,


    Thanks for your detailed answer. I did not realize that 3.16 was the default kernel for OMV 3.0; I thought it was based on 4.x ... So no problem here.
    And if you don't see any potential issue with configuring and managing iSCSI LUNs "outside of OMV", that's fine with me.


    Now, regarding your last sentence, I know it is outside the scope of the original question, but could you be more specific about your configuration?


    Right now I have 2 physical Proxmox nodes on dedicated hardware and 2 NAS units serving NFS and iSCSI for the PM cluster, each NAS also hosting a virtual Proxmox node (VBox) used for quorum. This way, I'm able to use HA on a PM group based on the 2 physical nodes, and I can "lose" one physical node and still have a valid quorum of 3 votes.


    I can't do anything else with the Synology NAS, but I could convert the ProLiant running OMV into a 3rd PM node. Then create a VM locally on it with direct access to the 4x SATA disks and install OMV on it? And then this virtualized OMV would serve NFS/CIFS for my network (PM nodes, Windows shares, media server), with another VM used as a dedicated iSCSI target?


    I can imagine how it would work, and mostly how I could maintain it ... But it has a sort of "Inception" taste to it, don't you think? So please give me more detail if you can find some time; I'd really appreciate your input on this configuration!


    Cheers,


    Olivier