OMV in Proxmox - no disks shown in create RAID dialogue

  • I'm using OMV 3.0.72


I'm trying to learn the basics of Proxmox, which is currently running as a VM under QEMU/KVM since it supports nested virtualisation.


I created an OMV VM within Proxmox with this config:



    I passed the two Proxmox drives through to the OMV VM with the qm set command.
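
    Something along these lines, where the VM ID and by-id paths are just placeholders for illustration, not my exact values:

    Code
    qm set 101 -virtio1 /dev/disk/by-id/ata-EXAMPLE_DISK1_SERIAL
    qm set 101 -virtio2 /dev/disk/by-id/ata-EXAMPLE_DISK2_SERIAL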


    I can see three HDDs in the OMV web UI - /dev/vd{a,b,c}


    I can create EXT4 filesystems on /dev/vdb1 and /dev/vdc1 and mount them.


    But when attempting to create a mirror RAID device, /dev/vdb and /dev/vdc do not appear - the device list is empty. They are mounted in the OMV VM:


    Code
    /dev/vdb1 on /srv/dev-disk-by-label-data1 type ext4 (rw,noexec,relatime,data=ordered,jqfmt=vfsv0,usrjquota=aquota.user,grpjquota=aquota.group)
    /dev/vdc1 on /srv/dev-disk-by-label-data2 type ext4 (rw,noexec,relatime,data=ordered,jqfmt=vfsv0,usrjquota=aquota.user,grpjquota=aquota.group)
    rpc_pipefs on /run/rpc_pipefs type rpc_pipefs (rw,relatime)
    binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,relatime)
    /dev/vda1 on /var/lib/docker/overlay2 type ext4 (rw,relatime,errors=remount-ro,data=ordered)


    I have not tried to create the RAID at the CLI. Is this a known bug/limitation of OMV?

  • Did you wipe the drives first?


    Sent from my SM-A520F using Tapatalk

    No. I had used fdisk at the CLI to put a GPT label on the virtual disks. Unmounting, wiping, and re-creating EXT4 filesystems makes no difference. Still no devices listed in the create RAID dialogue. In any case, how can you put a filesystem on a disk that has no DOS/GPT label?



    Is it not something as simple as the create RAID function not recognising /dev/vdX-type devices?

    • Official post

    RAID doesn't need a DOS/GPT label. Wipe the drives with: dd if=/dev/zero of=/dev/vdX bs=512 count=10000 and then see if you can add them to the RAID. I have used RAID in Proxmox before, but I am running ESXi now so I can't test.

    omv 7.0.5-1 sandworm | 64 bit | 6.8 proxmox kernel

    plugins :: omvextrasorg 7.0 | kvm 7.0.13 | compose 7.1.4 | k8s 7.1.0-3 | cputemp 7.0.1 | mergerfs 7.0.4


    omv-extras.org plugins source code and issue tracker - github - changelogs


    Please try ctrl-shift-R and read this before posting a question.

    Please put your OMV system details in your signature.
    Please don't PM for support... Too many PMs!

  • I think the OP is trying to create the RAID in the OMV VM, not in Proxmox.
    However, to create a RAID the disks should not be mounted.
    If they do have a partition table, then you need to add partitions, which will then be available for RAIDing.
    As it is, the disks are treated by the OMV RAID code as in use, hence not eligible for RAIDing.
    Either clear them out or add partitions to them, but do not mount them in any way.
    I have tested this in Proxmox before as well; it works.


    So, to recap:
    either zero out the disks with "dd" and see if they appear in the RAID dialogue as raw devices,
    or create a partition table and primary partitions on them and see if that works.
    DO NOT MOUNT them! (A quick sketch of both options follows below.)
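
    Roughly like this from the OMV VM's CLI; the device names are only examples, adjust to your disks:

    Code
    # option 1: unmount and zero the start of the raw disks
    umount /dev/vdb1 /dev/vdc1
    dd if=/dev/zero of=/dev/vdb bs=512 count=10000
    dd if=/dev/zero of=/dev/vdc bs=512 count=10000
    # option 2: instead, give each disk a partition table and one primary partition
    parted -s /dev/vdb mklabel gpt mkpart primary 0% 100%
    parted -s /dev/vdc mklabel gpt mkpart primary 0% 100%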


    Also, if the above does not work, try to pass them through as SCSI devices, not virtio.

    omv 3.0.56 erasmus | 64 bit | 4.7 backport kernel
    SM-SC846(24 bay)| H8DME-2 |2x AMD Opteron Hex Core 2431 @ 2.4Ghz |49GB RAM
    PSU: Silencer 760 Watt ATX Power Supply
    IPMI |3xSAT2-MV8 PCI-X |4 NIC : 2x Realteck + 1 Intel Pro Dual port PCI-e card
    OS on 2×120 SSD in RAID-1 |
    DATA: 3x3T| 4x2T | 2x1T

    • Official post

    I think the OP is trying to create the RAID in the OMV VM, not in Proxmox.

    I agree. That is why I wrote /dev/vdX.


  • The forum is not behaving well in firefox-esr... Where did my last post go?


    It was my basic stupid error: the disks were mounted! Obvious when I went to the CLI. Tips noted for future use...


    Code
    root@openmediavault:/# cat /proc/mdstat
    Personalities : [raid1] 
    md0 : active raid1 vdc[1] vdb[0]
          52396032 blocks super 1.2 [2/2] [UU]
          [=======>.............]  resync = 36.5% (19150784/52396032) finish=14.7min speed=37463K/sec
    
    unused devices: <none>
    root@openmediavault:/# exit
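
    For completeness, the same mirror could presumably be built straight from the CLI with something like the following (this is just my understanding of what the web UI does, not what I actually ran):

    Code
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/vdb /dev/vdc
    cat /proc/mdstat
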
    • Official post

    Where did my last post go?

    Sometimes posts are flagged by the sensitive spam filter and require moderation. I don't see any post in the moderation queue for you, though.


  • Well, I'm not sure what happened to that post, and I'm sorry for wasting people's time


    I don't know if you'll notice this question, but when should the Proxmox kernel in OMV be used? Is this for installing Proxmox in OMV?


    To answer @vl1969: in Proxmox itself, the disks passed as virtio to OMV actually hang off the Proxmox SCSI controller:


    • Official post

    I don't know if you'll notice this question, but when should the Proxmox kernel in OMV be used? Is this for installing Proxmox in OMV?

    I added the option to use the Proxmox kernel in OMV because it was a stable 4.4 kernel that still worked with VirtualBox, ZFS, and maybe iscsitarget when the 4.8 kernel was released from backports. Most of those (except iscsi) compile with the 4.9 kernel now. So, there are not many reasons to use it. You can't install Proxmox on OMV because they have conflicting dependencies.


  • I added the option to use the Proxmox kernel in OMV because it was a stable 4.4 kernel that still worked with VirtualBox, ZFS, and maybe iscsitarget when the 4.8 kernel was released from backports. Most of those (except iscsi) compile with the 4.9 kernel now. So, there are not many reasons to use it. You can't install Proxmox on OMV because they have conflicting dependencies.

    Thanks for the explanation .... Not sure if I'll ever use Proxmox in anger, but I was interested to see what it can offer.

  • You know, krisbee, I have to say thanks to you.
    Because of your post I finally figured out how to make a nested Proxmox setup work properly with networking.
    For some time, I could not get proper network access for any VMs I created in my test Proxmox setup, which I run as a VM in Hyper-V on my work PC. The Proxmox host would have proper network access, but any VM, be it a KVM guest or a container, would not. No matter what.
    But today, while trying to test your issue, I found a blog which gave me the answer I needed to make it work.
    Thanks.


    Sent from my SM-N910T using Tapatalk


  • @vl1969


    So, I didn't waste your time after all...


    I can't say networking is one of my strong points. Using QEMU/KVM with virt-manager on a Debian desktop, the networking between the Debian host and the Proxmox guest was created automatically by virt-manager (bridge & vnet), and between the Proxmox VM and the nested VMs it seems to work automagically (vmbr0 bridge and tap0, tap1 etc. on Proxmox itself).


    Returning to your point re: "pass them through as SCSI devices, not virtio": I am not sure if this relates to the idea of one IO thread per drive, as opposed to one for the whole SCSI controller. It's only today that I've read about this on the Proxmox wiki:


    https://pve.proxmox.com/wiki/Qemu/KVM_Virtual_Machines
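
    If I've read the wiki correctly, that would translate into VM config entries roughly like these (the VM ID and storage names are only illustrative):

    Code
    scsihw: virtio-scsi-single
    scsi0: local-lvm:vm-101-disk-0,iothread=1
    scsi1: local-lvm:vm-101-disk-1,iothread=1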


    So I can thank you for prompting me to do more reading.

  • Well, I had a bit more complicated issue on my hands :-).
    I like to do test setups using VMs most of the time.
    At work I use Windows everywhere, so my main hypervisor is Hyper-V,
    either on MS Server 2012/2016 or Windows 10.
    I do some of my testing on my work PC running Windows 10 and Hyper-V.
    Not that I spend a lot of time on this, but when I am on a break or have some downtime
    I might spin up a VM and try a config or scenario I am researching, etc.
    Ever since Hyper-V got the nested virtualisation capability (not supported in any way by MS, but it is there to use), I have been trying to build a test setup similar to what I plan to run at home,
    but I have been having problems getting the networking to work.


    That is, I can set up Proxmox in a VM and it has full network access and all; it can reach any PC on my network and the internet just fine.
    But any VM I spin up within the PVE is simply locked inside the PVE, with no outside connection at all.
    If I use NAT on the VM it gets a 10.0.x.x address, which is not part of my domain network at all.
    If I use a bridged NIC it seems it can get an IPv6 address but no IPv4 (192.168.x.x),
    and even if I set a static address, it never reaches outside the PVE host. It's like it was not even there.


    It turns out you need to set the host VM's (the VM where Proxmox is installed) network adapter to allow MAC address spoofing. If it is off, the nested network does not work at all.
    If it is on, all is as expected: the host (Proxmox) and its VMs can get out to the real world just fine.




    Returning to your point re: "pass them through as SCSI devices, not virtio".


    Well, as per the wiki, it is almost always preferred to use virtio whenever you can.
    If I have to guess, it is because virtio drivers are as close to bare metal as you can get:
    a lot less overhead than full hardware emulation, hence the speed and other benefits.
    However, sometimes you might need to go old school and use the full emulation driver.
    The last time I came across an issue with hard disks, it was suggested to try using SCSI instead of virtio, and it worked. Do note, however, that using SCSI devices instead of virtio limits you to 13 drives per VM, which is still better than SATA, as that has a limit of only 6 SATA devices per VM.


  • I wouldn't use virtio because of the missing TRIM support (it's only in SCSI!), so your host disks will fill up while you have only a few GB on the guest disks. And you have to run fstrim -av manually or by cron.
    And all vDisks on Proxmox are damn slow (5-35 MB/s). I would prefer to create an SMB/NFS share on the host (where you can use ZFS or RAID or whatever you want) and mount it in the OMV guest.
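
    If you do go with SCSI vdisks, what I mean is roughly this: enable the discard option on the vdisk and trim from inside the guest, manually or via cron (the VM ID and disk reference here are only examples):

    Code
    qm set 101 -scsi1 local-lvm:vm-101-disk-1,discard=on
    # then inside the guest:
    fstrim -av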

  • Drive passthrough works fine on Proxmox. You have to use virtio and no cache. This gives more than decent performance. The only drawback is that you lose SMART and spindown (which can be handled on the host if necessary).


    IMO, mounting SMB shares on the host defeats the purpose of using a NAS OS in the first place.

  • It works fine, but slowly. I made a lot of tests with different caches and virtio/ide/scsi. All of them are slow; the fastest was about 60 MB/s and that's way too slow for me.
    And move ~100GB to your VM and delete it again. Then tell me: has your disk on the Proxmox host shrunk by 100GB, too? I say: no! Because TRIM doesn't work with virtio, only with SCSI, and there only after manually running "fstrim".
    So there are the following options for fast disks with TRIM:
    - passthrough of the disks (IOMMU is NOT required for this)
    - passthrough of the controller (IOMMU is required for this)
    - using an LXC container with mount points, where everything is stored on the host (see the sketch after this list)
    - using Samba/NFS, which is very fast because of the internal virtio NIC (=10Gbit)
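
    To illustrate the LXC mount point option: a bind mount from the host into a container can be added roughly like this (the container ID and paths are only examples):

    Code
    pct set 101 -mp0 /tank/data,mp=/srv/data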


    If you don't need file serving, or only have a small amount of files, then a VM + virtio vdisks is fine. Otherwise not.


    And: spindown doesn't work on Proxmox. Proxmox actively prevents the disks from spinning down.

  • TRIM is for SSDs. There is no need for TRIM on HDD data drives. I got around 100-150 MB/s on my setup with virtio/no cache. Since I added encryption, it's down to 60 MB/s, which is enough for my usage.

  • That is simply wrong. TRIM tells your filesystem/disk controller that deleted, erased blocks aren't used anymore, but this isn't supported by virtio. It doesn't matter if it is an SSD or an HDD. When the vdisk doesn't tell your disk controller that the blocks are free, the backing disk gets flooded with data while the VM is almost empty. Believe me, I had this problem and tried a lot to manage it, but it didn't work. fstrim -av (if I remember right) reports the deleted blocks back to the (v)disk controller so they can be "seen" as free.
    Take a look here: https://pve.proxmox.com/wiki/Q…m/discard_and_virtio_scsi
    You can test it: take, for example, a 200GB HDD, create a 50GB vdisk, add it per virtio to a VM, transfer 50GB to it, delete it, and do this 4 times. You will see that your 200GB disk will be full, while the vdisk is empty.
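
    To check this yourself, compare what the guest thinks it uses with what the image actually occupies on the host, for example like this (paths and IDs are only illustrative):

    Code
    # inside the guest
    df -h
    # on the Proxmox host, for a file-based vdisk on directory storage
    du -h /var/lib/vz/images/101/vm-101-disk-1.qcow2
    # or, for LVM-thin storage, check the Data% column
    lvs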


    And 100-150 MB/s with what? SSD? ZFS? Ext4? How did you test it?
