Posts by vshaulsk

    I run a three-node Proxmox cluster with several OMV VMs and one FreeNAS VM.


    My nodes are all Dell: one R720xd and two R620s (all the OS disks, and the disks used for VMs on the hosts, are SSDs/NVMe).


    My systems run 24x7, but I am also a little concerned about power usage, so I do the following, without any really noticeable performance impact.


    - On the Proxmox hosts I change the CPU governor from performance to powersave; this makes the biggest difference, a 20 to 40 W drop (see the sketch after this list).
    - My OMV VM is a little more complicated:
    - I have 20 disks in a separate disk shelf connected through an LSI 9300-8e; the HBA is passed from the Proxmox host to the VM (PCI passthrough), which lets the OMV VM see the disks directly and have full control. I have the disks set to spin down through the SMART settings within OMV, which saves a little power.
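    A minimal sketch of the governor change on a Proxmox host (assuming the CPU frequency driver exposes the powersave governor; cpupower comes from the linux-cpupower package):

        # Show the current governor on each core
        cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor

        # Switch all cores to powersave (does not persist across reboots on its own)
        echo powersave | tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor

        # Or, equivalently, with the cpupower tool
        cpupower frequency-set -g powersave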

    An update regarding my SSD testing / problem


    I installed a pair of Intel DC S3610 400 GB units and they performed as expected, whether tested individually or together in RAID 0.


    I then formatted and trimmed the Samsung 850 Pro SSDs using Windows, reinstalled a couple of them into OMV, and they now function properly.
    - It is as if they needed to be trimmed, which I thought was already being done through Linux (a quick way to check this from the Linux side is sketched below).
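    For anyone wondering the same thing, a rough sketch of checking TRIM under Linux (the mount path is just a placeholder for the actual array mount point):

        # Check whether the drives advertise TRIM/discard support
        lsblk --discard

        # Manually trim a mounted filesystem and report how much was discarded
        fstrim -v /srv/dev-disk-by-label-ssdpool

        # On newer Debian-based installs there is also a periodic timer
        systemctl status fstrim.timer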



    Maybe this is the difference between enterprise-grade SSDs and consumer ones......


    - I was going to use the 850 Pros in OMV as central shared storage over iSCSI/NFS for the three-node Proxmox cluster.
    - Now, instead of the 850 Pros, I think I am going to use my enterprise Intel DC S3610 400 GB drives....... they are probably better suited to that workload anyway.


    - Use the 850 Pros as local storage within one of the Dell R620s and over-provision the disks.

    I use both ConnectX-2 and ConnectX-3 single-port SFP+ 10G NICs.


    I have also used a Chelsio dual-port NIC.


    This is in both OMV3 and OMV4



    No issues with standard OMV install or function.



    However, if I run the dual-port Chelsio NIC with the Proxmox kernel, only one of the ports is recognized. This is the same on my two OMV servers and my three Proxmox servers.
    I believe this issue is related to how the NICs get detected and named under the Proxmox kernel.
    - There is a fix for this, but you have to use the command line: revert to the old naming scheme (eth0, eth1, ...) instead of the new predictable scheme (enp2s0, etc.), as sketched below.
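    A sketch of one common way to do that (assuming the stock GRUB_CMDLINE_LINUX line in /etc/default/grub; back up /etc/network/interfaces first, since the interface names referenced there will change):

        # Disable predictable interface names on the kernel command line
        sed -i 's/GRUB_CMDLINE_LINUX="/GRUB_CMDLINE_LINUX="net.ifnames=0 biosdevname=0 /' /etc/default/grub

        # Apply and reboot
        update-grub
        reboot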

    I originally ran RAID 5, RAID 50, RAID 10, and individual disks for testing.


    The performance of my SSD array (RAID 10 and RAID 5) has seemed really strange since the beginning of the year, but only now have I started really looking into it.


    Ultimately, I would like to use the SSD array for centralized VM storage over NFS or iSCSI, so that I can play around with high availability on the Proxmox nodes.

    I just VPNed into my systems and ran a different test:


    transferring a large 100 GB .mkv file over the network while watching iostat at a 2-second interval


    I wanted to see what happens to the drives/arrays during this large network transfer (the iostat invocation I mean is sketched below).
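    For reference, this kind of iostat run (the device names are just examples):

        # Extended per-device stats in MB, refreshed every 2 seconds
        iostat -xm 2

        # Or limit the output to the md arrays and a few member disks
        iostat -xm 2 md0 md1 sdb sdc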


    HDD 12 x 4 TB RAID 50: I saw write speeds of around 760 MB/s for the array.


    SSD 8 x 250 GB RAID 0: I still only saw a maximum of around 230-250 MB/s.
    - The same if I transferred 10 GB of very small files... still 230 MB/s max.


    Watching iostat confirms what I see when just looking at the network transfers. Transfers to my HDD array go really fast no matter how large the file is, while transfers to the SSD array are quick for a few seconds until the cache fills up and the data starts flushing to disk, at which point they slow down to around 200 MB/s, since that is what the SSD array is actually writing at.


    Now I totally don't understand what's happening.... the IOzone test would say my storage overall is really slow, but watching iostat tells me that the HDD array is capable of fast writes, while the SSD array shows slow writes.

    Going back to basics is my next step.


    1) Fresh, basic install of OMV 4.... maybe even try OMV 3 just for comparison.
    2) Test an SSD attached to a SATA II port on the motherboard (using the baseline write test sketched after this list). If OK, move to step 3; if not OK, is it a motherboard/BIOS problem?
    3) Test an SSD attached to the LSI 9211-8i HBA (6 Gb/s). If it does not work properly, move to step 4.
    4) Test an SSD attached to the 3 Gb/s LSI HBA from my previous build.
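    As a baseline write test for each step, something like the following fio run should do (the target path is a placeholder pointing at the SSD under test):

        # 10 GB sequential write with direct I/O, so the page cache doesn't hide the real speed
        fio --name=seqwrite --filename=/srv/ssd-test/testfile --rw=write \
            --bs=1M --size=10G --direct=1 --ioengine=libaio --iodepth=16

        # Quick-and-dirty alternative with dd
        dd if=/dev/zero of=/srv/ssd-test/testfile bs=1M count=10240 oflag=direct status=progress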


    If 3 and 4 fail on every PCI Express slot, but the drive works fine directly in a SATA port....... I would think this is either a BIOS setting issue or a motherboard issue???


    I have 4 other systems I will run some tests on for comparison: an R710, 2 x R620, and a custom FreeNAS build.

    So I tried an older firmware


    version 14 in IT mode, but I am still experiencing the same issue


    I have also tried pulling two of the three HBAs and leaving just the one connected to the SSD array.
    - I also reset the BIOS to factory settings.


    One last thing I was going to try is a fresh install of OMV on a new disk, just to see what the performance is.
    - No add-ons or modifications......... perhaps I broke something while messing with this system.

    SSD RAID array performance is worse than the HDD array when moving large files over the network


    Clients: Windows 10 VMs running inside the Proxmox hosts; jumbo frames turned on and connected through 10G SFP+.


    OMV: dual CPU, 2 x X5670 (2.9 GHz, 24 threads), 48 GB of RAM, connected over 10G SFP+.
    - HDD setup: 12 x 4 TB in a RAID 50 built with mdadm, XFS file system.


    Moving files from the Windows clients to the OMV HDD array over CIFS/SMB works great, with speeds around 800 MB/s+; even small random file transfers work pretty well.


    I also have 8 x 250 GB Samsung 850 Pro SSDs, which I have been playing around with.


    However, the performance is worse from the SSDs than from the HDDs.


    I have set up single drives, RAID 0, RAID 5, RAID 50..... with an XFS file system, and I keep seeing the following:
    - Moving files of around 6-9 GB works without issues: speeds are the same as the HDD array.
    - However, as files go above 9 GB in size, the transfer drops like a rock. On a 4-disk RAID 0 I see speeds drop to 130 MB/s, and on a single disk it goes down to 30 MB/s.
    - I also see my IOWAIT and system load go up compared to the HDD array: system load of 8+ and IOWAIT of 4% to 12% (moving files to the HDD array does not really change the system load or IOWAIT).


    I have even tried ZFS, and the performance is even worse.
    I also changed the SAS controller, but that did not make a difference. The SAS controller is an LSI 9211-8i HBA, fully updated and in IT mode.


    Has anyone run into anything like this, or know what else I should look at?

    For the network, so far I have changed some of the kernel memory buffers and the NIC ring buffers, set a 9000 MTU, and applied a few of the SMB-specific settings from the sticky found on this forum; a rough sketch of these changes is below. (I can share the exact settings.... I was going to post about my experience once I have the switch set up and have run some more tests/tuning.)
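    To give an idea of the kind of commands involved (the buffer sizes, ring values, and interface name are examples rather than my exact settings):

        # Raise the TCP read/write buffer limits (example values)
        sysctl -w net.core.rmem_max=16777216
        sysctl -w net.core.wmem_max=16777216
        sysctl -w net.ipv4.tcp_rmem="4096 87380 16777216"
        sysctl -w net.ipv4.tcp_wmem="4096 65536 16777216"

        # Enlarge the NIC ring buffers (check the supported maximums first with: ethtool -g eth0)
        ethtool -G eth0 rx 4096 tx 4096

        # Set a 9000 MTU on the 10G interface (must match the other end of the link)
        ip link set dev eth0 mtu 9000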


    My iperf tests show a 6.5 gigabit connection (still not sure why it is not the full 10 gigabit).... I have tried different cards and SFP+ cables, but it always seems to show about the same speed (the kind of iperf run I use is sketched after this list).
    - Direct connection using a DAC SFP+ cable (the UniFi switch has been on back order forever).
    - Connected to an R710 running a Proxmox 5.1 host, testing from a Windows VM installed on a Samsung 960 EVO passed directly to the VM; I also have NFS shares mounted via a virtual disk from Proxmox (the NFS export sits on my SSD RAID 10 setup).
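    For reference, a minimal sketch of the iperf3 run I mean (the server IP is a placeholder; multiple parallel streams sometimes get closer to line rate than a single stream):

        # On the OMV box
        iperf3 -s

        # On the client: -P 4 runs four parallel streams, -R tests the reverse direction
        iperf3 -c 192.168.1.10 -P 4
        iperf3 -c 192.168.1.10 -P 4 -R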


    I went from having speeds that bounced between 650 MB/s and 250 MB/s to now pretty consistently seeing 750 MB/s to 790 MB/s via CIFS (from the VM to the storage array.... the other direction is slower, again not sure why).
    Using NFS I sometimes see all the way up to 900 MB/s, but if the file is a couple of GB the speed drops down to 200-300 MB/s, not sure why.

    Again, I understand that it is not the best idea and that you do not recommend it.
    My SSDs are the same brand but were purchased at different times. I started with 4 from one purchase, 2 from another, and 2 more just a couple of days ago. So if I pair them correctly, the chances of them all failing at once are reduced enough for my comfort level. With the backup in place, I will be sleeping soundly at night.



    However, my original questions about the RAID 10 mdadm configuration and EXT4 partitioning are yet to be answered. Are the OMV defaults (mdadm/EXT4) the correct setup for a high-performance RAID 10 using SSDs?


    From recent experience, the default network parameters are fine for a 1 Gb LAN but not for a 10 Gb LAN. I am wondering if the same is true for my disk setup.

    tkaiser... I understand what you are saying about the SSDs, but with my budget and use case I think I will be OK......


    - My system is backed up to a FreeNAS box and an OMV box, and some of the data is also backed up onto USB and remote storage. In theory my LAN users can access the same data from three locations on the network (VStorage1, VStorage2, VStorage3), but through Zentyal AD I automatically mount the shares I want them to use, and permissions on the other targets prevent the users from writing. If there is a major issue, I can change the permissions and the logon script through AD to mount a different share..... only the file transfer performance would be reduced.


    - Sequential transfers don't really matter to me on the SSD array, as the big data sits on a 6 x 4 TB x 2 RAID 50 array (also backed up to FreeNAS plus another OMV system).


    I am not using ZFS on OMV because, for production, I wanted to stick with a file system natively supported by OMV.... I have fooled around with ZFS on OMV in the past, but the experience with the plugin was so-so.
    - I also wanted maximum performance from the hardware my budget allows.


    I would like the SSD array to have high IO and great performance.
    - It is used for small files being modified, opened, and transferred by users through SMB, and I also have virtual machine targets served over NFS to my Proxmox cluster (VM storage).


    I want to make sure that the OMV defaults for a RAID 10 SSD array are good, or whether I should create my own mdadm array through the command line.
    I would like to stick with the EXT4 file system, but I also want to make sure that OMV creates and mounts it with the correct options.


    I am not entirely sure how to make sure I am getting the best performance possible out of the array. Right now I feel (with no real data) that my SSD array is the same speed as or slower than my mechanical RAID 50 (7200 RPM enterprise drives).

    I am currently running a RAID 10 with 6 x 250 GB Samsung 850 Pro SSDs and an EXT4 file system.
    In the next couple of weeks I am going to rebuild the array as 8 x 250 GB Samsung 850 Pro SSDs, again with EXT4.


    On this array I house the user home directories, business files, Docker, Plex, and NFS mounts for VMs.


    I am trying to figure out whether the default values OMV uses when creating a RAID 10 with SSDs are correct. I would like to maximize the performance of this RAID.


    System details: the drives are attached via an 8-port LSI SAS HBA (6 Gb/s), so each drive has its own connection. The system itself has 48 GB of RAM and dual Xeon X5670 CPUs. The network is 10 Gb SFP+.



    The default setup of mdadm is a 512 KiB chunk..... is this correct for an SSD setup?
    As for the EXT4 file system, are the default /etc/fstab options sufficient for an SSD RAID? I see the discard flag, so I am assuming TRIM is active, but should something else be set, since this is a high-performance SSD setup? (A sketch of a fully manual setup is below for comparison.)
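    For comparison, a sketch of what a manual command-line setup might look like (device names and mount point are placeholders; the stride/stripe-width values assume an 8-disk RAID 10 with a 512 KiB chunk and 4 KiB blocks):

        # Create an 8-disk RAID 10 with the default 512 KiB chunk
        mdadm --create /dev/md0 --level=10 --raid-devices=8 --chunk=512 /dev/sd[b-i]

        # ext4 aligned to the array geometry: stride = chunk/block = 512K/4K = 128,
        # stripe-width = stride x data disks (4 in an 8-disk RAID 10) = 512
        mkfs.ext4 -E stride=128,stripe-width=512 /dev/md0

        # Example fstab-style options; noatime cuts metadata writes, and periodic
        # fstrim (fstrim.timer) is often preferred over the discard mount option
        # /dev/md0  /srv/ssd-raid10  ext4  defaults,noatime  0 2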


    Thank you !!

    Well, maybe I should start thinking about upgrading.....


    My OMV3 system has been running perfectly, so I am a little hesitant....... however, I should probably put together an upgrade plan.



    I have seen some posts with users upgrading from 3 to 4 with various levels of success.


    The other option is to just perform a fresh install and reconfigure everything..... kind of a pain.



    All the plugins I use are available in OMV4 except the sensors plugin :(
    - Antivirus
    - Docker
    - FTP
    - MySQL
    - NFS
    - Nginx ???
    - Rsnapshot
    - Rsync
    - SMB
    - SNMP
    - UPS
    - WOL

    I am just trying to figure out what the advantages of OMV4 over OMV3 are.


    I know that OMV4 is built on top of a newer version of Debian, but what are the benefits of upgrading?



    My main system is running OMV3 and I have a VM running OMV4.
    ...... Other than looking slightly different and not having all of the plugins available yet, OMV4 functions about the same.


    This makes me think that the real changes or advantages are under the hood, and unfortunately I am not knowledgeable enough to understand what the advantages of OMV4 are.
    - Why should I, or anyone else, upgrade?


    Thank you for any info.

    I run Plex in Docker on OMV 3, and it updates automatically when I restart the container.... which is way easier than having to update Plex Pass through the command line.


    In the beginning I did have a problem with the Plex Docker container starting transcodes, but that was because my config directory was placed on my storage pool.


    The storage pools within OMV are mounted with noexec, which prevents the Plex Docker container from working properly.
    I had to remount the storage with the exec flag, and now everything works properly (sketched below).
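    For anyone hitting the same thing, a minimal sketch of the change (the mount point is a placeholder; in OMV the fstab entries are generated by the web UI, so the extra mount option should really be set there rather than by hand-editing /etc/fstab):

        # Check the current mount flags for the pool
        findmnt /srv/dev-disk-by-label-pool

        # Temporary fix until the next reboot: remount with exec enabled
        mount -o remount,exec /srv/dev-disk-by-label-pool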

    I think whether you build an all-in-one NAS/server or separate systems is up to you.


    I run the following setup in production for a very small personal business:


    1 x Dell R710 on which I run Proxmox and Docker (running UniFi, Active Directory, an email server, LibreNMS, a web server, Nextcloud, and a Windows 10 VM), plus whatever I want to test (new OMV versions, etc.).
    - This system runs 6 x 1 TB 7200 RPM disks in a RAID 10, and the Windows VM is on a Samsung 960 EVO M.2 SSD attached through a PCI Express add-on card.
    - I run gigabit NICs and 10 gigabit SFP+ NICs.
    - Dual-processor system with 48 GB of RAM; it pulls around 200-220 watts on average.


    1 x OMV 3.0 NAS build: a dual Xeon CPU system with 48 GB of RAM and the OS installed on an SSD (I do not change much on this system and it runs stable).
    - First storage set: 6 x 250 GB SSDs in an mdadm RAID 10 with an EXT4 file system.
    - Second storage set: 12 x 4 TB 7200 RPM disks set up as an mdadm RAID 50 with an XFS file system.
    - This is my main storage, running SMB, NFS, Docker, rsync, plus a few other plugins (Docker runs Plex, PlexPy, and a few other stable containers I use for business and don't fool with).
    - I also use it as the backup target for the R710 and for NFS mounts over the 10 gigabit network.
    - Power consumption is around 350 watts at startup and settles to 250 watts.... maybe slightly more.


    1 x OMV 3.0 NAS: dual Xeon processors, 24 GB of RAM, OS installed on an SSD (oldest hardware, used as a backup for my main OMV box).
    - 8 x 2 TB 5900 RPM NAS drives in an mdadm RAID 6 with an EXT4 file system.
    - I currently turn this system on manually once a week (I can't get auto shutdown/startup working; one approach I may try is sketched after this list).
    - The system uses rsync to back up all critical information from my main OMV server over 10 gigabit SFP+.
    - It also has external USB storage attached for backup, which I disconnect when I turn off the system and keep at a different location (in the future I may back up directly off-site).
    - The system uses about 200 watts when on.
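    For the auto shutdown/startup piece, a sketch of one approach I may try (assuming the board's RTC wake alarm actually works; the date and MAC address are placeholders):

        # Shut down now and program the RTC to power the box back on at a set time
        rtcwake -m off -l -t "$(date -d 'next saturday 08:00' +%s)"

        # Or wake it over the network from another machine
        # (requires Wake-on-LAN enabled in the BIOS and on the NIC: ethtool -s eth0 wol g)
        wakeonlan 00:11:22:33:44:55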



    Basically, my R710 is backed up to my main OMV box (OMV1), and OMV2 backs up OMV1, with USB backing up the most critical data and being taken off-site.
    By combining Docker and full VMs, I am able to run and test any other software/programs I need.


    Personally, having the Dell R710 lets me try new software and even test new versions of OMV without worrying about breaking my main storage array or backup strategy.


    In the end, I think it all depends on your power consumption concerns and how many systems you want to manage.