Posts by vshaulsk

    Well, thank you for your comments and the reading material.



    I wanted to say a few things about my implementation.


    1) OMV is used for storage of the media files when the team is not working on them.
    - When one of the team members is working on a project, the files he/she needs are transferred onto their workstation, since the workstations are equipped for video/photo editing
    - Usually the same files are not accessed at the same time, but the system may be writing and reading multiple different files at the same time as employees work on different projects
    - Once finished with the product, they send the project back to OMV


    2) Plex is used for video preview/review for certain clients, which is why (rightly or wrongly) I chose to give the OMV box decently powerful hardware.



    Now I am still debating between ZFS and going back to mdadm raid6 + ext4 ..... currently I am playing with ZFS, but I may go back to the OMV-native setup, as I am worried about ZFS issues as OMV updates. I really need everything to be stable, mostly bulletproof, and to have great performance.



    Using 3rd-party software for ZFS snapshots is something I would like to avoid, as I would rather use the tools already within OMV. I don't have a lot of time or the knowledge, and if things go wrong .... I will have some frustrated people looking at me.
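
    That said, a one-off snapshot is just a built-in zfs command rather than extra software, so if I ever need one before a risky change it would look something like this (the dataset name is only a placeholder):

    # take a named snapshot of the media dataset
    zfs snapshot st2/media@before-cleanup
    # list snapshots, then roll back or delete the snapshot later if needed
    zfs list -t snapshot
    zfs rollback st2/media@before-cleanup
    zfs destroy st2/media@before-cleanup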



    Comparing the Bonnie++ output vs the Helios LanTest: the Helios LanTest gives much lower results than what I see when doing real network file transfers.


    Currently, for testing, I am using a Windows VM running on the Dell R710, as the workstations are not yet connected over 10Gb.
    - Transferring smaller files (20 GB test file) results in write speeds to OMV of 600 MB/s to 700 MB/s .... writes back to the VM are much slower, as I am limited by the disk speed of the VM


    - Transferring a large file (78 GB) results in write speeds to OMV of 250 MB/s to 300 MB/s ..... not sure if this is again caused by the VM not being able to push data quickly enough


    However, testing with Helios showed write speeds of 150 MB/s to 200 MB/s, so not even what I was seeing in the real-world tests for a single client.


    I understand that with multiple clients at the same time the real-world numbers would be lower.


    Helios also shows much lower performance over Gigabit vs what I actually see in real life.

    This brings me back to my original dilemma of which file system/setup I should use.


    I have good experience with mdadm raid6/ext4 as this is what I have been running for years. No issues, no trouble rebuilding the array, no trouble growing the array..... no issues when I have had to hard reboot the system.


    ZFS is definitely interesting, which is why I am giving it a try. However, it is not native to OMV and I won't use some of its features like snapshots.
    I typically use rsync and rsnapshot for backup to other storage (soon to be my old system).
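
    The backup job itself is basically just an rsync push of the shares to the second box, roughly along these lines (host and paths are placeholders):

    # mirror the business shares to the backup machine, preserving permissions, ACLs and xattrs
    rsync -aAX --delete /srv/pool/business/ backup-box:/srv/backup/business/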


    I am still not sure that ZFS is the right choice, or whether mdadm raid 6 with ext4, or maybe even one of the other Linux-native filesystems (btrfs or XFS), would be better ?????



    Other than the typical home and media files, this system is also used for my wife's business (weddings, events, etc... photos and video). Now that everyone has moved to 4K, the raw files are huge, which is why I built the bigger storage array and why the workstations will use 10Gb LAN. Four people moving files of up to 100 GB does take some time over standard gigabit.


    All the business files are backed up to second storage and also, depending on the final product size, are put on USB drives.



    So again, I am not sure what benefits ZFS would offer me ..... bitrot protection? Better data protection? Filesystem repair?
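
    As far as I understand it, the main extra over md/ext4 is end-to-end checksumming: a scrub reads every block, compares it against its checksum, and (where there is redundancy) repairs silent corruption. Using my pool name as an example:

    # read and verify every block in the pool
    zpool scrub st2
    # check progress and any checksum errors found/repaired
    zpool status -v st2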

    Thank you for the link !!!


    That was an interesting read and I will need to read through it again a couple of times.



    I guess my quick test just showed what I would expect ..... that the new system is much faster than the old system.



    I am wondering now how my ZFS setup would compare against the traditional mdadm raid 60 + LVM + ext4 ....... I might just run a few passive tests to see what the output is. Just don't know if I want to wait so long for the raid to build .... HAHAHA
    ZFS was so fast to put the pool together !!!

    Finally got the system up and running.
    I had to replace all the SAS cables that went from the HBAs to the SAS backplane of the Norco case. The ones that came with the HBAs were no good.


    I decided to try the ZFS plugin for OMV.
    The plugin seems to work all right.
    It gives an error if you try to expand a pool, but when you exit out and look at the filesystem, the expand has actually worked.
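
    When that happens I just double-check from the CLI that the new vdev really made it into the pool, e.g.:

    # show the pool layout, including any newly added vdev
    zpool status st2
    # show total/used capacity to confirm the pool actually grew
    zpool list st2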


    Below are some comparisons using BONNIE++ between my production OMV system and this new one.


    I have also done some real transfers using the 10Gb SFP+ connections, moving a 20 GB file over SMB from the workstation to the storage servers. These transfers confirmed the results of Bonnie++.



    Results + command run (the command was from the https://calomel.org/zfs_raid_speed_capacity.html post) .... hopefully it is the right one. I did change the size entries, as the old system has 24GB of RAM and the new one has 48GB of RAM.



    Old System:

    bonnie++ -u root -r 1024 -s 28672 -d /srv/dev-disk-by-label-home/test -f -b -n 1 -c 4
    bonnie++ -u root -r 1024 -s 28672 -d /srv/dev-disk-by-id-md-uuid-4af8d8bc-2ac91a1a-349b6049-3e392118/test -f -b -n 1 -c 4


    New System:


    bonnie++ -u root -r 1024 -s 51200 -d /st1/test/test1 -f -b -n 1 -c 4
    bonnie++ -u root -r 1024 -s 51200 -d /st2/test2/test2 -f -b -n 1 -c 4
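
    For anyone wondering about the switches (my reading of the bonnie++ man page):

    # -u root          run the test as root
    # -r 1024          RAM size bonnie++ should assume, in MiB
    # -s 28672 / 51200 size of the test file in MiB (sized above the installed RAM so caching can't hide the disk speed)
    # -d               directory, i.e. which array/pool to test
    # -f               fast mode, skip the per-character I/O tests
    # -b               no write buffering, fsync() after every write
    # -n 1             small file-creation test (1 x 1024 files)
    # -c 4             concurrency level of 4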



    Old System:


    2 x 300 GB WD 10K RPM raptor drives in mdadm raid 1 Ext4 file system


    W=105 MB/s RW=48 MB/s R=199 MB/s



    8 x 2 TB Seagate NAS 5900 RPM drives in mdadm raid 6, ext4 filesystem (advanced sector size and tuned mdadm)


    W=353 MB/s RW=215 MB/s R=613 MB/s



    New System:


    4 x 256 GB Samsung Pro SSDs, ZFS pool with 2 mirrored vdevs (ashift=12)


    W=634 MB/s RW=340 MB/s R=1463 MB/s



    12 x 4TB WD RE 7200 RPM enterprise drives, ZFS pool with 2 raidz2 vdevs (ashift=12)


    W=592 MB/s RW=340 MB/s R=1386 MB/s

    I have two OMV systems .... one is the new one above, which I have just put together this week.
    The second has been running for years and is updated to the same version and kernel.


    The differences are the case, the HBAs, and the fact that the old system has 4 fewer drives:


    - New system has 12 x 4TB 7200 RPM enterprise drives and 2 SSDs
    - Old system has 8 x 2TB 5900 RPM NAS drives and 2 x 300GB WD 10K RPM drives


    In the old system the HBAs are SAS 3Gb instead of SAS 6Gb like in the new system.
    Also, they are connected to the drives with SAS-to-SATA breakout cables.
    - The new system uses SAS-to-SAS cables from the HBAs to the hot-swap backplane.



    The old system has no problem finding all the drives and my mdadm raid 6 and raid 1 arrays.
    - I would not think that 4 more drives would all of a sudden cause a major problem, but maybe I am wrong.


    The new system ..... I think some of the drive cages are not working, so I am still going to check all the cables and HBAs. I need to rule out any issue with the hardware.

    I have OMV 3.0.94 with 4.9.0-0.bpo.4-amd64 kernel


    New second server with a Norco RPC-4220 case
    Two 4-port SAS HBAs that go to the Norco SAS backplane


    4 mini-SAS cables



    I am also having this issue when trying to create a raid array. When I click Create I get a "communication error" and no disks show up


    All the disks show up in the system and SMART shows no errors.



    I also tried creating a ZFS pool with 2 x raidz2 vdevs (2 x 6 disks) and I am getting errors there as well.


    It seems like something keeps timing out with the I/O.



    I am going to try different cables and test each one of my backplanes and HBAs individually, but I have a feeling it is something else.

    Yes, I saw the drives still appearing within the plugin as well .... this is a major drawback of the plugin.


    What do you mean avoid "export" ? Why would I export the pool ?





    My needs:


    1) Two pools
    - Pool one, with the SSDs, is for user home folders, business-critical shares and files, Plex, MySQL, databases, and some NFS/iSCSI targets for Proxmox
    - Pool two is for the large media storage used for the business (4K video and photos)


    Most of the shares are shared through CIFS/SMB for the Windows workstations.


    I need to be able to set per-user/group ACLs on the sub-folders within a share ..... I think I have figured this out, provided I turn on the correct attributes within ZFS.
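
    For my own notes, this is what I believe needs turning on (dataset, share and user names are just examples):

    # store POSIX ACLs on the dataset and keep extended attributes fast
    zfs set acltype=posixacl st1/shares
    zfs set xattr=sa st1/shares
    # after that, normal per-user/group ACLs work on the sub-folders
    setfacl -R -m u:editor1:rwx /st1/shares/projects/clientA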


    I also run rsnapshot, rsync, and USB backups .......
    I run the Plex, MySQL, Subsonic, and sensors plugins ....... beyond this I don't use any other plugins.

    Some updates:


    Put together the new hardware:


    Case: Norco RPC-4220 (20 hot-swap drive bays)
    CPU: 2 x X5670
    Ram: 48 GB DDR3-1333 ECC
    HBA: 2 x LSI 6Gb SAS
    NIC: 10Gb SFP+ (currently no switch so just testing it with direct connect)


    Hard Drives:
    - 4 x Samsung 850 Pro 256GB
    - 12 x WD RE 7200 RPM 4TB
    - 2 x 32GB SSD (was going to use these for the OS ... hoping I can somehow create a mirrored OS disk, but if not I will just use one and maybe clone it weekly to the second drive)



    I wanted to try ZFS. I thought about switching to FreeNAS for this build, as ZFS is native there, but I really like all the plugins in OMV and I am way more familiar with it.
    This brings me to implementing ZFS in OMV.


    Pool 1 = 2 mirrored vdevs out of the SSD disks
    Pool 2 = 2 x raidz2 vdevs - I think this will give me the best space utilization combined with performance


    I have played somewhat with the ZFS plugin in a VM, but I seem to get errors when trying to add more vdevs to a pool.
    Maybe I should not be using the GUI and should be using the CLI instead?
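
    If I do end up on the CLI, I think the layout above would look roughly like this (pool and device names are only placeholders; I would use /dev/disk/by-id paths on the real system):

    # pool 1: two mirrored vdevs from the four SSDs
    zpool create -o ashift=12 st1 mirror /dev/sda /dev/sdb mirror /dev/sdc /dev/sdd
    # pool 2: first 6-disk raidz2 vdev
    zpool create -o ashift=12 st2 raidz2 /dev/sde /dev/sdf /dev/sdg /dev/sdh /dev/sdi /dev/sdj
    # ... then the second raidz2 vdev added to the same pool (the step the GUI errors on for me)
    zpool add st2 raidz2 /dev/sdk /dev/sdl /dev/sdm /dev/sdn /dev/sdo /dev/sdp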


    What about tuning the system for performance/reliability? I know that in FreeNAS you have autotune, which has some benefits.



    Once I get this setup, I will run some benchmarks against my current production system and post the results.

    Tried a new, different passive SFP+ DAC cable and still can't get the NIC in OMV to come up.


    On the Proxmox side the NIC says it is up and the amber light is flashing.
    On OMV it is down and just has a solid green light .... the light turns green when the cable is plugged in, so the NIC at least sees that the module is plugged in.


    The driver is cxgb3toe, which is the Linux driver from the NetApp/Chelsio S320 website. (Same driver on both Proxmox and OMV.)
    ethtool shows the card and it is recognized by the system ..... comparing Proxmox and OMV, everything seems to be the same.



    Has anyone successfully used the dual 10GbE SFP+ Chelsio/NetApp S320 card with OMV?
    What SFP+ cards have people successfully implemented? Direct connection or through a switch?

    I recently purchased several Chelsio dual SFP+ 10Gb interface cards and DAC cables.


    I want to directly connect my (soon to be) two NAS systems (OMV) and my Dell R710 running Proxmox.


    Unfortunately, I can't seem to bring the SFP+ interfaces UP on my OMV.
    On the Proxmox system, I have no problem getting the interface to come up ......
    In OMV, however, the system sees the card, loads the driver, and in the GUI I can set the interface information (set a static IP with the correct subnet mask, but left the DNS and gateway empty).
    When I look up the interface status with 'ip link show', it always shows the SFP+ interfaces as down and I can't bring them up.
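
    What I have been checking so far (the interface name is just an example; mine may differ):

    # try to force the interface up and watch for a carrier
    ip link set dev eth2 up
    ip -s link show dev eth2
    # see whether the driver reports "Link detected" and what it negotiated
    ethtool eth2
    # look for cxgb3 driver messages about the link or the DAC module
    dmesg | grep -i cxgb3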


    I have tried different cards, but all have the same problem.


    I have tried both of the cables I have and still the same problem.



    On the Proxmox side the link shows up and the amber light is flashing as if it is trying to send data.


    On the OMV side the light goes from slow flashing green to just solid green when I plug in the DAC cable.


    Nothing happens and the interface won't go up.



    Any ideas? I have tried searching the OMV forums and the web, but so far nothing. I was thinking maybe it is something with the cables or the Chelsio adapter ....... I have a different NIC and a different cable coming, but since all the cards work under Proxmox they should work in OMV.


    The driver even shows the same version .....


    Any thoughts or help would be great .... this is my first dive into 10GbE and also SFP+.

    So I have been thinking about the information above.


    - Created an openmediavault VM and a FreeNAS VM to learn about ZFS


    - Had no problem creating, in a virtual environment, either mirror vdevs or raidz2.
    - Setting compression and the other options is pretty straightforward as well.



    One issue I ran into, which I don't see on the OMV VM I use for testing, is the write speed.


    On my regular OMV VM, it will max out the current gigabit LAN speed when writing files through SMB. This is just one virtual disk (no RAID) formatted with either Btrfs or ext4. In either case it maxes out gigabit LAN on both read and write (large files).


    When creating the same single large volume, but using ZFS on either OMV or FreeNAS, I am able to get read speeds which max out my LAN, but on write the data transfer is terrible. I lose anywhere from 20% to 50%.


    This behavior makes me concerned that if I use ZFS on the production server, I will lose write performance.



    My current production OMV uses 8 x 2TB (5900 RPM) Seagate drives in mdadm raid 6 for main storage. Running some tests, I believe it can read about 600 MB/s and write around 230-250 MB/s, which saturates the gigabit LAN on a single point-to-point connection.
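
    A quick way to sanity-check those numbers locally, outside of SMB, would be something like this (the path is just a placeholder for wherever the raid 6 volume is mounted):

    # sequential write test, ~20 GB, flushing to disk before the timing ends
    dd if=/dev/zero of=/srv/raid6/testfile bs=1M count=20480 conv=fdatasync
    # sequential read test, bypassing the page cache
    dd if=/srv/raid6/testfile of=/dev/null bs=1M iflag=direct
    rm /srv/raid6/testfile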

    Interesting, and thank you for the information.


    Raidz2 would perhaps be an option. I would have to get all the disks at once, since there is no way to expand a raidz vdev from what I know.
    Now, would 24 GB of RAM be enough? Should I add an SSD for caching if using ZFS?
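
    (If I do go ZFS and decide a read cache is worth it later, adding one looks simple enough; the pool and device names below are placeholders:)

    # add an SSD as an L2ARC read cache to an existing pool
    zpool add tank cache /dev/disk/by-id/ata-SSD-EXAMPLE
    # it can also be removed again without harming the pool
    zpool remove tank /dev/disk/by-id/ata-SSD-EXAMPLE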




    If I choose to go the old way of raid6, would ext4 be the best filesystem or should I use Btrfs or XFS?
    I know that Btrfs is a lot newer with more modern features. I just don't know if it is robust enough and whether I would get the same performance. I have read some mixed reviews.

    tkaiser - sorry, I know that ZFS mirror vdevs have advantages over old-style raid 10 .... I was just trying to relate it to what I know, to understand performance and disk space loss.


    I have thought about ZFS and am even thinking of using it on the couple of R710 servers running Proxmox (from what I understand I will be able to replicate a VM from one node to the other).



    The main concern I have is the huge amount of disk space I would lose going with your suggestion. Yes, I would have better performance on the large storage pool + quicker rebuilds + better utilization of the remaining space due to compression.


    However, all critical data on the large storage pool is backed up, so if somehow the system fails ... the data won't be lost. It will just take some time to recover and bring it back up.
    Also, based on my current raid 6 (8 x 2TB 5900 RPM drives), the performance should be good enough (500+ MB/s read and 230 MB/s write). With how the system is used, there is not a lot of random file access while we are transferring large files. The new system should have even faster reads and writes: the disks are faster, and on the read side I may currently be limited by the old SAS 3Gb links.


    Now, perhaps I should use a ZFS mirror-vdev pool for the 6 x 500 GB SSDs, since these disks will see the highest random IOPS and small-file access, plus they are the location for all databases, NFS mounts, and iSCSI LUNs. I would say the most critical data would actually be housed on this pool.



    This still leads me back to my question of what filesystem to use if I choose to stick with raid 6 on the large storage pool. Whether I go with 4TB or 6TB disks, I am looking at an array size of close to 30TB to 46TB. So which filesystem would I use ..... ext4 (what I currently use) is limited to 16TB from what I have read.
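
    (If I am reading the ext4 documentation right, that 16TB ceiling only applies to filesystems created without the 64bit feature, so something like this should be able to go larger, though XFS may still be the more common choice at this size; /dev/md0 is just an example device:)

    # create ext4 with the 64bit feature so the filesystem can exceed 16TB
    mkfs.ext4 -O 64bit /dev/md0
    # note: a filesystem originally created without the feature stays capped at 16TB unless it is converted first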


    Thank you !!

    Hello,


    Creating a new storage NAS:
    Specs:
    - Dual Xeon X5670, 6 cores / 12 threads each (12 cores / 24 threads total)
    - 24 GB DDR3-1333 ECC registered RAM
    - Norco RPC-4216 case
    - LSI SAS 6G HBA (2 cards)
    - 2 x 10G SFP+ Intel NICs (first experience with 10G & SFP+, so we will see how that works out)


    OS will be on its own SSD


    All critical information will be backed up to our current NAS, which has been running for years with no issues. Same specs, but less storage, which is what is driving this question.


    I am trying to figure out, based on the different disk arrangements, which is the best filesystem to use in each case.
    - On the current NAS I use ext4 for each mdadm raid volume; I have had great reliability and great performance.


    The new system will have a raid 6 volume over 16TB, so I am not sure ext4 will be the choice for that portion of the system.


    Attached is an Excel file explaining what I know about the storage system so far. I am looking for input on which filesystems to use based on requirements and usage.
    All RAID arrays are created by OMV.

    I don't believe you format the new drive with a filesystem before adding it.


    When I expand the raid in my system:


    1) Install the new drive
    2) Add it to my raid array and wait for the raid to sync
    3) Once the raid is synced, I go to the file system tab and extend the file system (a rough CLI equivalent of these steps is below)
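
    On the command line the same steps would look roughly like this (device names and the member count are just examples; OMV's Raid Management and File Systems tabs do this for you):

    # add the new disk and grow the array to use the extra member
    mdadm --add /dev/md0 /dev/sdX
    mdadm --grow /dev/md0 --raid-devices=9
    # watch the reshape/sync finish
    cat /proc/mdstat
    # then grow the ext4 filesystem to fill the larger array
    resize2fs /dev/md0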

    Here are the details of the error when trying to activate "Show Cores":