Resurrecting this thread. Still a problem.
Posts by nexusone
-
BTW, here is how I am testing it under OMV, using tests published by StorageReview:
http://www.storagereview.com/f…ester_synthetic_benchmark

I need to get around to booting back into Solaris to get a good comparison. If you see anything wrong with how I'm testing, please point it out.
337k random read IOPS with 4k blocks isn't bad for off-the-shelf components.
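A quick sanity check on that headline figure: bandwidth should equal IOPS times block size. Using the numbers fio reports in the 4k run:

```shell
# Cross-check fio's reported bandwidth against its IOPS figure:
# bandwidth = IOPS x block size (values from the 4k run).
iops=334555
bs_kib=4
echo "$(( iops * bs_kib / 1024 )) MiB/s"   # prints "1306 MiB/s", matching fio's 1306.9MB/s
```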
root@NAS:/media/abbc8605-6e68-4d2e-a1e6-839d087b6b14# fio --filename=/dev/fioa1 --direct=1 --rw=randrw --refill_buffers --norandommap --randrepeat=0 --ioengine=libaio --bs=8k --rwmixread=70 --iodepth=16 --numjobs=16 --runtime=60 --group_reporting --name=8k7030test
8k7030test: (g=0): rw=randrw, bs=8K-8K/8K-8K, ioengine=libaio, iodepth=16
...
8k7030test: (g=0): rw=randrw, bs=8K-8K/8K-8K, ioengine=libaio, iodepth=16
2.0.8
Starting 16 processes
Jobs: 16 (f=16): [mmmmmmmmmmmmmmmm] [100.0% done] [813.7M/348.9M /s] [104K/44.7K iops] [eta 00m:00s]
8k7030test: (groupid=0, jobs=16): err= 0: pid=16179
read : io=51992MB, bw=887157KB/s, iops=110894 , runt= 60012msec
slat (usec): min=2 , max=2146 , avg=19.31, stdev=20.32
clat (usec): min=1 , max=207341 , avg=2073.32, stdev=10101.48
lat (usec): min=27 , max=207378 , avg=2092.93, stdev=10101.55
clat percentiles (usec):
| 1.00th=[ 187], 5.00th=[ 203], 10.00th=[ 217], 20.00th=[ 241],
| 30.00th=[ 270], 40.00th=[ 298], 50.00th=[ 330], 60.00th=[ 370],
| 70.00th=[ 438], 80.00th=[ 652], 90.00th=[ 1656], 95.00th=[ 2224],
| 99.00th=[55552], 99.50th=[76288], 99.90th=[123392], 99.95th=[146432],
| 99.99th=[183296]
bw (KB/s) : min=31104, max=70528, per=6.26%, avg=55496.59, stdev=4873.52
write: io=22300MB, bw=380510KB/s, iops=47563 , runt= 60012msec
slat (usec): min=1 , max=6481 , avg= 9.41, stdev=12.67
clat (usec): min=0 , max=11835 , avg=485.02, stdev=375.43
lat (usec): min=28 , max=11840 , avg=494.69, stdev=376.14
clat percentiles (usec):
| 1.00th=[ 108], 5.00th=[ 149], 10.00th=[ 177], 20.00th=[ 231],
| 30.00th=[ 282], 40.00th=[ 338], 50.00th=[ 394], 60.00th=[ 462],
| 70.00th=[ 540], 80.00th=[ 652], 90.00th=[ 860], 95.00th=[ 1128],
| 99.00th=[ 1912], 99.50th=[ 2384], 99.90th=[ 3824], 99.95th=[ 4256],
| 99.99th=[ 5280]
bw (KB/s) : min=13696, max=31040, per=6.26%, avg=23802.59, stdev=2248.02
lat (usec) : 2=0.01%, 4=0.01%, 10=0.01%, 20=0.01%, 50=0.01%
lat (usec) : 100=0.19%, 250=23.18%, 500=48.50%, 750=11.60%, 1000=3.80%
lat (msec) : 2=7.96%, 4=2.10%, 10=0.32%, 20=0.48%, 50=1.05%
lat (msec) : 100=0.66%, 250=0.17%
cpu : usr=2.91%, sys=8.12%, ctx=12457814, majf=0, minf=415
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=100.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
issued : total=r=6655011/w=2854397/d=0, short=r=0/w=0/d=0
Run status group 0 (all jobs):
READ: io=51992MB, aggrb=887157KB/s, minb=887157KB/s, maxb=887157KB/s, mint=60012msec, maxt=60012msec
WRITE: io=22300MB, aggrb=380510KB/s, minb=380510KB/s, maxb=380510KB/s, mint=60012msec, maxt=60012msec
Disk stats (read/write):
fioa: ios=6645181/2850193, merge=0/0, ticks=13752180/1339596, in_queue=1188868, util=100.00%

root@NAS:/media/abbc8605-6e68-4d2e-a1e6-839d087b6b14# fio --filename=/dev/fioa1 --direct=1 --rw=randrw --refill_buffers --norandommap --randrepeat=0 --ioengine=libaio --bs=4k --rwmixread=100 --iodepth=16 --numjobs=16 --runtime=60 --group_reporting --name=4ktest
4ktest: (g=0): rw=randrw, bs=4K-4K/4K-4K, ioengine=libaio, iodepth=16
...
4ktest: (g=0): rw=randrw, bs=4K-4K/4K-4K, ioengine=libaio, iodepth=16
2.0.8
Starting 16 processes
Jobs: 16 (f=16): [rrrrrrrrrrrrrrrr] [100.0% done] [1316M/0K /s] [337K/0 iops] [eta 00m:00s]
4ktest: (groupid=0, jobs=16): err= 0: pid=16228
read : io=78418MB, bw=1306.9MB/s, iops=334555 , runt= 60005msec
slat (usec): min=1 , max=17847 , avg= 9.46, stdev=28.20
clat (usec): min=0 , max=69459 , avg=753.73, stdev=3711.59
lat (usec): min=13 , max=69486 , avg=763.50, stdev=3712.37
clat percentiles (usec):
| 1.00th=[ 89], 5.00th=[ 94], 10.00th=[ 97], 20.00th=[ 101],
| 30.00th=[ 104], 40.00th=[ 106], 50.00th=[ 110], 60.00th=[ 114],
| 70.00th=[ 120], 80.00th=[ 133], 90.00th=[ 366], 95.00th=[ 1004],
| 99.00th=[21888], 99.50th=[29312], 99.90th=[42752], 99.95th=[48384],
| 99.99th=[57088]
bw (KB/s) : min=19008, max=102664, per=6.25%, avg=83613.59, stdev=9722.02
lat (usec) : 2=0.01%, 4=0.01%, 10=0.01%, 20=0.01%, 50=0.03%
lat (usec) : 100=16.59%, 250=71.57%, 500=3.55%, 750=2.09%, 1000=1.13%
lat (msec) : 2=1.03%, 4=0.58%, 10=1.10%, 20=1.16%, 50=1.11%
lat (msec) : 100=0.04%
cpu : usr=4.22%, sys=14.11%, ctx=16293742, majf=0, minf=405
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=100.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
issued : total=r=20074992/w=0/d=0, short=r=0/w=0/d=0
Run status group 0 (all jobs):
READ: io=78418MB, aggrb=1306.9MB/s, minb=1306.9MB/s, maxb=1306.9MB/s, mint=60005msec, maxt=60005msec
Disk stats (read/write):
fioa: ios=20034349/0, merge=0/0, ticks=14117132/0, in_queue=0, util=0.00%
-
nexusone:
Can you please add links to the drivers or commands to install the drivers and utilities. Then we can make a guide out of it.
Thx

Sorry, the drivers aren't publicly distributed. People will need to download the drivers from their FusionIO support account. The installation is trivial:
1) Download your driver and utilities package somewhere
2) Unpack the archive
3) Install the package (e.g. dpkg -i *.deb)

Here is a dump of my bash history that might help someone. Adjust the version numbers to match whatever you have, of course.
1) apt-get install pciutils
2) apt-get -f install
3) tar xvf fusionio-files-xxxxxxxxxxxx.tar
4) cd fusionio-files-xxxxxxxxxxxx/ioScale/Linux_debian-wheezy/3.2.9/Utilities/
5) dpkg --install *.deb
6) cd fusionio-files-xxxxxxxxxxxx/ioScale/Linux_debian-wheezy/3.2.9/Software\ Binaries/
7) dpkg --install iomemory-vsl-3.2.0-4-amd64_3.2.9.1461-1.0_amd64.deb
8) Reboot
9) Verify the driver is loaded and working using fio-status

Here is a list of the packages I installed:
fio-common_3.2.9.1461-1.0_amd64.deb
fio-preinstall_3.2.9.1461-1.0_amd64.deb
fio-sysvinit_3.2.9.1461-1.0_all.deb
fio-util_3.2.9.1461-1.0_amd64.deb
libvsl_3.2.9.1461-1.0_amd64.deb

If you want to update your firmware:
1) cd fusionio-files-xxxxxxxxxxxx/ioScale/Linux_debian-wheezy/3.2.9/Firmware/
2) dpkg --install fio-firmware-fusion_3.2.9.20141008-1_all.deb
3) fio-update-iodrive fusion_3.2.9-20141008.fff
4) Reboot
-
For anyone finding this thread later....
1) Get your debian drivers for your card
2) Install the VSL and supporting utility packages as desired (use SSH; you can't do this from the GUI). Reboot to load the drivers. You can verify operation using fio-status from the CLI, assuming you installed the supporting utility packages.
3) Create your volume and mount like normal.
4) Make sure you configure the driver with an snmp trap to send alerts on alarm conditions, if you're worried about that.
5) Enjoy your OMV+FIO awesomeness.
6) The physical disk screen does NOT show the model, serial number, or vendor of the device. Don't worry about this; it's working fine. It could probably be fixed easily with some hacking to pull status from the fio-status tool.
-
That update to system.inc seems to have done the trick. Nice work!
-
I can manually mount the volume from the CLI and it will show up in OMV now. It still won't mount the volume from inside OMV. Odd.
-
Manually wiped the FIO volume, now I'm able to create a volume inside OMV. Still can't mount it however.
-
So it shows up in physical disks, but I'm unable to create or mount it from within OMV. So close.
Thanks for the help.
-
Other than copying this file into the correct location, is there anything I need to do? I don't see anything changed.
EDIT : Nevermind. A good old reboot cleared whatever was cached. Let me see how this works.
-
Thanks. I'll poke around and experiment with this.
-
Looks like the HP RAID status plugin is a good basis for collecting status from the fio device. There's a tool called fio-status that produces very similar output, so no problem there. I can hack that up after work today.
Can you point me to an example of a storage backend?
Thank you.
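As a starting point, a minimal health check in the spirit of that plugin could look like the sketch below. The "Media status" line format is an assumption on my part; verify it against what your fio-status version actually prints before relying on it:

```shell
# Minimal fio-status health check, modeled on how the HP RAID plugin
# greps its tool's output. The "Media status" line shape is an
# ASSUMPTION -- confirm it against real fio-status output.
check_fio_health() {
    # $1: captured fio-status output
    if printf '%s\n' "$1" | grep -q 'Media status: Healthy'; then
        echo OK
    else
        echo FAIL
    fi
}

# Usage (on a live system):
#   check_fio_health "$(fio-status)"
```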
-
That's unfortunate. It presents exactly like any other storage device. There is nothing special about it once the device driver is loaded, which it is. If you want to collect device status and such from it, then sure I understand that you need something unique to the device, but to mkfs and mount? That seems like it's probably a matter of adding the device identifier (/dev/fio*) to a list somewhere.
Thank you for the response. I'll poke around some more. Worst case I'll just symlink it into an existing share and be done with it.
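In case anyone wants that workaround spelled out, here is a rough sketch. All paths are examples, and the helper name is mine, not anything from OMV or FusionIO:

```shell
# Workaround sketch: mount the FusionIO volume by hand, then expose it
# inside an existing OMV share via a symlink. Paths are examples only.
link_fio_into_share() {
    # $1: device, $2: mount point, $3: symlink path inside an existing share
    mkdir -p "$2"
    mount "$1" "$2"
    ln -sfn "$2" "$3"
    echo "$3 -> $2"
}

# Usage (as root, with your own paths):
#   link_fio_into_share /dev/fioa1 /mnt/fio /srv/existing-share/fio
```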
-
solo0815: fio is a FusionIO card, a PCIe-attached SSD.
-
When I manually create the filesystem with mkfs.ext4, it creates the UUID and the partition then shows up in OMV, but I still can't mount it from the GUI, and the physical device doesn't show up. Mounting it manually from the CLI works perfectly.
Help. How do I tell OMV to behave?
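For the record, the manual sequence that does work from the CLI can be wrapped up like this. The helper and the /media layout are my own sketch of what OMV does, not an official interface; the device name is whatever your card shows up as:

```shell
# Manual mount by filesystem UUID, mimicking OMV's /media/<UUID> layout.
# Sketch only -- device name and base directory vary per system.
mount_by_uuid() {
    # $1: device (filesystem created beforehand, e.g. mkfs.ext4 "$1")
    # $2: base directory (OMV uses /media)
    uuid=$(blkid -s UUID -o value "$1")
    mkdir -p "$2/$uuid"
    mount "$1" "$2/$uuid"
    echo "$2/$uuid"
}

# Usage (as root):
#   mount_by_uuid /dev/fioa1 /media
```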
-
Where do I configure which device types show up as storage devices? OMV is not recognizing my /dev/fioa device. The device works fine; I can format and mount it from the CLI, but OMV ignores it.