System check and harddisk/raid speed tests in a nutshell



    A collection of useful commands for inventory and speed tests


    Part I: Inventory


    Often people complain about low transfer speeds to their OMV box. This is not always a problem of network drivers, cabling or switching hardware, protocols or client OS versions. If the internal transfer speed of your box is slow, the network transfer speed cannot be fast either.


    So I thought it would be a good idea to collect system check and testing commands as a starting point for finding out what is going on inside your box if you are experiencing slow transfer speeds. This can help users gain information about their system and provides useful details for the supporters.
    There is a lot of information stored in this forum, but it is spread widely and sometimes hard to find.


    Ok, let's start with a system inventory. I've tested all these commands on my home box with an Asus ITX board; see my footer for more details.
    If you know what the OMV system (or, to be more precise, the Linux kernel) detects in your system, you can compare that to what your hardware should be capable of and spot differences. And you can make sure that the correct kernel driver is loaded for the hardware.


    1. Hardware overview


    For this we need the lspci command, part of the pciutils suite. It is not installed by default; you can install it with apt-get install pciutils. The command without parameters delivers a first overview; to go into detail we use one of the verbose levels. Because the verbose output is more than you can see on one screen, it is a good idea to redirect it into a file with lspci -v > lspciout.txt, so you have all the information in one file.
    The first verbose level, lspci -v, already delivers much more detail (to go deeper you can use -vv or -vvv). Of interest here is the information about the harddisk controllers.
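
    On this AMD chipset board the relevant SATA controller entry looks roughly like the following (an illustrative example; device names, IRQs and addresses will differ on your hardware):

    Code
    00:11.0 SATA controller: Advanced Micro Devices [AMD] nee ATI SB7x0/SB8x0/SB9x0 SATA Controller [AHCI mode] (rev 40) (prog-if 01 [AHCI 1.0])
    Subsystem: ASUSTeK Computer Inc. Device ...
    Flags: bus master, 66MHz, medium devsel, latency 32, IRQ 19
    Memory at ... (32-bit, non-prefetchable) [size=1K]
    Kernel driver in use: ahci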


    Ok, this board has five SATA ports; they are completely detected, they run in AHCI mode, and the ahci kernel driver is loaded.


    For compatibility purposes the PATA drivers are loaded as well:

    Code
    00:14.1 IDE interface: Advanced Micro Devices [AMD] nee ATI SB7x0/SB8x0/SB9x0 IDE Controller (rev 40) (prog-if 8a [Master SecP PriP])
    Subsystem: ASUSTeK Computer Inc. Device 8496
    Flags: bus master, 66MHz, medium devsel, latency 32, IRQ 17
    I/O ports at 01f0 [size=8]
    I/O ports at 03f4 [size=1]
    I/O ports at 0170 [size=8]
    I/O ports at 0374 [size=1]
    I/O ports at f100 [size=16]
    Kernel driver in use: pata_atiixp


    If your board has SATA ports and is AHCI-capable, but the ahci kernel driver is not loaded, then the onboard controller was probably not detected correctly, sometimes because of exotic hardware. Try installing the backport kernels.
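
    A quick way to see which kernel driver is actually bound to each storage controller (a small check, assuming pciutils is installed):

    Code
    # list the controllers together with the driver in use
    lspci -k | grep -iA 3 -E 'sata|ide'
    # check whether the ahci module is loaded
    lsmod | grep ahci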


    2. Detect the SATA levels


    Alright, let's do the next check: detect the SATA level for every single harddisk and its port. You need the "system names" like sda, sdb and so on to do this; they can be found in the web GUI under Storage / Physical Disks.
    We use the command hdparm -I /dev/sdc | grep -i speed (here for the disk sdc) and get this:

    Code
    root@DATENKNECHT:~# hdparm -I /dev/sdc | grep -i speed
    * Gen1 signaling speed (1.5Gb/s)
    * Gen2 signaling speed (3.0Gb/s)
    * Gen3 signaling speed (6.0Gb/s)


    The output shows that this harddisk is capable of SATA-III from the board port up to the disk.


    The same command for sdb in my system shows this:

    Code
    root@DATENKNECHT:~# hdparm -I /dev/sdb | grep -i speed
    * Gen1 signaling speed (1.5Gb/s)
    * Gen2 signaling speed (3.0Gb/s)


    Aha - a difference: the maximum is only SATA-II, even though the disk is connected to an eSATA port which is SATA-III capable as well. But the disk (a 2.5" notebook disk for the OS) is only SATA-II capable, so this is correct.
    If your system can deliver Gen3 signaling (aka SATA-III) and your harddisks are SATA-III ones, but the output does not show Gen3 signaling, check whether your controller is detected correctly and check your SATA cabling. Old SATA cables are sometimes not SATA-III ready, or they become flaky over the years.
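
    To check all disks in one go you can wrap the command in a small loop, something like this (a sketch; the pattern /dev/sd? matches sda to sdz):

    Code
    # print the supported signaling speeds for every disk
    for d in /dev/sd?; do echo "== $d"; hdparm -I "$d" | grep -i speed; done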


    3. Harddisk details and SATA connections


    Next we check whether the harddisks are detected correctly with egrep 'ata[0-9]\.|SATA link up' /var/log/dmesg.
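
    (If /var/log/dmesg does not exist on your system, you can filter the live kernel buffer instead with dmesg | egrep 'ata[0-9]\.|SATA link up'.) The lines of interest look roughly like this (an illustrative example; disk models, ata numbers and link speeds will differ):

    Code
    ata1.00: ATA-8: <disk model>, <firmware>, max UDMA/133
    ata1.00: limited to UDMA/33 due to 40-wire cable
    ata1: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
    ata3.00: ATA-9: <disk model>, <firmware>, max UDMA/133
    ata3.00: <sectors> sectors, multi 16: LBA48 NCQ (depth 31/32), AA
    ata3: SATA link up 6.0 Gbps (SStatus 133 SControl 300)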


    Oh, that showed a possible cabling problem in my system. ata1.00 is connected to the same kind of SATA ports on the board as ata3.00 to ata6.00, but the output says that it is limited to UDMA/33 due to a bad cable. In fact I use a different cable for this disk because it is mounted in a 5.25" bay at the top of the front, and the brand new yellow SATA-III cables I used for the other data disks are too short, so I used an old cable that was lying around. Hm.
    But ata1.01 (the OS disk connected to an eSATA port) shows the same, and because these two disks are obviously connected to the same port group on the board (both ata1., and no ata2. was found), I came to the conclusion that the board maker used a port multiplier with different capabilities than the other four SATA ports, and that these ports seem to run in PATA mode (a 40-wire cable is an old-style flat IDE cable). Another difference supports that: the two disks (ata1.00 and ata1.01) do not use the AA mode.
    This can be a problem if the speed differs a lot from the other disks, but in my case it doesn't. This disk delivers the same transfer speeds as the other ones.
    It is also important to check whether all the disks are using NCQ (native command queuing); if one does not, all the other disks would slow down to the speed of that one. AA is the SATA auto-activate mode. I haven't encountered any problems with it yet, but I found some postings on the web saying that a missing AA mode can lead to failures in sleep mode (actually I do not use sleep mode).
    That's a first overview of the system.

    Homebox: Bitfenix Prodigy Case, ASUS E45M1-I DELUXE ITX, 8GB RAM, 5x 4TB HGST Raid-5 Data, 1x 320GB 2,5" WD Bootdrive via eSATA from the backside
    Companybox 1: Standard Midi-Tower, Intel S3420 MoBo, Xeon 3450 CPU, 16GB RAM, 5x 2TB Seagate Data, 1x 80GB Samsung Bootdrive - testing for iSCSI to ESXi-Hosts
    Companybox 2: 19" Rackservercase 4HE, Intel S975XBX2 MoBo, C2D@2200MHz, 8GB RAM, HP P212 Raidcontroller, 4x 1TB WD Raid-0 Data, 80GB Samsung Bootdrive, Intel 1000Pro DualPort (Bonded in a VLAN) - Temp-NFS-storage for ESXi-Hosts




    Part II: Block devices and raid check


    1. Block devices


    For checking the block device IDs, use blkid.
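
    Typical lines look roughly like this (an illustrative example with shortened values; your devices, labels and filesystem types will differ):

    Code
    /dev/sda: UUID="..." UUID_SUB="..." LABEL="<host>:0" TYPE="linux_raid_member"
    /dev/sdb1: UUID="..." TYPE="ext4"
    /dev/md0: LABEL="data" UUID="..." TYPE="ext4"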


    The output shows details about the block devices (Harddisks and raids).


    2. Raid status


    To check the status of a raid there are two commands you can use. A short overview is given by cat /proc/mdstat:

    Code
    root@pronas5:~# cat /proc/mdstat
    Personalities : [raid6] [raid5] [raid4]
    md0 : active raid5 sdf[3] sdc[0] sdd[1]
    209649664 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
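
    While a resync or rebuild is running you can follow the progress in the same file, for example like this (a sketch):

    Code
    watch -n 5 cat /proc/mdstat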


    To go into detail, use mdadm --detail /dev/md<number_of_raid> (usually 127 or 0):
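
    For the raid above the detail output looks roughly like this (an illustrative excerpt; creation time, UUID and the device list are left out here and will differ on your system):

    Code
    root@pronas5:~# mdadm --detail /dev/md0
    /dev/md0:
    Version : 1.2
    Raid Level : raid5
    Array Size : 209649664 (199.94 GiB 214.68 GB)
    Raid Devices : 3
    Total Devices : 3
    State : clean
    Active Devices : 3
    Working Devices : 3
    Failed Devices : 0
    Spare Devices : 0
    Layout : left-symmetric
    Chunk Size : 512K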




    Part III: Speed tests


    1a. Check with dd to tmpfs


    Attention: You can alter the values for the block size (bs) and the count, but keep an eye on the free space on your devices. A good idea is to check the free space with df -h before you change these values.


    Now let's do some speed tests, starting with dd like this: dd conv=fdatasync if=/dev/sda of=/tmp/test.img bs=1G count=3:

    Code
    root@DATENKNECHT:~# dd conv=fdatasync if=/dev/sda of=/tmp/test.img bs=1G count=3
    3+0 records in
    3+0 records out
    3221225472 bytes (3.2 GB) copied, 23.865 s, 135 MB/s


    This command "copies" a file of 1G size from sda to /tmp/test.img (You have to remove this file afterwards - take care: Count=3 means that the target file will have a size of 3G, more will probably fill up the /tmp-fs partition).
    In other words, this shows the transfer speed from sda (A raid harddisk) to tmpfs (Virtual memory). conv=fdatasync let dd wait until the whole file transfer is finished, so it gives some kind of raw speed. Count=3 let it run three times so it makes sure that caching does not falsify the result.
    You can test that with single disks or the whole raid.
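
    Don't forget to remove the test file and to check the free space again afterwards, for example:

    Code
    rm /tmp/test.img
    df -h /tmp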


    1b. Check with dd to the boot drive


    Change the target to a directory inside the rootfs (e.g. /home) and run
    dd conv=fdatasync if=/dev/sda of=/home/test.img bs=1G count=3:


    Code
    root@DATENKNECHT:/home# dd conv=fdatasync if=/dev/sda of=/home/test.img bs=1G count=3
    3+0 records in
    3+0 records out
    3221225472 bytes (3.2 GB) copied, 71.7702 s, 44.9 MB/s


    You can see that this is much slower, but it gives you a realistic speed result for a copy between a data drive and the boot drive.
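
    If you want to see the write speed of the boot drive alone, without reading from a data disk at the same time, you can feed dd from /dev/zero instead (a sketch following the same pattern; remove the test file afterwards as well):

    Code
    dd conv=fdatasync if=/dev/zero of=/home/test.img bs=1G count=3
    rm /home/test.img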


    2. Throughput of CPU, cache and memory


    A test for checking the throughput of CPU, cache and memory of the system is hdparm with the parameters -Tt: -T measures cached reads without disk access, -t measures buffered reads from the device. Run hdparm -Tt /dev/md127:

    Code
    root@DATENKNECHT:/tmp# hdparm -Tt /dev/md127
    /dev/md127:
    Timing cached reads: 2224 MB in 2.00 seconds = 1112.99 MB/sec
    Timing buffered disk reads: 1124 MB in 3.00 seconds = 374.60 MB/sec


    Run it several times on a system without activity.
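
    A small loop saves some typing when repeating the measurement, for example (a sketch):

    Code
    for i in 1 2 3; do hdparm -Tt /dev/md127; done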


    3. dd to /dev/null


    The next transfer speed test is a dd test again, this time not creating a file but writing to /dev/null, so it gives a read speed for every single disk without copying to another one: dd if=/dev/sda of=/dev/null bs=1G count=3:

    Code
    root@DATENKNECHT:/tmp# dd if=/dev/sda of=/dev/null bs=1G count=3
    3+0 records in
    3+0 records out
    3221225472 bytes (3.2 GB) copied, 17.7039 s, 182 MB/s


    The conv-parameter from the example above is not allowed when using /dev/null.
    The same using the whole raid:

    Code
    root@DATENKNECHT:/tmp# dd if=/dev/md127 of=/dev/null bs=1G count=3
    3+0 records in
    3+0 records out
    3221225472 bytes (3.2 GB) copied, 8.41633 s, 383 MB/s
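
    If you repeat these read tests on the same device, the Linux page cache can make the following runs look faster than the disks really are. You can drop the caches between runs (run as root; this only throws away cached data, nothing is lost):

    Code
    sync
    echo 3 > /proc/sys/vm/drop_caches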


    Ok, I think you got it now. The values above are IMHO not bad; the Asus board was a good decision. You may encounter other values even if the tests run on similar hardware, but this is nothing to worry about - every single system has its peculiarities, from the PSU over BIOS and HDD firmware versions to cabling and other conditions; even temperature differences can lead to different results.


    I would be happy if this is somehow useful.



    Questions / Problems / Discussions
    System check and harddisk/raid speed tests in a nutshell


