Is it usual for IO Wait to be about 90%+ of CPU usage?
My system is a twin quad-core 1.86 GHz Xeon server board with 16 GB of RAM. I boot from an old hard drive attached via USB. I also have five 1 TB drives of varying ages and makes attached to SATA, four of which are in a RAID 5 forming the "big disk".
On that disk I have 2 NFS shares and a couple of SMB/CIFS shares.
The NFS shares are for PXE-booting 6 Ubuntu machines. Each machine runs 6 VirtualBox instances of XP (effectively 10 GB files as far as the NFS server is concerned). All of the machines boot from the same shared Ubuntu system over the network.
The 36 XP machines all mount the Samba shares; they read from one share, process the data, and write back to the other share.
The bulk of the data is in the Samba shares, but most of the access time appears to be on the NFS shares, though that could easily be because the VirtualBox disk files live on the NFS shares.
If I reboot 4 or 5 of the XP machines at once, I can easily push the load average to 9+ (with effectively 8 CPUs).
Is this what I should expect because I'm pushing my hardware too hard, or are there improvements I can make to reduce the IO wait, and hence drastically reduce my CPU load?
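For reference, here is how I'm interpreting "90%+ IO wait": a minimal sketch that samples the iowait share directly from `/proc/stat` on Linux (the same counters tools like `top` and `vmstat` read). The one-second interval and output wording are my own choices; `iostat -x 1` from the sysstat package would additionally give a per-device breakdown.

```shell
# Sample the cumulative CPU tick counters twice from /proc/stat and report
# what fraction of CPU time went to iowait in between.
# On the "cpu" line, field 6 is iowait ticks; fields 2..8 are
# user, nice, system, idle, iowait, irq, softirq.
read_iowait() { awk '/^cpu /{print $6, $2+$3+$4+$5+$6+$7+$8}' /proc/stat; }

set -- $(read_iowait); w1=$1; t1=$2
sleep 1
set -- $(read_iowait); w2=$1; t2=$2

awk -v w=$((w2 - w1)) -v t=$((t2 - t1)) \
    'BEGIN { printf "iowait: %.1f%% of CPU time over the last second\n", t ? 100*w/t : 0 }'
```

A consistently high number here, with the load average dominated by processes in uninterruptible sleep (state `D` in `ps`), would confirm the load is IO-bound rather than CPU-bound.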