CPU Performance Monitoring
In CPU performance monitoring, we will use several counters.
Generally, CPU performance monitoring is straightforward. Start by monitoring Processor: % Processor Time. If you have more than one processor, monitor each instance of this counter and also monitor System: % Total Processor Time to determine the average for all processors.
Utilization rates consistently above 80-90 per cent may indicate a poorly tuned or designed application. On the other hand, if you have put all the other recommendations of this book into use, they may indicate a need for a more powerful CPU subsystem. In general, I would spend a little bit of time analysing the applications before immediately going out and buying three more processors.
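The "consistently above 80-90 per cent" test can be made concrete with a small sketch. The sampling data and the 90-per-cent-of-samples rule below are illustrative assumptions; on a real server you would collect the values with Performance Monitor.

```python
# Sketch: flag sustained high CPU utilization from sampled
# "% Processor Time" values. The trace below is hypothetical;
# real values would come from Performance Monitor logging.

def sustained_high_cpu(samples, threshold=80.0, min_fraction=0.9):
    """True when at least `min_fraction` of the samples sit at or
    above `threshold` per cent utilization."""
    if not samples:
        return False
    over = sum(1 for s in samples if s >= threshold)
    return over / len(samples) >= min_fraction

# Hypothetical trace sampled once per second for ten seconds:
trace = [85.0, 92.5, 88.0, 95.0, 90.0, 87.5, 91.0, 89.0, 93.0, 86.0]
print(sustained_high_cpu(trace))  # True: consistently above 80 per cent
```

A single spike above 90 per cent is normal; it is the sustained pattern that suggests a poorly tuned application or an undersized CPU subsystem.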
Spending this time experimenting to uncover CPU performance problems and correcting them through software improvements will often save you from spending money on a more powerful CPU that merely masks poorly written software, and usually only for a short while.
If you do see high CPU utilization, you will then want to monitor Processor: % Privileged Time. This is the time spent performing kernel-level operations, such as disk I/O. If this counter is consistently above 80-90 per cent and corresponds to high disk performance counters, you may have a disk bottleneck rather than a CPU bottleneck.
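The decision rule just described can be sketched as a simple classifier. The thresholds and parameter names here are illustrative assumptions drawn from the rule of thumb above, not values from any PerfMon API.

```python
# Sketch: when overall CPU is high, a high "% Privileged Time" that
# coincides with high disk counters points to a disk bottleneck
# rather than a CPU bottleneck. Thresholds are assumptions.

def diagnose(cpu_pct, privileged_pct, disk_pct):
    """Classify a set of sustained counter readings (all in per cent)."""
    if cpu_pct < 80.0:
        return "no CPU pressure"
    if privileged_pct >= 80.0 and disk_pct >= 80.0:
        return "likely disk bottleneck"
    return "likely CPU bottleneck"

print(diagnose(cpu_pct=95.0, privileged_pct=85.0, disk_pct=90.0))
# → likely disk bottleneck
```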
What about SQL Server? Processor: % User Time measures the amount of processor time consumed by non-kernel-level applications. SQL Server is such an application. If this is high and you have multiple processes running on a server, you may want to delve further by looking at specific process instances through the instances of the counter Process: % User Time. This can be very useful for occasions such as when our operating system engineers installed new anti-virus software on all our servers. It temporarily brought them to their knees until we were able to determine the culprit through analysing Process: % User Time for the anti-virus software instance.
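Finding the culprit process, as in the anti-virus story above, amounts to ranking the per-instance readings. The process names and values below are hypothetical samples, as if exported from Performance Monitor.

```python
# Sketch: rank "Process: % User Time" instances to find which process
# is consuming the CPU. Instance names and values are hypothetical.

def top_consumers(samples, n=1):
    """Return the n process instances with the highest % User Time."""
    return sorted(samples.items(), key=lambda kv: kv[1], reverse=True)[:n]

samples = {"sqlservr": 22.0, "antivirus": 68.0, "explorer": 3.0}
print(top_consumers(samples))  # [('antivirus', 68.0)]
```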
Disk Tuning and Performance Monitoring
Begin disk performance monitoring by looking at the following counters:
- PhysicalDisk: % Disk Time
- PhysicalDisk: Current Disk Queue Length
- PhysicalDisk: Avg. Disk Queue Length
Applications and systems that are I/O-bound may keep the disk constantly active. This is called disk thrashing.
You should always know how many channels, what types of arrays, how many disks are in each array, and which array/channel your data and transaction logs are located on before you start thinking about disk performance tuning.
The PhysicalDisk: % Disk Time counter monitors the percentage of time that the disk is busy servicing read and write requests. If this counter is consistently high, check the PhysicalDisk: Current Disk Queue Length counter to see the number of requests that are queued up waiting for disk access.
It is important at this point to be familiar with your disk subsystem. If the number of waiting I/O requests is a sustained value more than 1.5 to 2 times the number of spindles making up the physical disk, you have a disk bottleneck. For example, a RAID 5 configuration with seven spindles/disks would be a candidate for disk performance tuning should the Current Disk Queue Length continually rest above 10-14 (1.5 to 2 times seven spindles).
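The rule of thumb above is just arithmetic, sketched here for clarity. The function name and the seven-spindle example are illustrative, taken from the scenario described in the text.

```python
# Sketch of the rule of thumb: a sustained queue length beyond
# 1.5-2 times the number of spindles signals a disk bottleneck.

def disk_bottleneck(queue_length, spindles, factor=2.0):
    """True when the sustained queue length exceeds factor * spindles."""
    return queue_length > factor * spindles

# Seven-spindle RAID 5 array with a sustained queue length of 13:
print(disk_bottleneck(13, 7, factor=1.5))  # True: 13 > 10.5
print(disk_bottleneck(13, 7, factor=2.0))  # False: 13 <= 14
```

Whether you apply the conservative factor (1.5) or the lenient one (2.0) determines how early you start tuning; a value between the two bounds, as here, is a borderline case worth watching.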
To improve performance in this situation, consider adding faster disk drives, moving some processes to an additional controller-disk subsystem, or adding additional disks to a RAID 5 array.
Most disks have one spindle, although RAID devices usually have more. A hardware RAID 5 device appears as one physical disk in Windows NT Performance Monitor or Windows 2000 System Monitor. RAID devices created through software appear as multiple instances.
WARNING: The % Disk Time counter can indicate a value greater than 100 per cent if you are using a hardware-based RAID configuration. If it does, use the PhysicalDisk: Avg. Disk Queue Length counter to determine the average number of system requests waiting for disk access. Again, this is indicative of a performance problem if a sustained value of 1.5 to 2 times the number of spindles in the array is observed.