DaveWentzel.com All Things Data
Applications tend to be used more heavily over time as the user base expands, and application performance tends to degrade as new features are added. Determining an application's throughput capacity through LoadRunner (LR) testing, and evaluating growth trends over time, helps ensure the infrastructure is adequate for future growth.
So how do you measure throughput capacity and determine which device is the bottleneck limiting throughput?
One way is to capture PerfMon counter values over time for trending. At a minimum, capture CPU utilization, memory utilization, and disk utilization for all servers. Monitoring network load between servers can also be helpful, but it is harder to instrument.
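As a minimal sketch of the trending step, the snippet below summarizes a PerfMon-style CSV export (the kind produced by typeperf or relog) into average and peak values per counter. The sample data and shortened column names are made up for illustration; real exports use full counter paths (e.g. `\\WEB01\Processor(_Total)\% Processor Time`) as headers.

```python
import csv
import io
from statistics import mean

# Hypothetical sample of a PerfMon/typeperf CSV export.  Real exports use
# full counter paths as column headers; short names are used for readability.
SAMPLE = """\
Timestamp,CPU %,Available MB,Disk %
04/01/2012 09:00:15,42.5,2048,12.1
04/01/2012 09:00:30,61.0,1990,18.7
04/01/2012 09:00:45,88.3,1875,25.4
"""

def summarize(csv_text):
    """Return {counter_name: (average, peak)} for each counter column."""
    rows = list(csv.reader(io.StringIO(csv_text)))
    headers, samples = rows[0][1:], rows[1:]   # skip the timestamp column
    summary = {}
    for col, name in enumerate(headers, start=1):
        values = [float(r[col]) for r in samples]
        summary[name] = (mean(values), max(values))
    return summary

for counter, (avg, peak) in summarize(SAMPLE).items():
    print(f"{counter}: avg={avg:.1f}, peak={peak:.1f}")
```

Appending a summary like this to a spreadsheet after each test run gives the growth trend the paragraph above describes.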
For web applications, throughput is measured at the web server. The easiest source is the IIS web logs, using the "Bytes Sent" (sc-bytes), "Bytes Received" (cs-bytes), and "Time Taken" (time-taken) fields. These fields are NOT enabled by default. A log analyzer (there are plenty on the market, and Microsoft has a good free one) can parse out the data to determine throughput during peak and average periods.
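If you prefer to roll your own analysis, the sketch below parses W3C-format IIS log lines and sums sc-bytes and cs-bytes into per-second buckets to find peak and average throughput. The log excerpt, URIs, and byte counts are invented for illustration; the field names come from the log's own `#Fields:` directive.

```python
from collections import defaultdict

# Hypothetical excerpt of an IIS W3C log with sc-bytes ("Bytes Sent"),
# cs-bytes ("Bytes Received"), and time-taken enabled.  Field order is
# declared by the #Fields directive; time-taken is in milliseconds.
LOG = """\
#Fields: date time cs-uri-stem sc-bytes cs-bytes time-taken
2012-04-01 09:00:01 /default.aspx 15320 412 250
2012-04-01 09:00:01 /images/logo.png 48211 380 95
2012-04-01 09:00:02 /api/orders 2301 1250 1820
"""

def throughput_per_second(log_text):
    """Sum sc-bytes + cs-bytes into per-second buckets keyed by time."""
    fields, buckets = None, defaultdict(int)
    for line in log_text.splitlines():
        if line.startswith("#Fields:"):
            fields = line.split()[1:]      # field names after the directive
            continue
        if line.startswith("#") or not line.strip():
            continue                       # skip other directives/blanks
        row = dict(zip(fields, line.split()))
        buckets[row["time"]] += int(row["sc-bytes"]) + int(row["cs-bytes"])
    return dict(buckets)

per_sec = throughput_per_second(LOG)
print("peak bytes/sec:", max(per_sec.values()))
print("avg  bytes/sec:", sum(per_sec.values()) / len(per_sec))
```

The per-second buckets are what let you separate peak load from the average, which is exactly what capacity planning needs.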
LR also measures throughput in bytes/sec; however, it reports average throughput values, which cannot easily be correlated to specific user transactions.
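A quick illustration of why a run-level average is hard to tie back to individual transactions: in the made-up samples below, a heavy transaction dominates two seconds of the run, yet the overall average barely reflects it.

```python
# Illustrative (made-up) per-second throughput samples from a load test.
# A heavy "report" transaction runs during seconds 3-4; the run-level
# average hides it, which is why per-transaction correlation is hard
# when a tool reports only averages.
samples = [10_000, 12_000, 11_000, 95_000, 90_000, 10_500, 11_500, 12_500]

run_avg = sum(samples) / len(samples)
report_window_avg = sum(samples[3:5]) / 2   # the seconds the report ran

print(f"run-level average : {run_avg:,.0f} bytes/sec")
print(f"report window     : {report_window_avg:,.0f} bytes/sec")
```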