
News: How to Test Application Throughput: Keep it REAL

  1. We often see the workload on a web application measured by throughput: a way of quantifying the volume of requests and responses over time. Transactions per second (TPS) is the most common ratio used. A performance test plan usually contains specific throughput goals, and the “GO or NO GO” decision for rolling out a new release or architectural change relies heavily on the web application sustaining a certain TPS. Management wants a “Pass” stamp, but it’s your job to make sure the achieved TPS is realistic rather than an illusion built on phony numbers. My advice is to “keep it real” by generating workloads that reflect the true characteristics of production. Faking it leads to a false positive: a target TPS is met and the Pass stamp is awarded, but under unrealistic conditions. You need to design realistic tests to properly evaluate whether the web application can achieve its throughput goals.

    Taking several factors into consideration (user profiles, an accurate number of users, transaction mixes, dynamic data, think times, bandwidth simulation, behaviors, etc.) sets the stage for creating realistic throughput. Check out this post to find out how to "keep it real" by incorporating these factors into your "GO or NO GO" decision; a small illustration follows below.
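    To make the factors above concrete, here is a minimal sketch of a load generator in Java that applies a weighted transaction mix and randomized think times, then reports the achieved TPS. The endpoint URLs, the 70/20/10 mix, the user count, and the 2-8 second think times are all illustrative assumptions; real tools such as LoadRunner or JMeter parameterize these from recorded production traffic.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.concurrent.ThreadLocalRandom;
    import java.util.concurrent.atomic.AtomicLong;

    // Sketch: concurrent virtual users issue requests against a weighted
    // transaction mix with randomized think times, then achieved TPS is
    // computed as completed transactions divided by elapsed seconds.
    public class RealisticLoad {
        static final AtomicLong completed = new AtomicLong();

        public static void main(String[] args) throws InterruptedException {
            HttpClient client = HttpClient.newHttpClient();
            int users = 50;                 // concurrent virtual users (assumed)
            long testMillis = 60_000;       // test duration: one minute
            long start = System.currentTimeMillis();

            Thread[] vusers = new Thread[users];
            for (int i = 0; i < users; i++) {
                vusers[i] = new Thread(() -> {
                    ThreadLocalRandom rnd = ThreadLocalRandom.current();
                    while (System.currentTimeMillis() - start < testMillis) {
                        // Weighted transaction mix: 70% browse, 20% search, 10% checkout.
                        int p = rnd.nextInt(100);
                        String path = p < 70 ? "/browse" : p < 90 ? "/search" : "/checkout";
                        HttpRequest req = HttpRequest.newBuilder(
                                URI.create("http://localhost:8080" + path)).build();
                        try {
                            client.send(req, HttpResponse.BodyHandlers.discarding());
                            completed.incrementAndGet();
                        } catch (Exception e) {
                            // failed requests count against throughput; continue
                        }
                        try {
                            // Randomized think time (2-8 s) so users don't fire in lockstep.
                            Thread.sleep(rnd.nextLong(2_000, 8_000));
                        } catch (InterruptedException ie) {
                            return;
                        }
                    }
                });
                vusers[i].start();
            }
            for (Thread t : vusers) t.join();

            double elapsedSec = (System.currentTimeMillis() - start) / 1000.0;
            System.out.printf("Achieved throughput: %.2f TPS%n", completed.get() / elapsedSec);
        }
    }

    Note how think times dominate the arithmetic: 50 users each pausing ~5 s between transactions can never exceed roughly 10 TPS, which is exactly why removing think times to inflate TPS produces the phony numbers the post warns about.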

  2. I'm not sure I could disagree with this article more; let's ignore the fact that it's a blatant marketing attempt that surely has no place on TSS. Attempting to replicate real-life user scenarios is what we were all trying to do in the early to mid 2000s with the likes of LoadRunner. Given that the output of performance testing is to tell you what needs to be fixed because it is a performance bottleneck or consumes a vast amount of resources, testing in this manner is a complete waste of time and effort. Take a far more holistic view and use one of the many JVM profiling tools to tell you where the problems lie.
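    For readers unfamiliar with the profiling approach this comment advocates, here is a crude stack-sampling sketch built only on the standard java.lang.management API. It would need to run inside the JVM under test (e.g., from a background thread), and the sample count and interval are arbitrary assumptions; dedicated profilers do this far more accurately, so this only illustrates the "find the hot spot" idea.

    import java.lang.management.ManagementFactory;
    import java.lang.management.ThreadInfo;
    import java.lang.management.ThreadMXBean;
    import java.util.HashMap;
    import java.util.Map;

    // Every 50 ms, record the top stack frame of each RUNNABLE thread in
    // this JVM, then print the ten most frequently sampled frames: a rough
    // picture of where CPU time is actually going.
    public class HotSpotSampler {
        public static void main(String[] args) throws InterruptedException {
            ThreadMXBean mx = ManagementFactory.getThreadMXBean();
            Map<String, Integer> hits = new HashMap<>();

            for (int sample = 0; sample < 200; sample++) {   // ~10 s of sampling
                for (ThreadInfo ti : mx.dumpAllThreads(false, false)) {
                    StackTraceElement[] stack = ti.getStackTrace();
                    if (ti.getThreadState() == Thread.State.RUNNABLE && stack.length > 0) {
                        hits.merge(stack[0].toString(), 1, Integer::sum);
                    }
                }
                Thread.sleep(50);
            }

            hits.entrySet().stream()
                .sorted(Map.Entry.<String, Integer>comparingByValue().reversed())
                .limit(10)
                .forEach(e -> System.out.println(e.getValue() + "  " + e.getKey()));
        }
    }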

  3. What's "real"?

    One of the problems here is that you may have no good idea of the application's load patterns until it goes live. Before that, all the numbers are speculation.

    Another is that reproducing those patterns in a different environment can be problematic.

    So in my experience, performance measurements in a test environment prove, at a high level, that nothing is seriously broken. Beyond that, you just need to go into production and get real feedback, ensuring that

    - people, tools and processes are in place to identify and respond to issues quickly and proactively

    - changes can be rolled back
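    For the "real feedback" part, even a trivial in-process counter makes production throughput visible. Here is a minimal sketch of that idea; in practice you would reach for an existing metrics library (Micrometer, Dropwizard Metrics, and so on) rather than rolling your own, and the one-minute window is an arbitrary assumption.

    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.atomic.LongAdder;

    // Count completed requests in production and log throughput once a
    // minute, so drift from the tested TPS shows up quickly.
    public class ThroughputMeter {
        private final LongAdder count = new LongAdder();

        public ThroughputMeter() {
            ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
            scheduler.scheduleAtFixedRate(() -> {
                long n = count.sumThenReset();
                System.out.printf("last minute: %d requests (%.2f TPS)%n", n, n / 60.0);
            }, 1, 1, TimeUnit.MINUTES);
        }

        // Call from the request-handling path, e.g. a servlet filter.
        public void recordRequest() {
            count.increment();
        }
    }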