Discussions

News: PushToTest will integrate Jaspersoft iReports into the upcoming TestMaker 6

  1. PushToTest will replace the current TestMaker results analysis engine with Jaspersoft iReports in the upcoming TestMaker 6. Developers, testers, and IT ops people will have access to thousands of charts and reports.

    TestMaker 6 is under development now. Beta is expected next week with release candidates in January 2011, and a GA version in early February 2011.

    Details on the new partnership are at:

    http://www.pushtotest.com/latest/pushtotest-and-jaspersoft-to-solve-web-application-performance-bottlenecks

    Threaded Messages (4)

  2. Please - performance (load/stress/...) testing tools don't resolve performance problems. That claim is absolute nonsense.

    If you are lucky you might actually find a decent one (I have not) that does not have its own set of performance problems in generating load, and that lets you create a situation in which, using other performance monitoring/measuring/metering tools, you can pinpoint the underlying causes of the mile-high observations it produces.

    Look, if you hit your head with a hammer I can assure you that a bump (in a chart) will appear soon after. That's an observation. But it does not tell me the underlying causes that bring out that bump, or to some degree how it might differ across test subjects. I can chart bumps all day if I am stupid enough to repeat the same experiment again and again, but that is not going to help me understand the cause, other than sheer stupidity in the first place.

  3. Reading the bumps on my head

    Test tools that generate load against the front end of an application and then report the bumps have their uses. Sometimes the test surfaces a performance bottleneck. I am all for this kind of bump report.

    There are a lot of monitoring tools to watch the backend of an application: Glassbox uses AOP to observe thread deadlocks, memory leaks, and slow Hibernate-based database queries; Spring compiled classes come with instrumentation for memory, threads, and object processing timing; Nagios and Hyperic feature monitoring agents; and commercial tools like dynaTrace watch for performance bumps.
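
    To make the AOP idea concrete, here is a minimal sketch of a timing aspect in the spirit of what Glassbox does (the package name, pointcut, and threshold below are my own invention for illustration, not Glassbox internals):

        import org.aspectj.lang.ProceedingJoinPoint;
        import org.aspectj.lang.annotation.Around;
        import org.aspectj.lang.annotation.Aspect;

        // Illustrative aspect: times every method in a hypothetical DAO package
        // and flags calls that exceed an arbitrary example threshold.
        @Aspect
        public class SlowCallAspect {

            private static final long THRESHOLD_MS = 500; // example threshold, not a tool default

            @Around("execution(* com.example.dao..*(..))") // hypothetical pointcut
            public Object timeCall(ProceedingJoinPoint pjp) throws Throwable {
                long start = System.nanoTime();
                try {
                    return pjp.proceed(); // run the intercepted method
                } finally {
                    long elapsedMs = (System.nanoTime() - start) / 1_000_000;
                    if (elapsedMs > THRESHOLD_MS) {
                        System.err.printf("SLOW: %s took %d ms%n",
                                pjp.getSignature().toShortString(), elapsedMs);
                    }
                }
            }
        }

    Weave that into an application and every slow data-access call shows up in the log with no change to the application code, which is the appeal of the AOP approach.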

    TestMaker's PTTMonitor module is a gateway to Glassbox, Spring, Nagios, dynaTrace, and Hyperic statistics. PTTMonitor logs to an RDBMS and the TestMaker Results Analysis Engine uses Jasper Reports to show correlation between the backend measurements and the load test running on the front end.
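
    PTTMonitor's actual schema is not shown here, but the general idea of timestamped logging, so that backend samples can later be joined against load test results, looks roughly like this sketch (the table name and columns are assumptions for illustration):

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;
        import java.sql.Timestamp;

        // Hypothetical table:
        //   metric_sample(ts TIMESTAMP, source VARCHAR, name VARCHAR, value DOUBLE)
        public class MetricLogger {
            private final Connection conn;

            public MetricLogger(String jdbcUrl) throws Exception {
                conn = DriverManager.getConnection(jdbcUrl);
            }

            // Log one sample; source tags where it came from, e.g. "glassbox",
            // "nagios", or "loadtest", so a report can correlate them by timestamp.
            public void log(String source, String name, double value) throws Exception {
                PreparedStatement ps = conn.prepareStatement(
                        "INSERT INTO metric_sample (ts, source, name, value) VALUES (?, ?, ?, ?)");
                try {
                    ps.setTimestamp(1, new Timestamp(System.currentTimeMillis()));
                    ps.setString(2, source);
                    ps.setString(3, name);
                    ps.setDouble(4, value);
                    ps.executeUpdate();
                } finally {
                    ps.close();
                }
            }
        }

    With everything keyed on a timestamp, a Jasper report can join backend samples against the load test's transaction rates over the same window.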

    No reporting tool is going to be as insightful as a good Java engineer. TestMaker with Jasper Reports is a good aid for surfacing a performance bottleneck and pointing an engineer in the right direction for the root cause of the problem.

    -Frank

  4. point proven

    Nearly all of the monitoring tools you mentioned are completely useless (high overhead, limited coverage, ...), which proves my point: those building performance test tools should first learn how to performance engineer their own software and measure the tools they integrate with.

    http://williamlouth.wordpress.com/2008/12/16/a-healthy-mistrust-of-java-based-stressload-test-tools/

    Performance test tools are good at uncovering low-hanging fruit when the developers have done pretty much nothing in this area, but when it comes to realistic production load simulation they fail because (1) they do not have sufficient and accurate data from production, (2) they cannot simulate and measure the load itself, ...

    We are called in to resolve many challenging performance problems in large and complex production environments (after others have failed miserably), and in nearly all cases the customers already had performance testing solutions in place. So much for root cause analysis. What they did not have was quality (accurate, relevant, extensive) information on execution behavioral patterns and workloads, which seems to me to be a prerequisite for any sort of performance testing.

    Performance testing is but one activity in a software performance engineering process.

    http://www.jinspired.com/solutions/xpe/

  5. Calibration testing

    Load and stress testing tools are like desktop publishing software: it is pretty easy to create a crappy newsletter. Your "Healthy Mistrust" blog highlights one of the possible pitfalls. Your blog includes screenshots of performance results from one of the proprietary test tools. I do not understand the chart - it looks like the legend is missing. The rows of data look like a Geiger counter. Maybe that's your point?

    All test tool users should be able to measure the tool's performance. I call this calibration testing. A calibration test identifies the scale and performance of the test tool itself. I published a how-to article on calibration testing at http://www.pushtotest.com/version-55/calibration-methodology.
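
    A minimal calibration probe can be as simple as driving the tool against an endpoint with a known, fixed delay and comparing what gets reported against that baseline. A rough sketch (the endpoint URL and delay value are assumptions, not part of any product):

        import java.net.HttpURLConnection;
        import java.net.URL;

        // Calibration idea: hit an endpoint that sleeps a known 100 ms per request.
        // Anything the client measures above that baseline is tool/network overhead.
        public class CalibrationProbe {
            public static void main(String[] args) throws Exception {
                String target = "http://localhost:8080/fixed-delay"; // hypothetical endpoint
                long knownDelayMs = 100;  // the delay the endpoint is configured to add
                int samples = 50;
                long totalMs = 0;
                for (int i = 0; i < samples; i++) {
                    long start = System.nanoTime();
                    HttpURLConnection con = (HttpURLConnection) new URL(target).openConnection();
                    con.getResponseCode();        // complete the round trip
                    con.getInputStream().close(); // drain and release the connection
                    totalMs += (System.nanoTime() - start) / 1_000_000;
                }
                long meanMs = totalMs / samples;
                System.out.printf("mean observed: %d ms, known delay: %d ms, overhead: %d ms%n",
                        meanMs, knownDelayMs, meanMs - knownDelayMs);
            }
        }

    If the overhead grows as you add virtual users, the tool itself is becoming the bottleneck, which is exactly what a calibration test is meant to expose.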

    In the performance testing community there is a renaissance of interest in tools, processes, and best practices for surfacing performance bottlenecks and functional issues under load. I hope performance testers learn from the developer community how to share patterns, antipatterns, and test code. Many of us are hosting workshops and meetups (Danny Faught, http://tejasconsulting.com/blog; the Selenium folks, http://www.seleniumconf.com/; myself, http://workshop.pushtotest.com) to make this happen.

    -Frank