TSS recently had a discussion with Randy Arseneau, Ted Feyler, and Kathy Harsanyi of DynaTrace about their flagship product, which helps isolate performance problems in Java applications throughout the entire development cycle. This is a summary of the technology and its value proposition.

The product works through a technology they call PurePath, which decomposes single transactions across multiple logical and physical servers, which may reside on different platforms such as Java and .NET. It does so by leveraging agents that run in each platform and send data to a DynaTrace server. The data sent to the DynaTrace server consists of the relevant information associated with the transaction, such as method parameters and the originating call. As a result, developers can isolate the exact parameters that caused a performance failure.

Here's the sample application methodology used to test the product: a simple application was written, with a servlet front end that called a web service (hosted on the same server instance); the web service in turn called a stateless EJB to perform the actual service, a simple mathematical calculation. At each point, certain parameters were flagged to cause performance degradation. In other words, if the servlet received a second parameter of "2," it would pause for two seconds; if the web service received a parameter of "4," it would likewise pause for two seconds; and so on. The goal was to provide "trip points" that would generate data for testing degradation. Most calls to the application would work well, but some would not (specifically, three values out of the 400 used as a sample set would cause problems).

Under normal circumstances, even finding the performance issue would be a problem. After all, over an hour's run, only a few calls would have an issue; the average response time would barely be affected, and an SLA violation might not even be registered.
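The trip-point scheme described above can be sketched in plain Java. The class and method names here are hypothetical, and the servlet, web-service, and EJB layers are collapsed into ordinary methods so the delay behavior is easy to see; the real test ran in a container, which this sketch does not attempt to reproduce.

```java
// Illustrative trip-point logic (hypothetical names). Each layer checks its
// parameters and injects an artificial two-second pause when a flagged value
// arrives, mirroring the article's degradation "trip points."
public class TripPointDemo {

    // Servlet layer: pauses when the second parameter is 2.
    // Returns the elapsed time in milliseconds for the whole call chain.
    static long servletLayer(int first, int second) throws InterruptedException {
        long start = System.nanoTime();
        if (second == 2) {
            Thread.sleep(2000); // artificial degradation ("trip point")
        }
        webServiceLayer(first, second);
        return (System.nanoTime() - start) / 1_000_000;
    }

    // Web-service layer: pauses when the first parameter is 4.
    static void webServiceLayer(int first, int second) throws InterruptedException {
        if (first == 4) {
            Thread.sleep(2000); // another trip point
        }
        ejbLayer(first, second);
    }

    // EJB layer: the "simple mathematical calculation" from the test app.
    static int ejbLayer(int a, int b) {
        return a + b;
    }

    public static void main(String[] args) throws InterruptedException {
        long fastMs = servletLayer(1, 1); // no trip point: returns quickly
        long slowMs = servletLayer(1, 2); // trip point: roughly 2s pause
        System.out.println("fast=" + fastMs + "ms slow=" + slowMs + "ms");
    }
}
```

Against a sample set where only a handful of values hit a trip point, average response time barely moves, which is exactly why the degradation is hard to spot without per-transaction data.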
Assuming an SLA issue was noted, finding its cause wouldn't necessarily be trivial. In this example, the numbers were simple enough that it would be fairly easy to find the specific values that caused the problem; but suppose the error were something more difficult to locate: the entire set of input data would have to be combed through laboriously.

With the DynaTrace software installed, however, isolation and tracking became very easy. It not only flagged the specific SLA violations, but also showed which component failed, the parameters for each failure, and the entire transaction associated with it. When the EJB "failed," for example, it showed that the web service had been called with parameters 4 and 6, and that the EJB was where the problem occurred. Testing for the specific problem then becomes very easy, almost painfully so.

PurePath works by instrumenting standard interfaces, and it can be configured to watch other things as well, based on what the user requires, down to specific packages, classes, methods, and interfaces. The agent sends data asynchronously to the DynaTrace server, so performance normally isn't affected much, although it's certainly possible to oversample. That said, the ability to customize which methods are profiled is quite important: imagine a situation where JAAS is causing a performance problem, but JAAS isn't a "normal" suspect. Being able to customize the PurePath tracing is critical to discovering that JAAS is at fault.

DynaTrace isn't a magic bullet by any means. It can't replace a coder's familiarity with the architecture being analyzed, nor is it the only tool of its kind, although it takes a fairly unique and useful approach to preserving profiling data. Still, it's difficult to overstate how useful the PurePath technology is. Configuring the system to run on an application server is mostly a matter of installing the agent and configuring the server; the system impact is fairly light.
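The reason asynchronous reporting keeps overhead low can be illustrated with a small sketch. This is a generic agent pattern, not DynaTrace's actual implementation: the instrumented call path only pays for a cheap enqueue, while a background thread does the comparatively slow transmission. All names here are hypothetical.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch of an agent that reports method timings and
// parameters asynchronously, so the instrumented code is barely slowed.
public class AsyncReportingSketch {

    // One captured measurement: method name, argument summary, elapsed time.
    record MethodEvent(String method, String args, long elapsedMicros) {}

    static final BlockingQueue<MethodEvent> queue = new LinkedBlockingQueue<>();

    // Background "sender" daemon, standing in for transmission to the server.
    static void startSender() {
        Thread t = new Thread(() -> {
            try {
                while (true) {
                    MethodEvent e = queue.take();
                    // A real agent would serialize and ship to its collector here.
                    System.out.println("sent: " + e);
                }
            } catch (InterruptedException ignored) { }
        });
        t.setDaemon(true);
        t.start();
    }

    // Instrumented wrapper: times the business logic, records the parameters,
    // then enqueues the event. The enqueue is cheap and non-blocking.
    static int instrumentedAdd(int a, int b) {
        long start = System.nanoTime();
        int result = a + b; // the actual business logic
        long micros = (System.nanoTime() - start) / 1_000;
        queue.offer(new MethodEvent("add", a + "," + b, micros));
        return result;
    }

    public static void main(String[] args) throws InterruptedException {
        startSender();
        System.out.println("result=" + instrumentedAdd(4, 6));
        TimeUnit.MILLISECONDS.sleep(100); // give the sender time to drain
    }
}
```

Because the captured event carries the method parameters as well as the timing, this is also what makes it possible to point at the exact inputs (such as 4 and 6 above) that accompanied a slow transaction.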
All in all, the product is well worth investigating.