Overcoming barriers to Java application performance

Developers and architects at JavaOne 2014 discussed how to improve Java application performance.

Enterprise architects and Java developers are often in search of the killer tweaks that will boost Java application performance. But this focus on silver bullets often leads to the wrong result, argued Ben Evans, co-founder of jClarity, a software analysis tools vendor, at the JavaOne conference in San Francisco this week. Getting real, measurable performance gains requires a comprehensive and often boring process, he said.

"If you start getting into performance exercises without a clear goal, you end up in a bad place," Evans said. While Java does have some specific characteristics that make it harder to debug and tune than other languages, the biggest problems in performance arise from what Evans calls the social aspects of the development process.

Measuring the way to success

Evans advocates that organizations adopt a measurement-driven approach to understanding application behavior under load. "If we are not measuring, we are not doing performance improvement," he said. He specifically eschews fiddling with algorithms for micro-improvements, since local tweaks often leave the application as a whole suboptimal.
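
What "measuring" means in practice can be as simple as recording latencies under concurrent load and reporting percentiles instead of guessing. A minimal sketch in plain Java, where doRequest() is a hypothetical stand-in for a real unit of application work:

    import java.util.concurrent.*;
    import java.util.*;

    public class LatencyProbe {
        // Hypothetical stand-in for a real unit of application work.
        static void doRequest() throws InterruptedException {
            Thread.sleep(5);
        }

        public static void main(String[] args) throws Exception {
            int threads = 8, requestsPerThread = 200;
            ExecutorService pool = Executors.newFixedThreadPool(threads);
            List<Long> latencies = Collections.synchronizedList(new ArrayList<>());

            for (int t = 0; t < threads; t++) {
                pool.submit(() -> {
                    for (int i = 0; i < requestsPerThread; i++) {
                        long start = System.nanoTime();
                        try { doRequest(); } catch (InterruptedException e) { return; }
                        latencies.add(System.nanoTime() - start);
                    }
                });
            }
            pool.shutdown();
            pool.awaitTermination(1, TimeUnit.MINUTES);

            // Report a percentile rather than an average, so outliers stay visible.
            Collections.sort(latencies);
            long p99 = latencies.get((int) (latencies.size() * 0.99) - 1);
            System.out.printf("p99 latency: %.1f ms%n", p99 / 1_000_000.0);
        }
    }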

A big part of the performance question lies in knowing the right questions to ask. Evans said developers need to frame questions about performance testing in terms of actual use cases, which may change over time. For example, Evans was a technical architect for a gambling site in the U.K. that worked well in the beginning, when all of the users were coming from the U.K. But when the service opened up to German and Turkish bettors, all hell broke loose. The problem was that these new customers hedged their bets in different patterns than U.K. gamblers, and the settlement process required a different, more complicated processing workflow, which slowed the server to a crawl. Only after the team came up with a new performance test that mirrored this pattern of betting was performance brought back to an acceptable level.

A lot of teams don't do enough performance regression testing, and when they do, they often get it wrong. Evans recommends that teams look at Apache JMeter for these kinds of tests; another option is BlazeMeter, a JMeter-compatible tool for cloud-based testing.
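
JMeter can be run unattended in non-GUI mode (jmeter -n -t plan.jmx -l results.jtl), so a load test can become part of a build. The essence of the regression check, whichever tool drives the load, is comparing a measured number against a recorded baseline. A rough sketch of that comparison in plain Java; the baseline file name, the 10% tolerance and passing the measured p99 in as an argument are illustrative assumptions, not part of JMeter:

    import java.nio.file.*;

    public class RegressionCheck {
        public static void main(String[] args) throws Exception {
            // Baseline p99 (ms) recorded from a previous, accepted run; path is illustrative.
            double baseline = Double.parseDouble(
                    Files.readString(Path.of("baseline-p99-ms.txt")).trim());
            double current = Double.parseDouble(args[0]);  // p99 measured by the load test
            double tolerance = 1.10;                       // allow 10% drift before failing

            if (current > baseline * tolerance) {
                System.err.printf("Regression: p99 %.1f ms exceeds baseline %.1f ms%n",
                        current, baseline);
                System.exit(1);
            }
            System.out.println("Within budget");
        }
    }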

Aligning development and production

Another problem that feeds cognitive biases comes from using a user acceptance testing (UAT) environment that differs from production. Technical managers might complain that a full-sized UAT environment is too expensive, but not investing in this kind of infrastructure can cost an organization far more in software failures.

For example, Evans noted that Java has complex adaptive runtime behavior that modifies itself to run on different classes of hardware. If tests are done on a two-core box, an assumption might be that a 16-core box will be faster, but this is not always the case. Evans has seen cases where boxes with more cores but slower clock speeds actually delivered reduced performance for single-threaded applications.
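
It is easy to see why the assumption fails: a sequential workload uses one core no matter how many the JVM reports, so per-core speed dominates. A minimal sketch, with an arbitrary busy loop standing in for real single-threaded work:

    public class CoreCheck {
        public static void main(String[] args) {
            int cores = Runtime.getRuntime().availableProcessors();
            System.out.println("JVM sees " + cores + " cores");

            // A purely sequential workload: more cores do not help, only faster ones do.
            long start = System.nanoTime();
            long acc = 0;
            for (long i = 0; i < 500_000_000L; i++) {
                acc += i;
            }
            System.out.printf("Single-threaded loop took %d ms (result %d)%n",
                    (System.nanoTime() - start) / 1_000_000, acc);
        }
    }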

Another good practice in developing a production-like test environment lies in making sure that the data used for UAT matches what is expected in production. Evans has seen cases where UAT testing suggested an application would scale when it did not. The discrepancy occurred because the data for 100 customers fit into the cache used to access LDAP, while the data from thousands of users on the production system dragged the application to a crawl.
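
That cache effect is easy to reproduce in miniature. A sketch, assuming a hypothetical bounded lookup cache sitting in front of a slow directory call; the sizes are illustrative:

    import java.util.*;

    public class CacheThrashDemo {
        // Simple LRU cache: once capacity is exceeded, the eldest entry is evicted.
        static <K, V> Map<K, V> lruCache(int capacity) {
            return new LinkedHashMap<K, V>(capacity, 0.75f, true) {
                @Override protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                    return size() > capacity;
                }
            };
        }

        public static void main(String[] args) {
            int capacity = 100;                     // sized for the UAT data set
            Map<Integer, String> cache = lruCache(capacity);

            // 100 UAT users fit in the cache and always hit after warm-up;
            // 10,000 production users keep evicting each other and almost always miss.
            for (int users : new int[] {100, 10_000}) {
                cache.clear();
                int misses = 0;
                for (int pass = 0; pass < 5; pass++) {
                    for (int id = 0; id < users; id++) {
                        if (!cache.containsKey(id)) {
                            misses++;               // would be a slow LDAP round trip
                            cache.put(id, "user-" + id);
                        }
                    }
                }
                System.out.printf("%,d users -> %,d misses%n", users, misses);
            }
        }
    }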

Question tweaks

Evans also advocated against fiddling with the JVM flags used to control Java applications. Many programmers adjust these to optimize a particular component, or in response to bad advice, without testing the effect on the application as a whole. If such a change seems desirable, Evans recommends making a single adjustment, then testing whole-application performance before committing to it.
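
One way to keep such experiments honest is to confirm which value the JVM actually ended up with, since defaults vary by Java version and hardware. A sketch using the HotSpot diagnostic MXBean; UseAdaptiveSizePolicy is just an example flag to query:

    import com.sun.management.HotSpotDiagnosticMXBean;
    import java.lang.management.ManagementFactory;

    public class FlagCheck {
        public static void main(String[] args) {
            HotSpotDiagnosticMXBean bean =
                    ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);

            // Print the effective value of one flag, for example after running with
            // -XX:-UseAdaptiveSizePolicy, to verify the change actually took effect.
            String flag = args.length > 0 ? args[0] : "UseAdaptiveSizePolicy";
            System.out.println(flag + " = " + bean.getVMOption(flag).getValue());
        }
    }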

Developers often pick up bad advice on tweaks from various development sources without understanding the context of these changes, and sometimes that advice self-perpetuates for years. For example, in the early days of Google, one expert recommended turning off adaptive sizing to work around limitations in early JVM implementations. Although later versions of Java addressed the problem, the tip kept rising in search results as curious programmers clicked through to the link, perpetuating the outdated advice.

At the end of the day, a side benefit of performance work can be improved software maintainability and even lower infrastructure costs, Evans said. For example, Twitter moved from Ruby on Rails to Java as part of its performance improvement process; with the better performance, its machine count dropped to roughly one-tenth, and maintainability improved as well.

This was last published in October 2014
