Optimizing Java Applications

Use sound practices to make your code, and your development teams, more efficient

Without the right tools and techniques, tuning for performance can be unbearably difficult, yet you can't afford to turn your back on the process. It's absolutely critical. Given the size and complexity of most Java applications, it's increasingly important to get an early handle on performance. Gartner estimates that only 14% of applications meet "all measured and tested response time estimates."

Let's consider a real scenario. A development team works on a complex, proprietary application designed to automate the scheduling and loading of railroad freight trains. After the development cycle, they performance-test the application during a planned tuning cycle, and that's when the trouble begins:

  • They find that they must make sweeping design changes to fix performance problems, changes that mean major architectural upheaval: they've waited too long to test.
  • They waste time on a cache that has no bearing on performance: they've set their priorities incorrectly.
  • The team discovers memory leaks, but manual detection and troubleshooting cost the team weeks: they've neglected to invest in good tools.

When all is said and done, the project is delivered $240,000 over budget and, more importantly, three months late. This story is repeated with increasing regularity across many J2EE development shops. The problems are severe, but they are also frequently preventable. In this paper, we'll talk about some ways to move performance out of an inefficient box at the end of the cycle, and into the fabric of your everyday routine.

What's Wrong?

Before looking at effective ways to tune, let's consider some common tuning practices that don't work. At least one of these potential traps plagues the vast majority of Java developers:

  • Neglecting tools. Without the right set of tools, you don't have a fighting chance. Solving a single tricky memory leak without supporting tools will often cost your employer more than the price of a good performance tool, based on labor and delayed deployment costs. Throw in capabilities like profiling and thread analysis, and the right tools move from the realm of luxury to essential, yet developers and managers alike neglect this area with frightening regularity.

  • Setting priorities incorrectly. Work on the right performance problem at the right time, and you'll be a highly effective tuner. Many developers still try to optimize every line of code as they go, a tremendously inefficient philosophy. Others neglect performance until the end of the cycle, when critical structural changes are all but impossible. We can improve dramatically on both of these approaches.

Many managers fear placing too much emphasis on performance too soon, yet this fear is often misplaced. Proponents of agile development processes have successfully demonstrated the benefit of continuous integration, so much so that it is working its way into mainstream development methods. Programming books like Kent Beck's best-selling eXtreme Programming Explained provide hard evidence supporting continuous integration, arguing that integration costs increase exponentially with delay.

When you think about it, poor performance is often a symptom of an integration problem. Gone are the days when performance optimization meant tweaking a couple of lines of code in a single method. Today, performance relates primarily to how components work together, across frameworks, applications, components and distributed systems. Threading, memory management and distributed communication are the new realms of performance tuning.

Said another way, performance has largely become an integration issue, and should be attacked at the earliest possible moment. If you can attack performance problems before they have a chance to fester, you dramatically reduce the time it takes to solve them; left alone, they grow in scope and complexity. A better performance process, then, should let you detect problems as early as possible. At first, this goal can sound impractical, but many teams already have the right tools in place to get the job done.

How Should You Tune?

The fundamental key to a sound performance process is setting priorities effectively, and automation is critical to doing so consistently. The best way to automate is to expand your regular unit tests to include performance tests. JUnit, an open source automated testing framework, is the most widely accepted tool for unit testing.

If you're already using a tool like JUnit, you're halfway there. In short, your system will automatically measure performance and notify you when a performance test case fails. You can then fix the problem immediately, at the time it's introduced. To inject performance tests into an existing test suite, you need only follow these steps:

  1. Define performance requirements.
  2. Add your automated performance test suites.
  3. Make the fix.

If you're like many developers, you probably invest most of your effort in the third step, even before you've nailed down your specific problem. This process shifts most of that effort into building reusable test cases that repeatedly warn you when performance wanders out of an acceptable range. Let's address each of these steps in more detail.

Step 1 - Define Performance Requirements

When you gather requirements for an application, you'll often be working from a use case, which is a task that a user wants to accomplish with your system. For each use case, you should make sure that you understand any performance requirements. Does your scenario require immediate response to an end-user, or a batched set of reports with no hard performance requirement? Is the user in an internal call center with a customer, or possibly an Internet customer? How many simultaneous users are expected on average and at peak times? By gathering performance criteria for each major use case, you'll have some hard data that you can use to manage the rest of the process.

Remember, no application is perfect: you can always optimize. For efficiency, you should do precisely enough to meet the requirements of your customer. You may well make a conscious decision to over-deliver, anticipating possible future scalability requirements. You can always build more stringent requirements into your test cases.
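To make this concrete, the criteria you gather can be recorded in a form your test cases can read. The sketch below is purely illustrative, with hypothetical use-case names borrowed from the freight-train scenario, and holds per-use-case response-time and concurrency targets:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical registry of per-use-case performance requirements,
// filled in during requirements gathering (Step 1).
public class PerfRequirements {
    private static final Map<String, Long> MAX_RESPONSE_MILLIS = new HashMap<>();
    private static final Map<String, Integer> PEAK_CONCURRENT_USERS = new HashMap<>();

    static {
        // Interactive call-center screen: immediate response required.
        MAX_RESPONSE_MILLIS.put("scheduleTrain", 500L);
        PEAK_CONCURRENT_USERS.put("scheduleTrain", 50);
        // Batched report: no hard response-time requirement, so a loose bound.
        MAX_RESPONSE_MILLIS.put("nightlyLoadingReport", 600_000L);
        PEAK_CONCURRENT_USERS.put("nightlyLoadingReport", 1);
    }

    public static long maxResponseMillis(String useCase) {
        return MAX_RESPONSE_MILLIS.get(useCase);
    }

    public static int peakConcurrentUsers(String useCase) {
        return PEAK_CONCURRENT_USERS.get(useCase);
    }
}
```

Your performance test suite can then assert against these numbers instead of hard-coding thresholds in each test.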

Step 2 - Automate Performance Testing

Many development teams use the agile programming practice of writing test cases before the rest of the code. The benefits are well documented, so we'll not repeat them here. You know you're done when you satisfy the test cases. It's the responsibility of each developer to code and maintain a working, effective test suite.

You should add performance requirements to your test cases. If you use JUnit, you can use a JUnit extension like JUnitPerf (discussed below) to add your performance tests. By adopting automated test suites, you can safely work performance optimization into your everyday routine, without the danger of setting priorities incorrectly. Remember: the purpose of your performance tests is to warn you when your code's performance creeps out of accepted tolerances. When that happens, you can drop what you're doing and drill down with your performance tool to profile and solve your performance problem. With this process, you will greatly reduce your dependence on any special tuning process or team. Poor performance can be handled just like any other bug.
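The decorator idea behind an extension like JUnitPerf can be sketched in a few lines of plain Java: wrap any piece of work and fail the test when it exceeds its response-time budget. The names below are illustrative, not JUnitPerf's actual API.

```java
// Self-contained sketch of a timed performance check.
// A real suite would wrap a unit test with a decorator like this
// and fail the build when the budget is exceeded.
public class TimedCheck {
    // Runs the work and reports whether it finished within maxMillis.
    public static boolean withinBudget(Runnable work, long maxMillis) {
        long start = System.nanoTime();
        work.run();
        long elapsedMillis = (System.nanoTime() - start) / 1_000_000;
        return elapsedMillis <= maxMillis;
    }
}
```

A test case then becomes a one-liner: assert that `withinBudget(scenario, budget)` is true, where the budget comes from the requirements gathered in Step 1.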

Test scalability and response time

We'll hit the bare highlights of performance test construction here. You want your test cases to handle simple measurements of response time and more complex measurements of scalability. Response time is simply the time between the start and end of a request. Scalability measures the number of requests that your application can handle simultaneously. Two kinds of tests measure these:

  • Timed tests check the response time of a single method or use case. You can build your timed tests to run a single test, or to run multiple tests and take an average. You can also build your timed tests to disregard early iterations through a test. Either way, you must make sure that you discard the overhead of setting up the test. (In most cases this overhead is negligible, but it can be measurable.)

  • Load tests check the scalability of the system under the load of many simulated users. You'll probably want to accumulate several timed tests into a test suite. Your tool should let you simulate the user community, whether they are randomly distributed or evenly spaced.
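A load test can be sketched the same way: run the timed scenario from many simulated users at once and count how many stay within budget. The class below is an illustrative sketch, not a production load tool; it starts all users together, which approximates peak rather than evenly spaced load.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of a load test: N simulated users run the same scenario
// concurrently; we count how many complete within the budget.
public class LoadCheck {
    public static int usersWithinBudget(Runnable scenario, int users, long maxMillis)
            throws InterruptedException {
        CountDownLatch startGate = new CountDownLatch(1);
        CountDownLatch done = new CountDownLatch(users);
        AtomicInteger passed = new AtomicInteger();
        for (int i = 0; i < users; i++) {
            new Thread(() -> {
                try {
                    startGate.await();               // all users start together
                    long start = System.nanoTime();
                    scenario.run();
                    long ms = (System.nanoTime() - start) / 1_000_000;
                    if (ms <= maxMillis) passed.incrementAndGet();
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                } finally {
                    done.countDown();
                }
            }).start();
        }
        startGate.countDown();                       // open the gate
        done.await();                                // wait for every user
        return passed.get();
    }
}
```

A scalability assertion then reads naturally: all simulated users at the expected peak must finish within the response-time requirement.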

You may want to run your tests at different intervals. Basic timed tests can run with the rest of your JUnit suites. Since load tests take longer, you may want to run that test suite less frequently. You may also wind up adding some requirements that test resource usage over time, like memory. Some of the better performance test tools allow you to automate this type of testing as well. If yours doesn't, periodic manual tests will do.
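For resource usage like memory, the standard Runtime API gives you enough to take periodic samples; a minimal sketch:

```java
// Minimal heap-usage sample using the standard Runtime API.
// A periodic test could record this value over time and flag steady growth.
public class HeapSample {
    public static long usedHeapBytes() {
        Runtime rt = Runtime.getRuntime();
        return rt.totalMemory() - rt.freeMemory();
    }

    public static void main(String[] args) {
        System.out.println("Used heap: " + usedHeapBytes() / 1024 + " KB");
    }
}
```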

Run your tests throughout the cycle

Rather than running all of your tests at the end of the cycle, this process calls for running your automated tests throughout the cycle. Most teams have their JUnit test cases create a test report after every automated build. This way, you've got an accurate picture of what your system performance looks like at any point in time. Remember to address performance as you go. Your use case is not completed until all of your test cases successfully run.

You will find that adding performance tests to your automated test suites is an incredibly liberating experience. By coding the test cases, you'll be able to run them throughout the development cycle, not just at the end. You'll catch problems as they're introduced, while they're still easy to solve.

Step 3 - Make the Fix

Now that you've identified the areas of your application that need attention, you can finally optimize the scenario. The role of your test suite is to identify what you need to fix, when you need to fix it.

  • Instead of trying to optimize everything, you optimize exactly what needs it. Try to choose simple algorithms that work, and spend time on optimizations only when they don't meet your performance requirements.

  • Instead of waiting until the end of the cycle, you improve performance on an everyday basis, as problems are introduced. You do precisely the right amount of work on performance: nothing more, nothing less.

  • You can make design improvements earlier in the cycle, when user requirements dictate that you do so.

The result is gratifying. Your customers get faster applications, in less time. The seeming paradox is that by adding performance work into your everyday routine, you will become much more efficient overall.

What do you Need to Succeed?

You probably already know that good performance tools are the secret to effective optimization. It's just too expensive to do business any other way. Newport Group currently estimates the average time required to solve a performance problem, measured from the time the problem ticket is opened, at 25.8 hours. With compressed budgets and increasing expectations, that's insanely expensive. To have any hope of success, you've simply got to beat the industry average and find the problems faster, before they leave your protected development environment.

Invest in tools that support automation and optimization

Our process defines two major phases of optimization: identifying the problem, and fixing the problem. Both phases need supporting tools. You'll need specialized tools to help you manage performance, and you'll want tools that help you identify problems both during development and after deployment:

  • You'll need a tool that supports performance unit tests. If you're using JUnit to automate your test cases, you can use a simple extension like JUnitPerf, available from Clarkware's web site at http://www.clarkware.com/software/JUnitPerf, to implement your performance tests.

  • You'll need a tool to profile and debug your performance problems. For example, Borland's Optimizeit Suite supports your optimization after you find a problem. Your tool should be able to quickly profile your code for a scenario, and drill down into problem areas like memory management and thread interaction.

  • You'll need operations tools, like Precise's InDepth and Insight, that allow you to keep measuring an application in production. These tools fill the role of automated unit tests after you've moved into production.

With so much data and experience clearly available about the cost of performance problems and the impact of optimization tools, it's amazing to see how many organizations neglect to properly equip their development teams. As frameworks grow in complexity, that practice will simply have to change, as teams won't be able to remain productive without them.

At its core, tuning is effectively gathering and using information. Let's look at three specific areas where an optimization tool can make your life easier: profiling code, solving memory leaks, and optimizing threads.

Start with a profiler

After your test cases have shown that you have a scenario with a problem, your first step is to profile the scenario. Your tool will add up the cumulative execution time in each method of your code. As you know, a profile is not a flat list of methods: each method usually calls many others. Tools with good profilers let you drill down graphically and find the sections of code that require the most time. For example, if you find that an unhealthy amount of time is spent on string manipulation, you know that you should tweak the algorithms that build or manage strings. If, instead, you find that most of the time is spent on communications-related activities, then you've probably got to reduce the number of network round trips, possibly through a cache or a session facade. In either case, the profiler alone can often provide enough information to help you rapidly localize and solve your problem.
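For instance, when the profiler points at string manipulation, the classic fix is to replace repeated concatenation, which copies the accumulated string on every pass, with a single growable buffer:

```java
public class StringConcat {
    // O(n^2): each += allocates a new string and copies everything so far.
    public static String slowJoin(String[] parts) {
        String result = "";
        for (String p : parts) result += p;
        return result;
    }

    // O(n): appends into one growable buffer, then converts once.
    public static String fastJoin(String[] parts) {
        StringBuilder sb = new StringBuilder();
        for (String p : parts) sb.append(p);
        return sb.toString();
    }
}
```

Both methods produce identical output; the profiler is what tells you whether the difference matters for your scenario.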

Solve memory leaks

Some problems are especially hard to track manually, because they are outside of an application's sphere of control. The garbage collector and heap are usually hidden to the application, but performance tools designed to find memory leaks can watch heaps and trigger garbage collection, giving you enough control to solve the problem.

Java virtual machines automate memory allocation, and collect garbage by tracing reachable objects. An object is reachable if you can access it directly from some thread, or through a reference in another reachable object. When a long-lived object or reference (such as a static variable) keeps an unwanted hold on an object you no longer use, you've got a leak. To solve memory leaks, you've got to be able to see the heap: the pool of all allocated objects. There's no good way to do so without a tool. Even so, finding a leak can be tedious, but some tools, like Borland's Optimizeit Suite, can find many leaks automatically by comparing the current heap to a snapshot of a past heap (see Figure 1).

Figure 1. The better performance tools can find leaks by comparing two heaps.
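The leak pattern described above can be reproduced in a few lines. The class below is a deliberately leaky sketch, not real application code: a long-lived static collection keeps references to objects the application has finished with, so the garbage collector can never reclaim them.

```java
import java.util.ArrayList;
import java.util.List;

// Deliberate memory-leak sketch: the static CACHE outlives every
// request, so each buffer it holds stays reachable forever.
public class LeakyCache {
    private static final List<byte[]> CACHE = new ArrayList<>();

    public static void handleRequest() {
        byte[] buffer = new byte[1024];
        CACHE.add(buffer);   // leak: the buffer is never removed
        // ... use buffer for this request, then forget about it ...
    }

    public static int leakedEntries() {
        return CACHE.size();
    }
}
```

In a heap-comparison tool, this shows up exactly as described: the second snapshot contains every `byte[]` from the first, plus the new ones, all reachable through the static field.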

Untangle threads

Along with memory management, threading issues are what most developers consider the most difficult performance problems to solve. Sometimes your development environment will have a suitable debugger, but more often it takes a specialized tool. The problem is that when you run multiple threads in a traditional debugger, you change the behavior of the application. You need to be able to visualize the threads as they would operate in a production environment, and see where thread blocking and contention hurt you. Look for features that help with these tasks in your thread tool:

  • Resolving thread contention. When performance issues occur in applications with multiple threads, the problem is often resource contention: multiple threads compete for the same resources, leaving many threads blocked. A good thread debugger shows you where too much contention is hurting you, and where some tasks are taking too long and impacting other threads.

  • Solving deadlocks. After an application deadlocks, usually you can't gather any more information. A thread debugger like the one in figure 2 can help you pinpoint exactly which thread is waiting on which resource. Note that the debugger automatically identifies both the threads and resources involved in the deadlock.

Figure 2. This thread debugger identifies the culprits in a deadlock situation.
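Although a thread debugger finds the deadlock for you, the usual cure lives in the code: make every thread acquire shared locks in the same global order, so the circular wait the debugger would flag can never form. A minimal sketch, with illustrative names:

```java
// Deadlock-avoidance sketch: every method acquires the two locks in
// the same order (first, then second). Two threads can therefore
// never hold one lock each while waiting for the other.
public class LockOrdering {
    private final Object first = new Object();
    private final Object second = new Object();
    private int balance = 0;

    public void deposit(int amount) {
        synchronized (first) {          // always lock 'first' ...
            synchronized (second) {     // ... then 'second', in every thread
                balance += amount;
            }
        }
    }

    public int balance() {
        synchronized (first) {          // same order here, too
            synchronized (second) {
                return balance;
            }
        }
    }
}
```

If one method locked `second` before `first`, a debugger like the one in Figure 2 could catch two threads deadlocked, each holding the lock the other needs.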

Summarizing Effective Java Optimization

In the end, you don't wander into good performance by accident: you invest in it. In this paper, we've outlined two critical investments. First, a good process, with automated performance tests where possible, will allow you to monitor and optimize your application performance as early as possible, as problems are introduced. Second, a good set of tools is as essential for the Java developer as for a carpenter or mechanic. These investments will save you time and money. Finding problems earlier in the development cycle will let you fix them faster, when design changes are less expensive. Automating performance testing will let you focus on developing code, knowing that your test suite will catch performance problems as they're introduced, making you more productive. And investments in tools will allow you to attack the most critical performance problems with confidence and efficiency. The result is a faster, more cost-effective development process.

References Consulted

  • eXtreme Programming Explained, by Kent Beck (Addison-Wesley, 2000).

  • Tearing Down the Wall: AD and Operations Together, presented at Symposium ITXPO 2002 by Deb Curtis and Theresa Lanowitz.

  • "Manage Java apps for premium performance," by Peter Bochner, Application Development Trends, January 2003.

The Middleware Company is a unique group of server-side Java experts. We provide the industry's most advanced training, mentoring, and advice in EJB, J2EE, and XML-based Web Services technologies. We can:
  • Build experts through advanced, interactive training
  • Provide on-site mentoring and consulting
  • Guide product or tool selection
  • Jumpstart projects with a package designed to get a corporation up-and-running with server-side Java in a matter of weeks
  • Develop full-scale enterprise applications
  • Develop business and technical whitepapers

For further information about our services, please visit our Web site at: http://www.middleware-company.com/
