Why Java applications fail to scale linearly: A simple explanation
By Cameron McKenzie
Are you looking to scale out your Java application? Well, then I've got some bad news for you: Your Java application probably won't scale. Java applications just don't scale well. Get used to it because it's a simple fact.
Now I know what you're thinking. You're thinking that this simple assertion is wrong; after all, all of those Servlet- and JSP-based applications you wrote ten years ago had no problem moving onto a handful of JVMs spread out over a horizontally and vertically scaled cluster. And indeed, those types of applications scaled up and scaled out just fine. It's the bigger Java applications, the ones scaled across a large number of processors, that run into problems.
The Java scalability problem
You see, the Java scalability problem doesn't rear its ugly head until you start moving into big, multi-processor systems. Java can scale just fine to two or three, or maybe even five or six processors, but once you move beyond the single digits, your scalability won't be linear, and if scalability isn’t linear, it really isn’t scalable at all.
If we have two processors running, we want our programs to run twice as fast as they do on a single processor. An eight-processor system should be twice as fast as a four-processor system, and so on - that's linear scaling. No system is perfectly linear in its scalability, but a linear scale is always the objective. Sadly, Java programs don't even come close to scaling linearly, and the reasons why all boil down to locking.
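The arithmetic behind "linear scaling" can be sketched in a few lines. The timing figures below are hypothetical, invented purely for illustration: speedup is time-on-one-processor divided by time-on-N-processors, and efficiency is speedup divided by N, where 1.0 means perfectly linear scaling.

```java
// Sketch of the speedup and efficiency arithmetic behind linear scaling.
// The timing figures in main() are hypothetical, not real measurements.
public class Speedup {

    // speedup = time on one processor / time on N processors
    static double speedup(double t1, double tN) {
        return t1 / tN;
    }

    // efficiency = speedup / N; 1.0 means perfectly linear scaling
    static double efficiency(double t1, double tN, int n) {
        return speedup(t1, tN) / n;
    }

    public static void main(String[] args) {
        // Ideal case: 8 processors finish an 80-second job in 10 seconds
        System.out.println(efficiency(80.0, 10.0, 8));   // 1.0

        // Typical case: 8 processors only cut the same job to 20 seconds
        System.out.println(efficiency(80.0, 20.0, 8));   // 0.5
    }
}
```

An efficiency of 0.5 means half of the added hardware is effectively wasted, which is exactly the sort of sub-linear curve the article is describing.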
Java and linear scalability
Objects in Java are the basic building blocks of the language. Objects contain data, and sometimes objects point to data contained within other objects. As Java applications grow more complex, and as more and more lists and arrays hold objects whose data indirectly points at other pieces of data, keeping track of what is where and who is pointing at what becomes incredibly confusing.
To maintain any semblance of order, concurrent Java programs use threads that lock pieces of data when a part of the program (let's say Method A) needs access. If another part of the program (Method B) needs access to that same piece of data, Method B is locked out and needs to wait for Method A to finish and unlock the data.
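That Method A / Method B arrangement maps directly onto Java's intrinsic locks. Here is a minimal sketch, with the class and method names invented for illustration: both methods are `synchronized` on the same object, so while one thread is inside `methodA`, any thread calling `methodB` is locked out and must wait.

```java
// Sketch of the Method A / Method B locking described above.
// SharedData, methodA and methodB are invented names for illustration.
public class SharedData {
    private int value = 0;

    // Both methods synchronize on the same object's monitor,
    // so only one thread can be inside either method at a time.
    public synchronized void methodA() {
        value++;            // while this runs, methodB callers are blocked
    }

    public synchronized void methodB() {
        value--;            // waits until the thread in methodA releases the lock
    }

    public synchronized int value() {
        return value;
    }

    public static void main(String[] args) throws InterruptedException {
        SharedData data = new SharedData();

        Thread a = new Thread(() -> { for (int i = 0; i < 100_000; i++) data.methodA(); });
        Thread b = new Thread(() -> { for (int i = 0; i < 50_000; i++) data.methodB(); });

        a.start(); b.start();
        a.join();  b.join();

        // The lock serializes every update, so the result is deterministic.
        System.out.println(data.value());   // 50000
    }
}
```

Remove the `synchronized` keywords and the two threads race on `value`, producing an unpredictable result; the lock is what keeps the data consistent, and it is also what forces the threads to take turns.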
The more processors you throw at a Java program, the more often these data collisions happen, and eventually the whole JVM gets bogged down in the business of managing and manipulating locks. You reach a point where adding another processor is about as productive as adding another mythical man to the development team.
"Java provides threads and locks as the primary parallel programming model during operation. It is very simple, it is very straightforward, but it has an obvious cost model. It turns out that in practice, in large programs, it becomes very difficult to get it right unless you add locks everywhere. The result of this is that your program does not scale because your threads are all competing on locks, often competing when there isn't even an underlying data contention." So said Cliff Click in an earlier discussion on the topic. "JVMs have made huge strides in making locks cheaper and cheaper and cheaper, but ultimately, they cannot make the cost go away."
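Cliff Click's point about threads "competing when there isn't even an underlying data contention" can be sketched like this. The class below is invented for illustration: two counters that never touch each other's data are guarded by one coarse-grained lock, so a thread recording hits still blocks a thread recording misses, even though no shared data is actually at risk.

```java
// Sketch of lock contention without data contention: two logically
// independent counters share one coarse-grained lock, so threads that
// touch entirely different fields still queue up behind each other.
// CoarseCounters is an invented name for illustration.
public class CoarseCounters {
    private long hits = 0;
    private long misses = 0;

    // One monitor guards everything: simple to write,
    // but every caller contends on the same lock.
    public synchronized void recordHit()  { hits++;   }
    public synchronized void recordMiss() { misses++; }

    public synchronized long hits()   { return hits;   }
    public synchronized long misses() { return misses; }

    public static void main(String[] args) throws InterruptedException {
        CoarseCounters c = new CoarseCounters();

        // These two threads never read or write the same field,
        // yet they still serialize on the shared monitor.
        Thread t1 = new Thread(() -> { for (int i = 0; i < 100_000; i++) c.recordHit();  });
        Thread t2 = new Thread(() -> { for (int i = 0; i < 100_000; i++) c.recordMiss(); });

        t1.start(); t2.start();
        t1.join();  t2.join();

        System.out.println(c.hits() + " " + c.misses());   // 100000 100000
    }
}
```

Giving each counter its own lock (or using the atomic classes in `java.util.concurrent.atomic`) would let the two threads proceed independently; the coarse single-lock design is precisely the "locks everywhere" pattern that stops a program from scaling.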
Scaling Java may not be a big problem
Making Java applications scale does not need to be a point of consternation. It won't be necessary to throw away all of those Java applications we've built in the past and replace everything with the .NET Framework. The good news is that very few applications ever push the limits of an eight- or sixteen-processor machine.
Remember, the problem occurs when a large number of processors are fighting over the same few pieces of data. If you've got a number of small, separate, self-contained, individual applications deployed to your multi-core, multi-processor server, you're never going to experience this particular problem with regard to scalability. Any one individual application won't likely put a significant load on more than one, or maybe two, of the processors supporting the system, which means deadlocks and data collisions won't likely be a problem.
So for the vast majority of users, the problem Java applications have with achieving linear scalability is simply a trivial point of interest - something to be brought up at dinner parties or around the water cooler, but not an issue that demands a redesign of your entire enterprise architecture. But what if your Java application does hit the limits of linear scalability? Well, there are solutions to the Java scalability paradox, and these are discussed in the second part of this article.
31 May 2012