Modern applications are composed of a complex series of components, but as modular applications are developed, assembled and deployed, the overall architecture and application flow too often get overlooked. By paying a bit more attention to the details, software developers and application architects can dramatically improve web application performance. All it takes is the time to think about the best way of orchestrating the flow of data through the system.
Consider the implications of the request/response cycle
Many web application performance issues come down to the I/O in, compute, archive, encode and I/O out cycle. Developers typically use coarse-grained parallelism, where the application logic goes through some activity and then responds back. This has interesting implications. In many cases the application model has to deal with concurrency, and it's easy to create applications where multiple threads hit the same back end.
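That shared back-end contention can be sketched in a few lines. This is an illustrative toy, not from the article: the "back end" is just a dict guarded by one lock, which is enough to show how many threads serialize through a single shared resource.

```python
import threading

# Hypothetical shared back end: one dict guarded by a single lock.
# Every worker thread funnels through the same lock, so the work
# serializes even though the application looks parallel.
backend = {}
backend_lock = threading.Lock()

def handle_request(request_id: int) -> None:
    # Each "request" must take the global lock to touch the store.
    with backend_lock:
        backend[request_id] = backend.get(request_id, 0) + 1

threads = [threading.Thread(target=handle_request, args=(i % 4,))
           for i in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sum(backend.values()))  # → 100: every request passed through one lock
```

With more threads, the lock becomes the bottleneck; the parallelism is an illusion at the point of contention.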
"Another issue is that the application will often deal with contention on data stores," said Martin Thompson, founder of Real Logic. A queue helps manage that contention. Modern CPUs have caches that drive performance. The level 1 cache has a latency of about 1 nanosecond (ns), but a miss there raises the cost to about 100 ns. If the data has to be pulled from another server, latency can rise to over 1,000 ns. Caches generally evict the least recently used data first. In many cases, applications end up crafted in a way that fails to make the best use of this hardware architecture.
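The least-recently-used policy mentioned above is easy to model in software. The sketch below is a minimal LRU cache (class name and capacity are illustrative), mirroring the replacement behavior that hardware caches approximate:

```python
from collections import OrderedDict

# Minimal LRU cache sketch: the least recently used entry is evicted
# first, mirroring the replacement policy hardware caches approximate.
class LRUCache:
    def __init__(self, capacity: int) -> None:
        self.capacity = capacity
        self._data: "OrderedDict[str, int]" = OrderedDict()

    def get(self, key):
        if key not in self._data:
            return None                  # a cache miss: caller pays the slow path
        self._data.move_to_end(key)      # mark as most recently used
        return self._data[key]

    def put(self, key, value) -> None:
        if key in self._data:
            self._data.move_to_end(key)
        self._data[key] = value
        if len(self._data) > self.capacity:
            self._data.popitem(last=False)  # evict the least recently used

cache = LRUCache(2)
cache.put("a", 1)
cache.put("b", 2)
cache.get("a")         # touching "a" makes "b" the least recently used
cache.put("c", 3)      # capacity exceeded: "b" is evicted
print(cache.get("b"))  # → None (evicted)
print(cache.get("a"))  # → 1
```

The design lesson carries over: access patterns, not just data sizes, decide what stays hot.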
Leveraging a pool of threads can make it easy to optimize cache performance, and subsequently improve web application performance as well. But a better approach is to assign a thread per stage, where each stage handles one part of the work, which keeps it simple. This is how manufacturing works. Each stage becomes single-threaded, and concurrency is reduced. This approach is also better at dealing with data stores in batches. Queues are used everywhere in modern applications; Thompson recommends making them explicit and then measuring cycle time and service time.
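A thread-per-stage pipeline with explicit, measurable queues can be sketched as below. The stage names, the sentinel shutdown protocol and the toy work functions are all illustrative assumptions, not from the article:

```python
import queue
import threading
import time

service_times = {}  # stage name -> list of per-item service times (seconds)

# Thread-per-stage sketch: each stage owns one thread and pulls work
# from an explicit queue, so each stage is single-threaded and there
# is no shared mutable state between stages.
def stage(name, inbox, outbox, work):
    while True:
        item = inbox.get()
        if item is None:              # sentinel: shut this stage down
            if outbox is not None:
                outbox.put(None)      # propagate shutdown downstream
            break
        start = time.perf_counter()
        result = work(item)
        # Explicit queues make service time per stage measurable.
        service_times.setdefault(name, []).append(time.perf_counter() - start)
        if outbox is not None:
            outbox.put(result)

q1, q2, results = queue.Queue(), queue.Queue(), queue.Queue()
t1 = threading.Thread(target=stage, args=("decode", q1, q2, lambda x: x * 2))
t2 = threading.Thread(target=stage, args=("encode", q2, results, lambda x: x + 1))
t1.start(); t2.start()

for i in range(5):
    q1.put(i)
q1.put(None)
t1.join(); t2.join()

out = []
while True:
    item = results.get()
    if item is None:
        break
    out.append(item)
print(out)  # → [1, 3, 5, 7, 9]
```

Because each queue is explicit, queue depth and per-stage service time can be sampled at any moment, which is exactly the measurement Thompson recommends.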
Clean up coupling and cohesion issues
One affliction affecting modern web application performance is what Thompson calls "feature envy," where code is more interested in another class's data than its own. It's easy for this to lead to complex and inefficient connections in application logic, where data-dependent loads thrash the cache. These designs are also harder to adapt because of the coupling. Thompson said he spends a lot of time looking at coupling and cohesion issues in poorly architected code: "Cleaning up, I often get a 30% performance improvement without targeting performance, and we end up with fields and classes where they need to be."
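A small before-and-after illustrates the "fields where they need to be" idea. The classes, fields and discount rule below are invented for illustration; the point is only where the behavior lives relative to the data it reads:

```python
# "Feature envy" sketch: an Order that reaches into Customer's fields to
# compute a discount (the envious version is shown as a comment), versus
# moving the logic next to the data it depends on.

class Customer:
    def __init__(self, years_active: int) -> None:
        self.years_active = years_active

    # After the cleanup: the behavior lives with the fields it reads.
    def discount(self) -> float:
        return 0.1 if self.years_active >= 5 else 0.0

class Order:
    def __init__(self, customer: Customer, total: float) -> None:
        self.customer = customer
        self.total = total

    # Envious version: `0.1 if self.customer.years_active >= 5 else 0.0`
    # written here, chasing another object's fields on every call.
    def final_total(self) -> float:
        return self.total * (1 - self.customer.discount())

order = Order(Customer(years_active=7), total=100.0)
print(order.final_total())  # → 90.0
```

The refactored form is also the cache-friendlier one: each object works mostly on its own fields instead of hopping through references into another object's memory.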
It's also important to think about coupling in space and time. "Consider that messaging and protocols are coupled in space and time if communications are synchronous," said Thompson. "I think of synchronous communications as the crystal meth of distributed computing. We get in because it is easy and we end up with this monstrosity. If we are building distributed systems around response time, this will come back to bite us."
Bake telemetry in up front
Developers need clarity on the unit of work that allows cycles to stay in sync. This involves thinking about communications, storage, and improved caching. Good monitoring and telemetry are crucial for finding this coupling. It is important to bake this into the application upfront and take advantage of profiling, counters, histograms, event logs, and tracers. "If these are added later it can be a nightmare," said Thompson.
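What "baking telemetry in" might look like in code is sketched below. The class, the counter names and the bucket boundaries are all illustrative assumptions; the point is that counters and histograms are part of the application from day one, not bolted on later:

```python
import bisect
import threading

# Minimal telemetry baked into application code: a thread-safe counter
# and a bucketed latency histogram. Bucket bounds (ms) are illustrative.
class Telemetry:
    BUCKETS = [1, 5, 10, 50, 100, 500]  # upper bounds in milliseconds

    def __init__(self) -> None:
        self._lock = threading.Lock()
        self.counters: dict = {}
        self.histogram = [0] * (len(self.BUCKETS) + 1)  # last slot = overflow

    def incr(self, name: str) -> None:
        with self._lock:
            self.counters[name] = self.counters.get(name, 0) + 1

    def record_latency(self, ms: float) -> None:
        # bisect_left finds the first bucket whose bound is >= ms.
        with self._lock:
            self.histogram[bisect.bisect_left(self.BUCKETS, ms)] += 1

telemetry = Telemetry()

def handle_request(ms_taken: float) -> None:
    telemetry.incr("requests")
    telemetry.record_latency(ms_taken)

for latency in (0.4, 3.0, 7.0, 80.0, 900.0):
    handle_request(latency)

print(telemetry.counters["requests"])  # → 5
print(telemetry.histogram)             # → [1, 1, 1, 0, 1, 0, 1]
```

In a real system the histogram would be exported to a monitoring backend, but even this skeleton shows why retrofitting is painful: every call site has to be touched once the code already exists.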
This is particularly critical with microservices. If developers don't build these capabilities into the code base before moving into serverless and functional computing, the resulting services will be tough to debug without monitors and profilers.
Another good practice is to avoid averages as a metric. They hide problems and make it difficult to see outliers or the distribution of performance characteristics. Many software problems don't follow a normal distribution, so averages mask what is really going on.
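A quick numerical sketch shows how the mean hides a tail. The sample below is invented (95 fast requests and 5 very slow ones), and the percentile helper uses the simple nearest-rank method:

```python
# Why averages mislead: a bimodal latency sample where the mean looks
# acceptable but the tail is terrible. Numbers are illustrative.
latencies_ms = [10] * 95 + [2000] * 5  # 95 fast requests, 5 very slow ones

mean = sum(latencies_ms) / len(latencies_ms)

def percentile(samples, pct):
    # Nearest-rank percentile: small and dependency-free.
    ordered = sorted(samples)
    rank = max(0, int(round(pct / 100 * len(ordered))) - 1)
    return ordered[rank]

print(mean)                          # → 109.5: looks tolerable
print(percentile(latencies_ms, 50))  # → 10: the typical request
print(percentile(latencies_ms, 99))  # → 2000: the tail users actually feel
```

The mean of 109.5 ms describes no request that actually happened; the p50 and p99 together tell the real story.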
The same applies to debugging. "The reason many developers go with ASCII programming rather than binary is that they don't know how to debug binary," said Thompson. That is something of a cop-out. Developers need to think not just about debugging and telemetry; they need to build in the tools that let them debug and optimize binary formats as well.
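Building that tooling can be as simple as pairing every binary encoder with a human-readable decoder. The wire format below (version byte, message ID, payload length) is entirely hypothetical, invented to show the idea:

```python
import struct

# Sketch of building tooling for a binary protocol rather than avoiding
# one: a tiny, hypothetical wire format with a decoder that doubles as
# the debugging aid Thompson argues developers should write.
HEADER = struct.Struct(">BHI")  # 1-byte version, 2-byte msg id, 4-byte length

def encode(version: int, msg_id: int, payload: bytes) -> bytes:
    return HEADER.pack(version, msg_id, len(payload)) + payload

def describe(frame: bytes) -> str:
    # Human-readable dump: the "debugger" for this binary format.
    version, msg_id, length = HEADER.unpack_from(frame)
    payload = frame[HEADER.size:HEADER.size + length]
    return f"v{version} msg={msg_id} len={length} payload={payload.hex()}"

frame = encode(1, 42, b"\xde\xad\xbe\xef")
print(describe(frame))  # → v1 msg=42 len=4 payload=deadbeef
```

With a `describe` function in the toolbox, the binary format is no harder to inspect than a text one, and it stays compact and fast on the wire.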
At the end of the day, software quality and web application performance are best served by simplicity. Thompson said, "Simple code can be reliable, fast and deterministic. We must drive to being simple and fight the complexity. Less really is more. It is difficult to keep so much in your head when there is a massive code base."