Java Development News:

An Architect's Perspective on Application Quality

By Allen Stoker

01 Jul 2006 | TheServerSide.com

Are you trying to achieve a higher level of quality in your organization? What exactly does the term quality mean to you? For some it may mean fewer customer calls; for others it may mean one less drive to the office to restore a corrupted database. Generally speaking, the word quality can be defined as "a measure of excellence." Where you choose to seek that excellence is completely up to you.

While quality is often directly associated with defects, it is typically not something that can be achieved by implementing a few test cases. In most situations it has deep roots in architecture and process. Many would say that these are big-project issues and that excessive rigor is not needed for a simple utility program, but I would challenge that assertion. It is difficult to develop complete and accurate documentation of requirements, and design, and code, and test cases, and build environments, and, and, and... I just want to get this code done! Don't fall into this trap. While you may choose to skip any or all of these steps, they still apply to the smallest of projects. It's easy to justify the lack of a formal design document because you have it all in your head, but you may find the details escape you six months later when you need to make a change.

As you make decisions about which processes you will implement on your project, be sure to base those decisions on a well-defined set of goals. The key to reaching your goals is to define them clearly and communicate them to your team. Many organizations strive to improve quality but never really understand what that means. Since the term quality has both personal and situational relevance, don't bother looking in the dictionary. Here are some examples of quality-focused goals to think about:

  • Reduce the number of defects found in functional testing
  • Reduce the number of help desk calls from your customers
  • Shorten the duration of the QA testing cycle
  • Receive early notification of failures (maybe before they are seen by the users)
  • Handle 2x the user load with consistent response times
  • Keep the environment available 24x7
  • Reduce the number of user actions to complete a process

I selected this brief list because each goal has a different impact on the project team, on the team's processes and even on the application architecture. Each goal may or may not be relevant to your objectives, and you can assign each one a different level of importance. The better you define and document your goals, the easier it will be to make tough decisions later.

Let me address two of the goals at a deeper level and talk about implementation. I will explore additional topics in future articles.

Unit Testing

If you wish to reduce the number of defects found in functional testing, you must first determine the root cause of those defects. Always remember that quality issues require diligent investigation to be sure that you are addressing the true root cause. It is common to blame excessive functional defects on the development team, but ask some questions about the overall development and testing process:

  • Are the developers receiving clear design documents?
  • Are the designers receiving clear requirements?
  • Do the test cases align with the requirements and design?
  • Are the test cases being executed properly?
  • Did the developer run the test cases?

These questions may reveal problems that can typically be addressed by collaboration and communication. Did you include the requirements, design, testing and development resources in a meeting to talk about the unit before diving into code?

Unit testing is often singled out as a silver bullet, but we must be careful to evaluate our goals. Thanks to open source frameworks like JUnit, unit test cases are easy to construct, but they come with a maintenance price. Ask yourself a goal-oriented question like "Do I need to unit test everything?" Surprisingly, the answer may be no. One easy way to help answer that question is by introducing a coverage analysis tool. By targeting your effort based on good analysis, you may find that the introduction of only a few simple tests can dramatically increase the overall stability of the application.
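
To put that in perspective, here is a minimal sketch of what a targeted test might look like in JUnit 3.x. The PremiumCalculator class and its applyStandardRate method are hypothetical, invented purely for illustration; a coverage analysis tool such as EMMA or Cobertura can then tell you which parts of the code tests like this actually exercise.

    import junit.framework.TestCase;

    // A minimal JUnit 3.x test for a hypothetical PremiumCalculator class.
    // Class and method names are illustrative only.
    public class PremiumCalculatorTest extends TestCase {

        public void testStandardRateAppliedToBasePremium() {
            PremiumCalculator calculator = new PremiumCalculator();
            // Assumed behavior: a 1000.00 base premium at a 5% standard rate yields 1050.00
            assertEquals(1050.00, calculator.applyStandardRate(1000.00), 0.001);
        }

        public void testNegativePremiumIsRejected() {
            PremiumCalculator calculator = new PremiumCalculator();
            try {
                calculator.applyStandardRate(-1.00);
                fail("Expected an IllegalArgumentException for a negative premium");
            } catch (IllegalArgumentException expected) {
                // The calculator is expected to reject invalid input.
            }
        }
    }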

Unit tests that attempt to do too much may be complicated and fragile due to application refactoring, dependence on data and other such issues. You may find that mock objects allow you to break free of your database constraints in unit testing, but they may also leave you needing a mock-object expert on staff to support the rest of the team. Was that really your intention?
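
As a rough sketch of that trade-off, consider a hand-rolled mock in place of a full mocking framework. The CustomerDao, MockCustomerDao and GreetingService names below are hypothetical; the point is simply that the class under test depends on an interface, so the test never touches a database.

    import java.util.HashMap;
    import java.util.Map;
    import junit.framework.TestCase;

    // Hypothetical data-access interface that would normally hit the database.
    interface CustomerDao {
        String findCustomerName(int customerId);
    }

    // A hand-rolled mock that serves canned data, so the test needs no database.
    class MockCustomerDao implements CustomerDao {
        private final Map<Integer, String> customers = new HashMap<Integer, String>();

        public void addCustomer(int customerId, String name) {
            customers.put(Integer.valueOf(customerId), name);
        }

        public String findCustomerName(int customerId) {
            return customers.get(Integer.valueOf(customerId));
        }
    }

    // The class under test depends only on the interface, not on the database.
    class GreetingService {
        private final CustomerDao dao;

        GreetingService(CustomerDao dao) {
            this.dao = dao;
        }

        String greet(int customerId) {
            return "Hello, " + dao.findCustomerName(customerId);
        }
    }

    public class GreetingServiceTest extends TestCase {
        public void testGreetingUsesCustomerName() {
            MockCustomerDao dao = new MockCustomerDao();
            dao.addCustomer(42, "Alice");
            GreetingService service = new GreetingService(dao);
            assertEquals("Hello, Alice", service.greet(42));
        }
    }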

Once again, be sure to revisit your goals and ensure that your efforts will help you meet them. Applying the right level of unit testing versus functional testing is an art. Some test cases are a better fit for the functional testing area. By defining the criteria up front you can reduce the occurrence of duplicate test cases and avoid overly complex unit test cases.

Architecture also plays heavily into the unit testing space because well-architected units are typically easier to test. Likewise, difficulty developing a unit test may be your first indication of architectural issues.

Real-Time Monitoring

This is one of my favorite soapboxes. If you have a goal like receiving early notification of failures, it must be architected into the application from day one. Before you start flaming me: someone will read this and say "I can just use log4j and an email appender," and my response is: What categories will you alert on? What if log statements don't exist at the right locations in the application? And so on...
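
For illustration only, here is roughly what that suggestion amounts to using log4j 1.2's SMTPAppender, configured programmatically. The host name and addresses are placeholders, and notice that the sketch answers none of the questions above: it says nothing about which categories should feed the appender or whether ERROR-level statements exist where failures actually occur.

    import org.apache.log4j.Level;
    import org.apache.log4j.Logger;
    import org.apache.log4j.PatternLayout;
    import org.apache.log4j.net.SMTPAppender;

    // Programmatic equivalent of "just use log4j and an email appender."
    // Host and address values are placeholders for this sketch.
    public class AlertingSetup {
        public static void configureEmailAlerts() {
            SMTPAppender mailer = new SMTPAppender();
            mailer.setSMTPHost("smtp.example.com");       // placeholder mail relay
            mailer.setFrom("app-alerts@example.com");     // placeholder sender
            mailer.setTo("oncall@example.com");           // placeholder recipient
            mailer.setSubject("Application failure alert");
            mailer.setThreshold(Level.ERROR);             // ignore anything below ERROR
            mailer.setLayout(new PatternLayout("%d %-5p [%c] %m%n"));
            mailer.activateOptions();                     // required after setting properties

            // Attaching to the root logger alerts on every category; deciding
            // which categories actually deserve an alert is the real work.
            Logger.getRootLogger().addAppender(mailer);
        }
    }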

Regardless of how you choose to monitor your application, the real effort is in the planning.

  • What areas should be monitored?
  • What are the discriminating characteristics (speed, count, etc.)?
  • How will I adjust the alert levels once deployed?
  • And of course... how important is this?

SOA is becoming more prevalent each day, and the opportunity for runtime failures increases dramatically as you begin to rely on external services. Unfortunately, the importance of monitoring will become clear the first time your system goes down and a technician asks "Is it our code, or the service?" I highly recommend that you provide monitoring hooks for any external resource your application depends on. You should also be verbose in your output, including full stack traces and other details that may help you identify the root cause quickly.
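
As one example of such a hook, the following sketch wraps a hypothetical RatingService client so that every call is timed and every failure is logged with its full stack trace to a dedicated monitoring category. The interface, category name and threshold are assumptions made for the example, not a prescription.

    import org.apache.log4j.Logger;

    // A monitoring hook around an external service call. RatingService and the
    // threshold value are hypothetical; the wrapper records timing and the full
    // stack trace so "is it our code, or the service?" can be answered from the logs.
    public class MonitoredRatingService {

        private static final Logger MONITOR = Logger.getLogger("monitor.external.rating");
        private static final long SLOW_CALL_THRESHOLD_MS = 2000;  // assumed service-level target

        private final RatingService delegate;  // the real external service client

        public MonitoredRatingService(RatingService delegate) {
            this.delegate = delegate;
        }

        public double getRate(String policyNumber) {
            long start = System.currentTimeMillis();
            try {
                double rate = delegate.getRate(policyNumber);
                long elapsed = System.currentTimeMillis() - start;
                if (elapsed > SLOW_CALL_THRESHOLD_MS) {
                    MONITOR.warn("Rating service slow: " + elapsed + "ms for policy " + policyNumber);
                }
                return rate;
            } catch (RuntimeException e) {
                long elapsed = System.currentTimeMillis() - start;
                // Be verbose: elapsed time plus the full stack trace points directly
                // at the external dependency when the failure is on the other side.
                MONITOR.error("Rating service failed after " + elapsed + "ms for policy "
                        + policyNumber, e);
                throw e;
            }
        }
    }

    // Hypothetical external service interface, included only so the sketch compiles.
    interface RatingService {
        double getRate(String policyNumber);
    }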

Summary

Despite some of my prior comments, I do believe that there is a universal understanding of quality. People want to use applications that do things right and don't break. The problems arise when we continue to build applications with increasing levels of complexity and we don't effectively plan to manage that complexity. End users equate simple usage and minimal failures with quality. Each of the quality-related goals I noted earlier can be addressed with a variety of implementation approaches, but there will always be a need for a battery of testing to validate the results.

  • Unit testing is an early stopping point for fixing problems before they impact a broader audience, but it typically has little direct impact on the end users of a system.
  • Functional testing is the validation that users will get the expected results. Unfortunately, functional testing is the area most impacted by poor or absent unit testing. Excessive unit-level issues can prevent functional testing from being completed, or require more time or staff to complete it.
  • Performance testing is, in my experience, the most commonly overlooked. There seems to be a perception that if a screen runs in two seconds on a developer's machine, it will be fine when 20,000 users hit it in production.
  • Stability and endurance testing are also commonly overlooked and could be the most important areas to test. If your application can't run for more than a day without crashing, who cares about the response time?
  • Real-time monitoring is a critical aspect of ensuring long-term quality. Most applications are deployed to perform work that cannot be 'simulated,' which means live transactions must be monitored and reported on. There are tools that provide runtime monitoring, but they may require extensive knowledge and setup effort, and may require application hooks that need to be considered from the beginning of the project.

Quality begins in the team - not the application. Proper planning, communication and processes are essential to any successful project. Projects that lack these fundamentals will likely produce problematic applications. I'm a firm believer that large teams with diverse skill sets need a Quality Architect - a highly skilled technical person on your team who has no assignment but to support or 'enable' the other team resources. Such a resource can mean the difference between project success and failure.

About the Author

Allen Stoker is currently a Technologist with Liberty Mutual Insurance in Indianapolis and oversees the architecture of several applications in both the J2EE and Mainframe environments.