
Automated acceptance testing pitfalls to avoid

How can organizations get the most out of their software testing with the least effort? Here are some best practices for sidestepping the most common automated acceptance testing traps.

There's a common phrase in testing: If you do something more than once, you should automate it. Software testing, where repetitive processes are routine, is therefore a natural fit for automated acceptance testing. In modern software development environments, with the proliferation of microservices, the ubiquity of continuous deployment and an often reactive approach to building systems, software teams want to deliver features as quickly as possible. But if organizations want to do test automation right, they need to avoid these five common mistakes that software development teams make.

Stability

False failures? Always-red build plans? We all know them. Stability is one of the most obvious concerns with automated tests, yet one of the hardest to achieve; even big players like LinkedIn and Spotify admit to having struggled with it. When designing your automation, pay extra attention to stability, as flakiness is the most frequent cause of failures and wasted effort. Writing the tests is just the beginning, so always plan time in the sprint for maintenance and revisions.

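To make this concrete, here is a minimal sketch of one common stabilization technique: synchronizing on application state with an explicit wait instead of a fixed sleep. It assumes a Selenium WebDriver setup in Python; the URL and the "order-status" element ID are hypothetical placeholders.

# A minimal sketch: replace a fixed sleep with an explicit wait.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
driver.get("https://example.test/orders/42")  # hypothetical URL

# Fragile: time.sleep(5) passes or fails depending on environment speed.
# Stable: wait only as long as needed, up to a 10-second ceiling.
status = WebDriverWait(driver, 10).until(
    EC.visibility_of_element_located((By.ID, "order-status"))
)
assert status.text == "SHIPPED"
driver.quit()

The point is not this particular wait, but that the test synchronizes on the application's state rather than on wall-clock time.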

UI perspective

The majority of modern applications are Web-based, so the natural way to drive functional tests is through the UI. Despite their usefulness, browser tests also have substantial drawbacks, such as slow execution and flakiness. Since we're in a world of microservices, it's worth considering dividing tests into layers: exercise your application features directly through Web service integrations, or whatever back end you have, and limit UI tests to a minimal suite of smoke tests for efficiency.
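
As an illustration, here is a sketch of moving a feature check from the browser down to the service layer. It assumes a hypothetical /api/orders endpoint and the Python requests library; the actual endpoints and payloads would come from your own system.

# A minimal sketch: verify the business rule against the back end, not the browser.
import requests

BASE_URL = "https://staging.example.test"  # hypothetical environment

def test_order_can_be_created_via_api():
    response = requests.post(
        f"{BASE_URL}/api/orders",
        json={"sku": "ABC-1", "quantity": 2},
        timeout=5,
    )
    assert response.status_code == 201
    assert response.json()["status"] == "CREATED"

# A separate, minimal UI smoke suite would only confirm that the page loads
# and the happy path is clickable; the rule itself is already covered above.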

Over-mocking

Because systems have many dependencies on one another, mocking services has become a popular pattern. However, organizations should pay close attention to how much of the suite relies on mocks. The problem with mocks is that they can drift from reality or simply become outdated, so your development would be running on false assumptions. There's a saying, "Don't mock what you don't own," which asserts that you should stub only the pieces of the architecture that you implement yourself. That is the proper approach when you test integration with an external system, but what if you want to assume stable dependencies and test only your own implementation? Then, yes, you mock everything except what you own. In short, the mocking and stubbing strategy can differ depending on the purpose of the test.
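
The sketch below illustrates the distinction with a hypothetical CheckoutService and a PaymentGateway wrapper that the team owns around an external payment provider; the names are invented for illustration, and integration with the real provider would still be covered by a separate contract or end-to-end test.

# A minimal sketch of "mock what you own": stub the wrapper, not the provider.
from unittest.mock import Mock

class PaymentGateway:
    """Thin wrapper the team owns; the only place the external API is called."""
    def charge(self, order_id, amount):
        raise NotImplementedError("calls the real provider in production")

class CheckoutService:
    def __init__(self, gateway):
        self.gateway = gateway

    def complete(self, order_id, amount):
        return "PAID" if self.gateway.charge(order_id, amount) else "FAILED"

def test_checkout_marks_order_paid():
    gateway = Mock(spec=PaymentGateway)
    gateway.charge.return_value = True

    assert CheckoutService(gateway).complete("42", 9.99) == "PAID"
    gateway.charge.assert_called_once_with("42", 9.99)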

Tight coupling with framework

This is a tricky one. Developers tend to choose frameworks and tools based on current trends, and the same applies to test frameworks: there are at least a few great options just for the test runner, not to mention the REST client, build tool and so on. When choosing a technology stack, bear in mind the need to stay as independent from the tools as you can -- it's the test automation scenario that matters most, not the framework, so tight coupling between them should be avoided.
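
One way to keep that independence, sketched below under the assumption of a hypothetical OrdersClient adapter, is to hide the tool-specific calls behind a thin layer so the scenarios themselves never import the framework directly.

# A minimal sketch: only the adapter knows about the HTTP library.
import requests

class OrdersClient:
    """Swapping requests for another client only touches this one class."""
    def __init__(self, base_url):
        self.base_url = base_url

    def create_order(self, sku, quantity):
        return requests.post(
            f"{self.base_url}/api/orders",
            json={"sku": sku, "quantity": quantity},
            timeout=5,
        )

def test_order_creation_scenario():
    # The scenario reads in domain terms and survives a framework change.
    client = OrdersClient("https://staging.example.test")  # hypothetical URL
    response = client.create_order("ABC-1", 2)
    assert response.status_code == 201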

Keep it simple

Depending on your team structure, acceptance tests are implemented by developers or testers. Usually developers are better at writing code, while testers take a more functional approach -- though that's not a rule. Automated acceptance testing is not a product in itself, but a tool; therefore, I would put functionality over complexity. Is your test codebase nearly as big as the system under test? Try categorizing tests by domain or type. Does adding new tests require time-consuming analysis of the code structure? Sometimes more verbose but more readable code serves your tests better than complex structures.
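
A lightweight way to get that categorization, assuming pytest with the marker names registered in your configuration, is shown in this sketch; the markers and test names are illustrative only.

# A minimal sketch: tag tests by type (smoke) and domain (orders).
import pytest

@pytest.mark.smoke
@pytest.mark.orders
def test_order_page_loads():
    ...

@pytest.mark.orders
def test_order_total_includes_tax():
    ...

# Run only the minimal smoke suite:    pytest -m smoke
# Run everything in the orders domain: pytest -m orders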

The worst thing that can happen to your automated acceptance tests is that they get abandoned because of relatively simple, yet common, issues. Time saved by automating simple test cases can be spent executing more complex scenarios manually, which leads to better software quality and higher team motivation. If you have other interesting experiences with test automation, let us know.


