To “B”, or NOT to “B”…does it really matter?

For many managers, directors, and C-level folks in software development, ‘Plan A’ is to automate all testing, and ‘Plan B’ is to sustain a mix of automated and manual testing.  With that mindset, anything short of fully automated testing is considered a failure, or a concession.  But is it?

In some cases, manual testing is a regulatory or organizational requirement.  In other cases, it remains in place to test the “look and feel” of an application (something for which automation is ineffective).  Even where it is feasible to fully replace manual testing with automated testing, a more reasonable approach is to accept manual testing as part of the process, and implement changes that reduce the effort involved.

Two things represent the greatest cost of manual testing:  testing to ensure that a new build is truly “ready to be tested” (environment validation, or smoke testing), and identifying or creating test data for each test.  Both are often manual tasks, and both are impediments to test execution.  Addressing those impediments immediately reduces the cost of manual testing, while improving the reliability of builds promoted to a ‘Test’ environment and laying the foundation for future test automation.
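To make the first impediment concrete, here is a minimal sketch of an automated environment-validation check. The page paths and element IDs are hypothetical; the idea is simply that before any functional test runs, each page must respond and must expose every element the automated tests will later rely on.

```python
# Hypothetical map of pages to the element IDs automated tests depend on.
REQUIRED_ELEMENTS = {
    "/login": ["username", "password", "submit"],
    "/orders": ["order-table", "new-order"],
}

def validate_page(path, html, required=REQUIRED_ELEMENTS):
    """Return the list of required element IDs missing from the rendered page."""
    return [elem for elem in required.get(path, [])
            if f'id="{elem}"' not in html]

def environment_ready(pages):
    """pages: dict of path -> fetched HTML.  True only if every page passes."""
    return all(not validate_page(path, html) for path, html in pages.items())
```

In practice the HTML would come from a live fetch of each page or web service endpoint; a failed check blocks the build from being promoted, so testers never waste time on an environment that was never ready.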

Expanding from that point will, over time, replace the most costly aspects of manual testing, re-purposing the role of those who are executing manual tests.  Rather than replacing manual testers, you’ll be facilitating the transition of those resources so that they remain highly valued, and remain integral to maintaining a high velocity of software development.

One path of testing maturity that I often advocate is:

  • Introduce a fully automated ‘Environment Validation’ suite — ensure each web page or web service endpoint is responsive, AND includes all properties that automated tests will be verifying
  • Implement a test data provider — accepts a request for data in a desired state, then returns existing data that is in the requested state OR creates the data if necessary
  • Introduce a fully automated ‘Release Validation’ suite — verify all system and business-critical functionality, answering the question, “Is there any unacceptable risk to revenue or reputation?”
  • Introduce a fully automated ‘Sprint Validation’ suite — verify all acceptance tests for the current sprint
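The second step above, the test data provider, can be sketched very simply. The entity names, states, and in-memory pool here are hypothetical stand-ins; a real provider would query and seed the application’s database, but the contract is the same: ask for data in a desired state, and receive existing data in that state or freshly created data when none exists.

```python
import itertools

_ids = itertools.count(1)
_pool = []   # stand-in for the application database

def _create(entity, state):
    """Create a new record in the requested state and add it to the pool."""
    record = {"id": next(_ids), "entity": entity, "state": state}
    _pool.append(record)
    return record

def get_test_data(entity, state):
    """Return existing data in the requested state, or create it if necessary."""
    for record in _pool:
        if record["entity"] == entity and record["state"] == state:
            return record
    return _create(entity, state)
```

With this in place, a test simply asks for, say, an order in the ‘shipped’ state, and never spends manual effort hunting for or hand-crafting suitable data.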

By that point, it should be possible to identify the gap between the manually executed regression suite and the automated tests.

This approach makes the original question of whether to pursue full test automation irrelevant, putting the focus instead on improving the efficiency and effectiveness of the current testing effort.  At the same time, it lays a foundation for fully automated testing.

Reconciling yourself to this mindset does require reconsidering the purpose of test automation in your organization; but in my experience it results in less anxiety, greater productivity, greater efficiency, and more reasonable expectations of test automation.

Be reconciled!
