Requirements-oriented Test Case Organization — threatening to crush your quality efforts?


I’m reminded of a story I heard many times in the past, from multiple sources.  Apparently, a recently wed woman was preparing a pot roast for her new husband.  The husband watched as she cut both ends off the roast before placing it in the pot, even though the pot had plenty of room to accommodate the original size.  Curious, he asked why she did this.  She replied, “I don’t really know.  That’s how I learned to prepare it from my mom.”  With her own curiosity piqued, the woman called her mother to get the answer.  The mother responded, “That’s how I learned to prepare it from your grandmother.”  Calling her grandmother, she learned that her grandmother had learned it from her great-grandmother.  The next time she visited her great-grandmother, who was still alive but no longer able to cook, she asked the same question.  The elderly matron’s answer was, “I had to cut off both ends of the roasts to fit them in my pot, which was small.”

I believe this is what we have been doing when it comes to the way we organize, structure and design automated, as well as manual, tests.  We do things the way we do them simply because that is how we were taught, rather than recognizing the consequences and asking, “Why do we do it this way?”  In the case of the pot roast, the only loss was a small part of a meal.  In our case, we experience ever-increasing and burdensome costs and delays in software development.

As the number of requirements that have ever been written for an application grows, so does the number of tests, and that number never shrinks.  The impact on manual testing is mitigated by the fact that we usually don’t go back over every test case ever written to keep it up to date.  With automated testing, however, that is exactly what we expect of ourselves.  Trying to maintain all existing automated test cases is incredibly laborious — just the opposite of the re-use that automated testing promises.  We never seem to be able to catch up, regardless of how many different tools, tricks and resources we throw at the problem.  Eventually, when the burden becomes too great, risk creeps in as we negotiate compromises to meet aggressive market demand and competition (e.g. executing only the highest-priority tests, accepting failures in lower-priority tests, or running automation only for new features and trusting manual release testing as a “light regression”).

I believe the cause of this ever-growing burden in testing is that we define and organize our tests to align with the requirements.  These practices from long past accommodate human limitations in verification and tracking:  we can only really focus on one requirement (or a small number of requirements) at a time to verify them reliably, and this method of organization gives us a sort of “checklist” (auditability).  In manual testing, tests are often documented by requirement and organized by release (or iteration).  In subsequent releases, the list of manual tests to be executed is filtered down (e.g. by priority); tests not run regularly fall out of date, as they are no longer referenced.

We’ve carried that same practice into test automation, even though computers are able to focus on any number of requirements simultaneously and report on any number of requirements covered.  The dream of test automation is that testing for all requirements, both past and present, can be executed successfully and frequently.  However, because the test library keeps growing in the same way, it becomes infeasible to keep all of those tests up to date.  As more of the “old” tests fail, compromises are made, like accepting builds that have failing tests or disabling tests that are considered low priority.

In the case of requirements management, as new requirements documents (or user stories) are produced, previous requirements documents are ignored rather than investing time and effort in keeping them precisely updated.  Since we don’t expect all past requirements to remain up to date, maybe the tests for past requirements could be ignored?  (In fact, some companies do just that as they introduce agile approaches like Scrum:  each team focuses almost exclusively on testing of new requirements, leaving other testing to be done during a ‘Release Validation’ phase).  However, this introduces a level of risk that should be considered unacceptable.

Interestingly enough, unit testing doesn’t seem to face this same issue.  Each unit test, developed properly, describes a specific requirement.  If the practice is followed rigorously by a team (as it should be), every active requirement is verified at the component level at any given time.  So if the growing list of requirements isn’t resulting in an unwieldy number of unit tests, maybe there is hope!  Let’s see if we can identify why this is the case, and whether that observation can provide a solution for higher-level test automation (integration testing).

In unit testing, for every requirement that introduces completely new functionality, new tests are developed (as expected).  For every requirement that modifies functionality, existing tests are modified appropriately.  For every requirement that deprecates functionality, tests are modified or retired.  So why is it feasible to maintain unit testing in this way, while automated system-integration and application acceptance testing becomes so burdensome that it begins to collapse under its own weight?  The answer is in how the tests are organized and structured.

Automated system-integration and application acceptance tests are commonly organized according to requirement (or feature) and release.  Unit tests are organized by component.  Rather than adding unit test code specifically for a requirement, we write unit test code specifically for a component.  Unit tests are subject-oriented (i.e. application-oriented), rather than requirements-oriented.
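To make the contrast concrete, here is a minimal sketch (Python, pytest-style) of component-oriented unit tests.  The DiscountCalculator component and its discount rules are invented for illustration; the point is simply that the tests live with the component and are edited in place as requirements change, so the suite only ever reflects active behavior.

```python
# test_discount_calculator.py -- unit tests organized by component, not by requirement.
# A minimal sketch; the DiscountCalculator component and its rules are
# illustrative assumptions, not taken from any real code base.


class DiscountCalculator:
    """Stand-in for the production component under test."""

    def discount_for(self, order_total: float, loyalty_years: int) -> float:
        # Current (hypothetical) rule: 5% off for customers loyal for 3+ years.
        return order_total * 0.05 if loyalty_years >= 3 else 0.0


class TestDiscountCalculator:
    """All active rules for this component live here. A requirement that
    changes the discount logic edits these tests in place; it does not add
    a new, release-stamped test alongside the old one."""

    def test_new_customer_gets_no_discount(self):
        assert DiscountCalculator().discount_for(100.0, loyalty_years=0) == 0.0

    def test_loyal_customer_gets_five_percent(self):
        # If a later requirement raises the discount to 10%, this assertion is
        # updated, so the test count tracks active behavior, not requirement history.
        assert DiscountCalculator().discount_for(100.0, loyalty_years=3) == 5.0
```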

Using a similar approach, automated system-integration and application acceptance tests can be organized according to the views (or pages) of an application, rather than by requirement (or by functional feature that cuts across views, which has the same negative result).  Doing so makes it easier to use a common validator for each view, making it possible to verify any number of requirements on each test execution.  Imagine how much easier it would be to find and update existing tests.  Imagine the number of tests to maintain staying proportional to the number of “active” requirements (the same way unit tests do).  The library of tests would grow and shrink with the complexity of the application, NOT with the total number of requirements ever written for it.
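As a rough illustration of what view-oriented organization might look like, here is a hedged sketch of an acceptance test for a checkout view with a shared validator.  The CheckoutView fields, the checks and their wording are assumptions made up for this example; in a real suite the view snapshot would be populated by whatever UI driver the team already uses.

```python
# checkout_view_tests.py -- acceptance tests organized by view, not by requirement.
# A minimal, framework-agnostic sketch; the CheckoutView fields and the checks
# below are illustrative assumptions, not the article's own design.
from dataclasses import dataclass, field


@dataclass
class LineItem:
    name: str
    price: float


@dataclass
class CheckoutView:
    """Snapshot of the checkout page as a UI driver would report it."""
    line_items: list[LineItem] = field(default_factory=list)
    order_total: float = 0.0
    tax_amount: float = 0.0
    has_loyalty_discount: bool = False
    discount_line_visible: bool = False


class CheckoutViewValidator:
    """One shared validator per view: every test that reaches checkout runs
    the same checks, so each execution verifies every active requirement
    for the view and reports which checks it covered."""

    def validate(self, view: CheckoutView) -> list[str]:
        covered = []

        assert view.order_total == sum(i.price for i in view.line_items)
        covered.append("order total equals sum of line items")

        assert view.tax_amount >= 0
        covered.append("tax is present and never negative")

        if view.has_loyalty_discount:
            assert view.discount_line_visible
            covered.append("loyalty discount shown as its own line")

        return covered


def test_checkout_single_item():
    # The scenario supplies navigation and data; verification is delegated
    # to the view's validator rather than written per requirement.
    view = CheckoutView(line_items=[LineItem("book", 20.0)],
                        order_total=20.0, tax_amount=1.6)
    covered = CheckoutViewValidator().validate(view)
    assert covered  # the coverage list can feed requirement-level reporting
```

The scenario owns navigation and test data; the shared validator owns verification.  Every test that reaches the view re-verifies every active requirement for that view, and the list of checks it returns provides the raw material for requirement-level reporting.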

Reconciling to a subject-oriented approach to automated (and manual) test organization does require fundamental changes in how tests are structured and how results are reported.  However, the rewards are well worth the effort.  In fact, changes in orientation can be introduced in a manner that causes little disruption to the current testing effort as the requirements-oriented approach is replaced over time.
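One of those reporting changes can be sketched as well.  If each view validator returns the checks it performed (as in the example above), a small mapping can translate those checks back into requirement IDs, so a requirements-oriented coverage report can still be produced from subject-oriented tests.  The mapping and the requirement IDs below are, again, purely illustrative assumptions.

```python
# coverage_report.py -- aggregating view-level checks into a requirements report.
# A hedged sketch of one possible reporting approach; the requirement IDs and
# the check-to-requirement mapping are invented for illustration.
from collections import defaultdict

# Each check performed by a view validator is mapped to the requirement(s) it verifies.
CHECK_TO_REQUIREMENTS = {
    "order total equals sum of line items": ["REQ-112"],
    "tax is present and never negative": ["REQ-245"],
    "loyalty discount shown as its own line": ["REQ-301"],
}


def requirements_report(covered_checks_by_test: dict[str, list[str]]) -> dict[str, list[str]]:
    """Translate the checks each test executed back into requirement coverage,
    so requirement-level traceability survives subject-oriented organization."""
    report = defaultdict(list)
    for test_name, checks in covered_checks_by_test.items():
        for check in checks:
            for requirement in CHECK_TO_REQUIREMENTS.get(check, []):
                report[requirement].append(test_name)
    return dict(report)


if __name__ == "__main__":
    runs = {"test_checkout_single_item": ["order total equals sum of line items",
                                          "tax is present and never negative"]}
    print(requirements_report(runs))
    # -> {'REQ-112': ['test_checkout_single_item'], 'REQ-245': ['test_checkout_single_item']}
```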

Be reconciled!
