Requirements-oriented Test Case Organization — threatening to crush your quality efforts?


I’m reminded of a story I heard many times in the past, from multiple sources.  Apparently, a recently wed woman was preparing a pot roast for her new husband.  The husband watched as she cut two ends off the roast before placing it in the pot, even though the pot had plenty of room to accommodate the original size.  Curious, he asked why she did this.  She replied, “I don’t really know.  That’s how I learned to prepare it from my mom.”  With her curiosity piqued, the woman called her mother to get the answer.  The mother responded, “That’s how I learned to prepare it from your grandmother.”  Calling her grandmother, she learned that her grandmother learned to do this from her great-grandmother.  The next time she visited her great-grandmother, who was still alive but now unable to cook, she asked the same question.  The elderly matron’s answer was, “I had to cut off both ends of the roasts to fit them in my pot, which was small.”

I believe this is what we have been doing when it comes to the way we organize, structure and design automated, as well as manual, tests.  We do things the way we do them simply because that is how we were taught, rather than recognizing the consequences and asking, “Why do we do it this way?”  In the case of the pot roast, the only loss was a small part of a meal.  In our case, we experience ever-increasing and burdensome costs and delays in software development.

As the number of requirements that have ever been written for an application grows, so does the number of tests, exponentially.  The impact on manual testing is mitigated by the fact that we usually don’t go back over all test cases ever written in order to keep them up to date.  However, with automated testing, that is exactly what we expect of ourselves.  Trying to maintain all existing automated test cases is incredibly laborious — just the opposite of what automated testing promises through re-use.  We never seem to be able to get caught up, regardless of how many different tools, tricks and resources we throw at the problem.  Eventually, when the burden is too great, risk gets introduced as we negotiate compromises in order to meet aggressive market demand and competition (e.g. execute highest priority tests only, accept failing tests that are lower priority, execute automation only for new features and trust manual release testing as a “light regression”).

I believe the cause of this ever-growing burden in testing is that we define and organize our tests to align with the requirements.  These practices from long past accommodate human limitations in verification and tracking:  we can only really focus on one requirement (or a small number of requirements) at a time to verify them reliably, and this method of organization is a sort of “checklist” for us (audit-ability).  In manual testing, tests are often documented by requirement and organized by release (or iteration).  In subsequent releases, the list of manual tests to be executed is filtered down (e.g. by priority); tests not run regularly fall out of date (as they are no longer referenced).

We’ve carried that same practice into test automation, even though computers are able to focus on any number of requirements simultaneously and report any number of requirements covered.  The dream of test automation is that testing for all requirements, both past and present, can be executed successfully and frequently.  However, due to the same ever-growing size of the test library, it becomes infeasible to keep them all up to date.  As more of the “old” tests fail, compromises are made, like accepting builds when there are failing tests, or disabling tests that are considered low-priority.

In the case of requirements management, as new requirements documents (or user stories) are produced, previous requirements documents are ignored rather than investing time and effort in keeping them precisely updated.  Since we don’t expect all past requirements to remain up to date, maybe the tests for past requirements could be ignored?  (In fact, some companies do just that as they introduce agile approaches like Scrum:  each team focuses almost exclusively on testing of new requirements, leaving other testing to be done during a ‘Release Validation’ phase).  However, this introduces a level of risk that should be considered unacceptable.

Interestingly enough, unit testing doesn’t seem to face this same issue.  Each unit test, developed properly, describes a specific requirement.  If followed aggressively by a team (as it should be), every active requirement is verified at the component level at any given time.  So if the growing list of requirements isn’t resulting in an unwieldy number of unit tests, maybe there is hope!  Let’s see if we can identify why this is the case, and if our observation can provide a solution for higher level test automation (integration testing).

In unit testing, for every requirement that introduces completely new functionality, new tests are developed (as expected).  For every requirement that modifies functionality, existing tests are modified appropriately.  For every requirement that deprecates functionality, tests are modified or deprecated.  So why is it feasible to maintain unit testing in this way, while automated system-integration and application acceptance testing becomes so burdensome that it begins to fall under its own weight?  The answer is in how the tests are organized and structured.

Automated system-integration and application acceptance tests are commonly organized according to requirement (or feature) and release.  Unit tests are organized by component.  Rather than adding unit test code specifically for a requirement, we write unit test code specifically for a component.  Unit tests are subject-oriented (i.e. application-oriented), rather than requirements-oriented.

Using a similar approach, automated system-integration and application acceptance tests can be organized according to the views (or pages) of an application, rather than requirements (or functional feature across views, which has the same negative result).  Doing so would better facilitate usage of a common validator for each view, making it possible to verify any number of requirements on each test execution.  Imagine how much easier it would be to find and update existing tests.  Imagine having the number of tests to maintain remain proportional to the number of “active” requirements (the same way unit tests do).  The library of tests would change in size, both increasing and decreasing, along with the complexity of the application, NOT based on the total number of requirements ever written for a given application.
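
To make this concrete, here is a minimal sketch in Java of what a view-oriented validator might look like.  The page object, requirement IDs and checks are all hypothetical illustrations, not a prescribed framework; the point is that one common validator per view verifies every active requirement on every execution.

    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical page object: the current state of the Login view.
    interface LoginPage {
        boolean usernameFieldPresent();
        boolean passwordMasked();
        boolean forgotPasswordLinkVisible();
    }

    // One common validator for the Login view.  Each check corresponds to
    // one active requirement; deprecated requirements are removed here.
    class LoginPageValidator {
        void validate(LoginPage actual) {
            List<String> failures = new ArrayList<>();
            if (!actual.usernameFieldPresent())
                failures.add("REQ-101: username field is present");
            if (!actual.passwordMasked())
                failures.add("REQ-214: password input is masked");
            if (!actual.forgotPasswordLinkVisible())
                failures.add("REQ-305: 'forgot password' link is visible");
            // Report every unmet requirement at once, not just the first.
            if (!failures.isEmpty())
                throw new AssertionError(String.join("\n", failures));
        }
    }

Every test that ends on this view calls the same validator, so the number of validators tracks the number of views, and the checks inside track only the active requirements.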

Reconciling to a subject-oriented approach to automated (and manual) test organization does require fundamental changes in how tests are structured and how results are reported.  However, the rewards are well worth the effort.  In fact, changes in orientation can be introduced in a manner that causes little disruption to the current testing effort as the requirements-oriented approach is replaced over time.

Be reconciled!

Obey or Honor? — is there really a difference?

Children, obey your parents in everything, for this pleases the Lord.
Colossians 3:20
English Standard Version
2001 by Crossway Bibles, a division of Good News Publishers
Honor your father and your mother, that your days may be long in the land that the Lord your God is giving you.
Exodus 20:12
English Standard Version
2001 by Crossway Bibles, a division of Good News Publishers

Sometimes, to my embarrassment, even in my 50’s I find myself making decisions contrary to wisdom learned many years ago, and contrary to things I taught my own children.  A few days ago, I was following a well-set routine while packing for a business trip.  Lying conspicuously on the counter in my bathroom (so that I wouldn’t forget to pack it) was a shipping envelope containing medication (for migraines).  A couple of times my mom made the comment, “Don’t forget your medication.”  In my mind I thought something like, “Ya, ya…I’ve got that covered.  I’m over 50 now, I think I can take care of myself.”  You guessed it…the one thing that didn’t get packed that day was the medication, which I could have used on the third day of that business trip.  Regardless of how her way of showing concern made me feel (like the kid from my youth), I would have been better off interrupting my routine long enough to address her concern.

In my youth, I noticed that some verses in the Bible instruct us to “obey” our parents, while others instruct us to “honor” our parents.  I gave thought to the difference between the two and realized that it is possible to obey without honoring, but impossible to truly honor without obeying.  I concluded that honoring someone includes the attitude of the heart and concern for how my decision reflects on another individual.  Having a respectful attitude toward parents and making decisions based on what would reflect well on them includes obedience.  It was in those early years of my life that I determined to honor my parents, rather than simply obey.

Now that was no small thing to commit to, especially in my youth.  Determining to live that way is one thing — it is quite another to follow through, especially at a time when you believe your parents’ instructions are based on out-of-date thinking, and that following their instruction will result in you missing a great opportunity.  Looking back, I can’t recall a single time in my life that I ignored my parents’ instructions and considered the consequences as beneficial.  At the same time, I can recall many times that I ignored my parents’ instruction and experienced some level of loss.  Not once did I make a choice that honored my parents and experience any harm or real loss.

Another thing I realized in my youth was that these instructions from the Bible are not written to small children or youth — they are written to people in general.  Even as an adult in my 50’s, my parents are still my parents.  And I still benefit from honoring them by consulting them and considering how my decision might reflect on them.  I can’t tell you the number of times that consulting with my parents (and parents-in-law) has provided the wisdom I needed to make good life decisions.  When heeded, there was harm avoided and/or reward experienced; when ignored, the result was nearly always negative consequences and some level of regret.

Lastly, I notice that an “escape clause” is missing from the guidance provided by these verses, like “…IF your parents are nice people and have proven themselves wise and trustworthy….”  In my own life experience, and in my observation of countless others, even when we believe a parent has proven themselves to be unreliable, it is wise to at least consider their counsel and how a decision will reflect on them.  In fact, I can’t think of a single case in which a parent’s command, heeded, would bring harm.  We may not like following through; we may experience disappointment and frustration, but I haven’t observed a case yet in which honoring would have brought harm.  And in nearly every case, there is reward, even if that reward is the nurturing of our relationship with our parents, earning their respect by the way we consider and respond to them (true even in our later years).

Reconciling ourselves to the wisdom of honoring our parents, regardless of our age, isn’t easy, but it is rewarding.  I used to tell my children, “To honor is better than to obey.”  It requires us to examine ourselves, to confront the excuses we have for ignoring their guidance, to listen to and consider them more than we speak and try to be understood.  And it requires faith that, regardless of what we think we may miss out on by honoring them in a given situation, there will be an even greater reward over time.  In my own personal life, I am experiencing an unbelievably good relationship with my own parents, that I believe is due in part to the decision I made in my youth:  to honor my parents, rather than simply obey them.

Be reconciled!

To “B”, or NOT to “B”…does it really matter?

For many managers, directors, and C-level folks in software development, ‘Plan A’ is to automate all testing, and ‘Plan B’ is to sustain a mix of automated and manual testing.  With that mindset, anything less than having all testing automated is considered a failure, or a concession.  But is it?…

In some cases, manual testing is a regulatory or organizational requirement.  In other cases, it remains in place to test the “look and feel” of an application (something for which automation is ineffective).  Even where it is feasible to fully replace manual testing with automated testing, a more reasonable approach is to accept manual testing as part of the process, and implement changes that reduce the effort involved.

Two things represent the greatest cost of manual testing:  testing to ensure that a new build is truly “ready to be tested” (environment validation or smoke testing), and identifying or creating test data for each test.  These are often manual tasks, and are impediments to test execution.  By addressing those impediments, the cost of manual testing will be immediately reduced, while improving the reliability of builds promoted to a ‘Test’ environment and laying the foundation for future test automation.

Expanding from that point will, over time, replace the most costly aspects of manual testing, re-purposing the role of those who are executing manual tests.  Rather than replacing manual testers, you’ll be facilitating the transition of those resources so that they remain highly valued, and remain integral to maintaining a high velocity of software development.

One path of testing maturity that I often advocate is:

  • Introduce a fully automated ‘Environment Validation’ suite — ensure each web page or web service endpoint is responsive, AND includes all properties that automated tests will be verifying
  • Implement a test data provider — accepts a request for data in a desired state, then returns existing data that is in the requested state OR creates the data if necessary (a sketch follows this list)
  • Introduce a fully automated ‘Release Validation’ suite — verify all system and business-critical functionality, answering the question, “Is there any unacceptable risk to revenue or reputation?”
  • Introduce a fully automated ‘Sprint Validation’ suite — verify all acceptance tests for the current sprint
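
As promised in the list above, here is a minimal Java sketch of the test data provider idea.  The repository interface, Account type and state map are illustrative assumptions, not a prescribed API; the contract is simply “return existing data in the requested state, or create it if necessary.”

    import java.util.Map;
    import java.util.Optional;

    // Hypothetical data-access layer the provider delegates to.
    interface AccountRepository {
        Optional<Account> findFirstMatching(Map<String, String> desiredState);
        Account create(Map<String, String> desiredState);
    }

    record Account(String id, Map<String, String> attributes) {}

    class TestDataProvider {
        private final AccountRepository repository;

        TestDataProvider(AccountRepository repository) {
            this.repository = repository;
        }

        // e.g. request(Map.of("status", "SUSPENDED", "balance", "0.00"))
        Account request(Map<String, String> desiredState) {
            // Reuse existing data when possible; create it only when necessary.
            return repository.findFirstMatching(desiredState)
                             .orElseGet(() -> repository.create(desiredState));
        }
    }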

By that point, it should be possible to identify the gap between the manually executed regression suite and the automated tests.

This approach makes the original question of whether or not to pursue full test automation irrelevant, putting the focus instead on improving the efficiency and effectiveness of the current testing effort.  At the same time, a foundation for fully automated testing will be laid.

Reconciling to this mindset does require reconsidering the purpose of test automation in your organization; but from my experience it results in less anxiety, greater productivity, greater efficiency, and more reasonable expectations of test automation.

Be reconciled!

Software Testing for Dummies – exposed and ridiculed


Hey!  Software engineers out there!  Have you ever considered honestly the idiocy of how we approach testing of fully integrated software systems?  If we were to apply our approach to any other mature industry, we would be laughed out of existence.

Can you imagine a car manufacturer, in crash testing their vehicle, using several crash dummies, each with sensors for one particular body part? They set up the dummy with head sensors, crash the car head-on, then read the results to ensure compliance with regulatory standards. They then set up the dummy with torso sensors, crash the car head-on again, then read the results. Next, they set up the dummy with head sensors again and crash the car in a side impact.

How long do you think that car manufacturer could remain in business?  The cost in human and material resources would be astronomical.  The delay in shipment would keep them out of the market, far outpaced by their competition.  The approach of using one sensor per test may be best practice for testing a component (e.g. an airbag) pre-assembly, but is asinine when testing a post-assembly product (e.g. a vehicle). Yet this is commonly the approach used in testing fully integrated software systems, each test with a single verification (or sensor, if you will).  Is it any wonder that the cost of testing fully integrated systems is so high, and why there are so many delays in delivering finished product?

Now imagine applying to software development the approach used in mature industries.  Given data in a unique scenario, perform a set of actions to reach a destination, with sensors (assertions) in place to confirm any number of result states, each reporting its result regardless of how many others indicate failure.  A multitude of individual results can now be measured against any number of requirements, in an instant.  No sensor is dependent on the success of any other sensor.  This would be similar to setting up a test dummy with head, torso, arm and leg sensors, submitting the vehicle to a head-on collision, then verifying the results of each and every sensor.  The results of each sensor are considered individually. Use the same “fully sensorized” dummy in every crash scenario, and the number of unique test executions required is greatly reduced.  Doing the same for each software scenario has the same result: a smaller, more manageable test library, and a greatly reduced number of unique test executions.
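
In software terms, one way to express the “fully sensorized” idea in Java is with soft assertions, for example AssertJ’s SoftAssertions (one tooling choice among many; the page object and expected values below are hypothetical).  Every sensor is read, and every failure is reported, no matter how many others fail.

    import java.math.BigDecimal;
    import org.assertj.core.api.SoftAssertions;

    // Hypothetical page object: the state of the checkout view after one
    // "crash" (a single execution path through the application).
    interface CheckoutPage {
        BigDecimal orderTotal();
        boolean taxLineVisible();
        String confirmationMessage();
    }

    class CheckoutPageSensors {
        void validate(CheckoutPage actual) {
            SoftAssertions softly = new SoftAssertions();
            // Each assertion is an independent sensor; no sensor's result
            // depends on the success of any other sensor.
            softly.assertThat(actual.orderTotal()).isEqualByComparingTo("59.98");
            softly.assertThat(actual.taxLineVisible()).isTrue();
            softly.assertThat(actual.confirmationMessage()).contains("Thank you");
            // Reads out every sensor, reporting all failures together.
            softly.assertAll();
        }
    }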

By reconciling our approach to validation of post-assembly software with that used by other mature industries, I believe we can maintain comprehensive, resilient, effective and efficient testing of fully integrated software systems.  The result promises to be higher-quality software delivered in a sustainable release cycle and greatly reduced cost.

Be reconciled…

Mentors – agents of reconciliation

As a fan of the Seattle Mariners baseball team, I anticipated a strong 2015 season.  They already had a very strong pitching rotation, and acquired outstanding batter Nelson Cruz.  However, they struggled quite a bit; the most notable disappointment was the performance of Robinson Cano, a strong leader during the 2014 season.

Just before the All-Star Break, the Mariners hired Mariners Hall of Fame member Edgar Martinez as the batting coach.  During an interview, Robinson Cano commented on the new hire with great anticipation.  He obviously considered Edgar as more than a hitting coach — as a mentor.

I agree with the statement, “you are where you are today, because you chose to be here.”  Mentors are called upon when a person realizes that their current situation is the result of their own understanding and choices, and that it falls short of their desired situation.  They may lack the knowledge, experience or motivation required to implement the necessary change.  Mentors help align (or reconcile) our lives with the results we hope to experience.

In my opinion, everyone benefits from at least one mentor in their life, someone they trust to challenge their current ways of thinking, communicating and behaving.  Most of us desire to grow in at least one area of our lives, and that usually involves thoughts and activities we have not implemented before.  Mentors lend a hand in getting us to the next step of growth.

I recently heard of a ministry called Love Inc of Pierce County (Washington State), which has a mentoring program to help folks secure lasting employment.  Realizing this means more than just getting them hired, the ministry emphasizes guidance in how people think, communicate and behave.

Helping a person experience new life involves reconciling what they believe about their identity, how they talk to and about others, the choices they make, the habits they have and how they behave.  Persevering with them through these changes, empathizing while prodding, is an investment worth the effort.

Regardless of what your life is like today, choosing a mentor will certainly help ensure your success; being a mentor will help ensure the success of another.  Having a mentor promotes growth in your own life; being a mentor promotes growth in another’s life.  Be mentored, and be a mentor.

Be reconciled!

Goals and Commitments – the mirror and measurement of personal reconciliation

As I walk through the house, I will eventually pass by a mirror, and in my peripheral vision catch a glimpse of myself.  Unless something is very out of the ordinary, I pay little attention.  In fact, even when standing in front of the bathroom mirror I have only passing interest in what is reflected.  Periodically, however, I will scrutinize in detail the image before me, taking careful note of whether it aligns with my expectations and desires, and making adjustments when possible (in my case, that leaves few).  Tools used may be a beard trimmer or skin cream.  In doing so, to some degree of success, I keep my outward image aligned (or reconciled if you will) with my expectations.

As events occur throughout my day, I find myself periodically taking brief note of how my words and behavior align with my expectations of myself.  Even when involved in an activity intended to elicit self-evaluation (e.g. reading an article on “how to be a good man”, or “best software development practices”) I have little more than passing interest in how I truly align with what is described.  Periodically, I will scrutinize in detail my life, taking careful note of whether it aligns with my expectations and desires.  With life, however, making adjustments is even more challenging and time-consuming than with my physical image, involving changes in the way I think.  The tried and true tools used for this are goals and commitments.

Some may consider those words synonymous, but for me they are distinct, and both necessary for success.  One without the other will bear little fruit.  Either done poorly diminishes the effectiveness of the other.  Both done well always produce good-quality fruit in the life of those who practice them.

The best goals come from detailed examination of an aspect of life with which we find ourselves dissatisfied.  That examination involves noting where we are today, where we would like to be, and one way in which we can get there — like planning a trip to Disneyland!  The best goals fulfill the well-documented “SMART” guidelines.

Commitments are statements of intention worded as if fulfillment were a certainty.  “I will pick you up at 7 pm tonight” and “my bedroom will be picked up by 9 am” are commitments.  However, for life change (and I credit Dee Duke, Pastor of Jefferson Baptist Church in Jefferson, Oregon for this enlightenment), effective commitments are written, reviewed and reported (three R’s, if you will).  They involve accountability of fulfillment (can you say “reconciliation”?).

Recently I was urged by an inspiring young friend of mine (Valentin Calomme, of Brussels, Belgium) to take stock of where I am today in comparison with my Values, Mission and Key Areas of life.  It was long overdue.  The result was re-dedication to my written goals, review of these goals each day, and reporting of success to my friend on a weekly basis.  Already I find myself better focused on the things that matter most to me, and activities which align better with who I desire to be.

I encourage you to do the same: take a retreat (whether a trip to a secluded destination or an hour in a closed room of your house); identify in detail the kind of person you want to be (“effective at work”, “understanding as parent”, “lighter in body weight”); set SMART goals; share your goals with a trusted friend; and commit to fulfilling them.  Even if you fall short of your commitment, I guarantee that you will move closer to that person you desire to be.

Be reconciled!

Reconciliation

My first checking account (at age 14) came with a check register.  Before the age of electronically tracked debit-card transactions, the owner of a checking account used this device to keep track of the funds they had available to spend.  It was fun at first, making each entry and bringing the balance forward. The “fun” soon wore off, as did consistent practice.

My first bank statement came with detailed instructions on how to reconcile my account.  Put simply: compare what you believe to be true (my register) with what the bank believed (the statement).  This reconciliation gave me insight into the true state of my checking account.  As long as my belief and the bank’s belief could be reconciled, things were fine; if this wasn’t possible, trouble would certainly result.  Reconciliation was essential to maintaining a healthy relationship with the bank.

I now believe that reconciliation is essential to success in all aspects of life.  Let me repeat that: reconciliation is essential to success in ALL aspects of life.  Reconciliation exposes one person’s belief to another, giving each of them opportunity to gain understanding.  When we are reproved by another person, reconciliation corrects unhealthy behaviors and heals relationships.  After hearing of a new idea or approach, reconciliation produces new results which can be evaluated.

Reconciliation does not necessarily mean full acceptance of other beliefs, but does result in understanding of discrepancies.  When my recollection of an event differs from another’s, reconciliation is understanding and accepting the discrepancies.  It includes understanding of how those discrepancies affect communication, behavior and our relationship.

Once, while riding my motorcycle to work, as I was taking a right-hand turn into a parking lot, a taxi cab came from behind and struck me while attempting to pass me on the right.  The cab stopped, and the driver immediately apologized but stated that it wasn’t his fault.  I was able to quickly reconcile our beliefs about what happened.  Noting the discrepancy and how it would impact our relationship, I politely chose not to engage in discussion and instead waited for the police to arrive.

I often find my belief about life, specifically my life, exposed to what other people believe to be true.  These are opportunities for reconciliation, or if you like, self-evaluation.  At the least this practice allows me to accept the differences between us and work within the limitations.  Even better is when it results in unity — adjusting our beliefs to be one and the same.  I find that the quality of each aspect of my life is directly related to the practice of reconciliation.  You may find this true for yourself.

Be reconciled!

Test Case Organization – interface alignment

Can you relate to this experience?

It’s the beginning of a release cycle, which includes a change to a feature which has existed in your application since before you were on the team.  From the first release of the application, your team has continued to develop and save test cases for regression testing, with the intent of keeping them updated.  The ever-growing library of test cases is filed by release and feature.  After browsing through the library for some time without locating the tests for this feature, and with a deadline looming, you decide to develop your test cases from scratch, filing them as new feature tests for the current release.

The result of this approach is an unwieldy library of test cases, rarely referenced or reused.  I’ve come to believe the primary cause is the way in which test cases are organized and archived.  In most cases they are aligned by release.  This alignment favors program and project management over release and regression testing.  In comparison, the cost of testing is far higher than the cost of managing the overall release effort.

Beginning with the end in mind, the end-goal of any requirement and development effort is a feature utilized by an end-user.  That feature is most often part of a service or user interface.  The feature will be delivered as part of a release, but once delivered the feature is part of an interface.  So, the destination (the end) is the interface, NOT the release.

Since that is the case, what would the previous experience be like if the tests were developed and archived by interface rather than by release?

It’s the beginning of a release cycle, which includes a change to a feature which has existed in your application since before you were on the team.  From the first release of the application, your team has continued to develop and save test cases for regression testing, with the intent of keeping them updated.  The ever-growing library of test cases is filed by interface.  After briefly browsing through the library to the interface being updated, you find that, even with a deadline looming, you have plenty of time to review the existing test cases, make the necessary updates to align them with the new requirements, and annotate each with the current release and the requirement verified.

The result of this approach is an efficiently reusable library of tests suitable for both release and regression testing.  Each file will contain a history of each requirement verified and each release supported.  Accounting for the testing of all changes in a release will require locating and reviewing test files, but with this organization even that effort will be efficient.
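
As an illustration, here is one hypothetical shape such a library might take in Java: tests are filed under the interface they exercise, and simple annotations record the release and the requirements each test verifies.  The annotation types and all names are assumptions invented for this sketch, not an existing framework.

    // tests/ui/checkoutpage/CheckoutPageTests.java — filed by interface,
    // not by release.  Annotations accumulate the history as releases pass.
    package tests.ui.checkoutpage;

    @interface Release { String value(); }
    @interface Requirements { String[] value(); }

    public class CheckoutPageTests {

        @Release("2016.2")
        @Requirements({"REQ-1107", "REQ-1312"})
        public void orderSummaryForRegisteredUser() {
            // Updated in place when a release changes this interface,
            // rather than duplicated under a new release folder.
        }
    }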

Reconciling your current test library with this approach is best done over a period of time, starting with your current release.  The cost of transferring all existing tests to this new approach in one effort will be far higher than any benefit you may experience.  Instead, draw a line in the sand, leave the past behind, and develop just enough testing to support your current release effort.  For a time there may be need to reference previous tests during regression efforts, but this need will fade.

Be reconciled!

Validation Design – a more comprehensive approach

Habit #2 of Stephen Covey’s “The 7 Habits of Highly Effective People” says,

Begin with the end in mind

The end of a well-written test case is the validation.  And in my experience, beginning test development with the validation helps inform the development of the other phases of the test case — initialization and execution.

The most common practice in manual testing is to write a separate test case for verification of a specific requirement.  Each “passed” test case represents alignment of the application with a very specific requirement.  For manual testing, this approach makes sense, ensuring accurate and comprehensive accounting of each requirement.

However, this approach fails to leverage the computer’s ability to quickly and accurately verify any number of requirements, without the risk of overlooking a single one.  Though there is benefit to the speed at which the computer runs tests written in a manual style, automating tests using this approach will never unleash the full potential of the computer.

It is a widely held belief that a well-written test verifies one and only one requirement.  To fully realize the potential of the computer, this definition must be expanded for automated testing: a well-written test validates that the current state of a specific entity meets all related requirements.  Examples of entities are: a web page, a database record, or a browser cookie.

Consider it this way: Given the related systems in an initial state, When a path is executed through the application to a specific destination, Then the state of a specific entity can be validated.  Combined with a modular test design, this approach works well with agile practices like Behavior-Driven Development (BDD).

Rather than numerous test cases using equivalent test data and navigating the same path through the application, each verifying a very specific requirement, a single test can use one set of test data in an initial state, execute the necessary path, and validate any number of requirements for an entity.  The validation of an entity can be described using an individual verification for each requirement, making it easier to use the requirements to reconcile the actual results.

Reconciling your method of test development to this approach involves starting each test case with the validation statement.  In automated test cases this line will look something like “expected.validate(actual);”, with the validation details encapsulated in the validate() method.  In manual tests, this line will be something like “Validate [web page]”, with a reference to the validation details.  Continue writing each preceding line of your test until the test data definition is in place.  For many, this will seem awkward at first, but I encourage you to follow this approach strictly for at least one software delivery cycle, then evaluate the overall results of this method.
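
Here is a minimal Java sketch of a test written validation-first.  All of the collaborators (App, CheckoutPage, CheckoutPageExpectation) are hypothetical stand-ins; only the shape matters: the final validation line was written before everything above it.

    import java.util.Map;

    public class CheckoutValidationTest {

        // Hypothetical collaborators, supplied by the test harness.
        interface CheckoutPage { /* actual state of the checkout view */ }
        interface App { CheckoutPage checkoutWith(Map<String, String> data); }

        static class CheckoutPageExpectation {
            static CheckoutPageExpectation fromData(Map<String, String> data) {
                return new CheckoutPageExpectation();
            }
            void validate(CheckoutPage actual) {
                // One verification per related requirement, each reported
                // individually against the expected state.
            }
        }

        private App app; // injected by the harness in practice

        public void checkoutMeetsAllActiveRequirements() {
            // Initialization: test data in the required initial state.
            Map<String, String> data = Map.of("items", "2", "coupon", "none");

            // Execution: drive the necessary path to the destination entity.
            CheckoutPage actual = app.checkoutWith(data);

            // Validation: written first, one call verifies every requirement.
            CheckoutPageExpectation.fromData(data).validate(actual);
        }
    }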

Be reconciled!

Test Phases – the first step on a path to more resilient, efficient and informative test libraries

Having worked as an SDET on several great software development teams, I’ve had the opportunity to become familiar with various approaches to automated UI testing.  The truth is, there isn’t much variation at all.  The common approach is to develop a library of scripted automated test cases identical to a library of scripted manual test cases.

It may seem there is benefit to having the computer perform the same test steps that would be run manually, but there are heavy maintenance costs, and limitations to reporting and alignment with acceptance criteria.  This approach also fails to maximize the ability of the computer to perform a large number of validations in a minimum number of steps, providing accurate, detailed and meaningful results.

Every test case, whether manual or automated, includes three main phases:

  1. identify/create data to be used for the test
  2. navigate the necessary path through the UI using the identified data
  3. validate by comparing the actual results to the expected results.

These three phases are recognized in The Agile Alliance Guide, which recommends their use in writing Acceptance Criteria for User Stories.  That guide uses the terms Given, When and Then, which are growing in use.  This format improves the readability of manual tests within a library and reveals commonalities, but there is little that can be done to take full advantage of identified commonalities.

Though some automated test libraries apply this format to group test steps and provide some code reuse, they tend to fall short in providing effective reuse at the level of the common test phases: initialization, execution, and validation.  Developing modules, if you will, at this higher “phase” level makes it possible to build a more resilient and efficient library of tests that provide more detailed and useful test results.

When tests reuse Initialization modules, reliable test data can be provided without the overhead of maintaining test-specific data.  When tests reuse Execution modules, resilient paths through the application can be maintained with minimal effort.  When tests reuse Validation modules, verifications of specific requirements are readily available for review.

This modular approach results in much more thorough verification in fewer test executions, and a library of tests that invites change, making it much more responsive in an agile environment.

You can begin reconciling your method of test case development to this approach, whether working with an existing test library or starting from scratch.  The first step is to label clearly the three phases as they exist in tests you review or develop.  The comments “Initialization”, “Execution” and “Validation” (or if you prefer, Given, When and Then) can be used.  Doing so will provide the information needed to take this approach to the next level.
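
For example, an existing scripted test might look like this after that first step, unchanged except for the phase labels.  The helper types here are hypothetical stand-ins for whatever your library already uses.

    public class TransferTest {

        // Hypothetical helpers, standing in for your existing library.
        interface Account { String owner(); }
        interface Accounts { Account createWithBalance(String amount); }
        interface App {
            void login(String owner);
            void transfer(Account from, Account to, String amount);
            String balanceOf(Account account);
        }

        private Accounts accounts; // provided by the harness
        private App app;           // provided by the harness

        public void transferBetweenAccounts() {
            // Initialization (Given): identify/create the data for this test.
            Account source = accounts.createWithBalance("100.00");
            Account target = accounts.createWithBalance("0.00");

            // Execution (When): navigate the necessary path using that data.
            app.login(source.owner());
            app.transfer(source, target, "25.00");

            // Validation (Then): compare actual results to expected results.
            assert app.balanceOf(source).equals("75.00");
            assert app.balanceOf(target).equals("25.00");
        }
    }

Once the labels are in place, repeated Initialization, Execution and Validation fragments become obvious candidates for extraction into the shared modules described above.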

Be reconciled!