Software Testing for Dummies – exposed and ridiculed


Hey! Software engineers out there! Have you ever honestly considered the idiocy of how we approach testing of fully integrated software systems? If we applied our approach in any other mature industry, we would be laughed out of existence.

Can you imagine a car manufacturer, in crash testing its vehicles, using several crash dummies, each fitted with sensors for one particular body part? They set up the dummy with head sensors, crash the car head-on, then read the results to confirm compliance with regulatory standards. They then set up the dummy with torso sensors, crash the car head-on again, then read the results. Next, they set up the dummy with head sensors again and run a side-impact crash. And so on: one sensor, one crash, at a time.

How long do you think that car manufacturer could remain in business? The cost in human and material resources would be astronomical. The delay in shipment would keep them out of the market, far outpaced by their competition. Using one sensor per test may be best practice for testing a component (e.g. an airbag) pre-assembly, but it is asinine when testing a post-assembly product (e.g. a vehicle). Yet this is commonly the approach used in testing fully integrated software systems: each test carries a single verification (or sensor, if you will). Is it any wonder that the cost of testing fully integrated systems is so high, and that there are so many delays in delivering the finished product?

Now imagine applying to software development the approach used in mature industries. Given data for a unique scenario, perform a set of actions to reach a destination state, with sensors (verifications) in place to confirm any number of result states, each reporting its own feedback regardless of how many others indicate failure (sensors = assertions). A multitude of individual results can then be measured against any number of requirements in an instant, and no sensor is dependent on the success of any other sensor. This is like fitting a test dummy with head, torso, arm and leg sensors, subjecting the vehicle to a single head-on collision, then reading the results of each and every sensor individually. Use the same “fully sensorized” dummy in every crash scenario, and the number of unique test executions required is greatly reduced. Doing the same for each software scenario has the same result: a smaller, more manageable test library and a greatly reduced number of unique test executions.
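To make that concrete, here is a minimal sketch of the pattern using Python's standard unittest framework. The scenario, the place_order stand-in and all of the checked values are hypothetical; the point is that the scenario executes once (one “crash”) while each subTest acts as an independent sensor, so one failing verification is recorded without silencing the rest.

```python
import unittest
from types import SimpleNamespace


def place_order(customer, items):
    # Hypothetical stand-in for the real system under test; a real test
    # would drive the deployed, fully integrated system here.
    return SimpleNamespace(status="CONFIRMED", lines=items, total=24.98,
                           email_sent=True)


class CheckoutScenarioTest(unittest.TestCase):
    def test_checkout_scenario(self):
        # Run the scenario once -- the single "head-on collision".
        result = place_order(customer="alice", items=["book", "pen"])

        # Then read every "sensor". Each subTest failure is reported
        # individually; the remaining checks still execute.
        with self.subTest(sensor="order status"):
            self.assertEqual(result.status, "CONFIRMED")
        with self.subTest(sensor="line item count"):
            self.assertEqual(len(result.lines), 2)
        with self.subTest(sensor="order total"):
            self.assertAlmostEqual(result.total, 24.98)
        with self.subTest(sensor="confirmation email"):
            self.assertTrue(result.email_sent)


if __name__ == "__main__":
    unittest.main()
```

Whether the sensors are subTests, soft assertions or a home-grown results collector matters less than the principle: one execution of the scenario, many independent verifications.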

By reconciling our approach to validating post-assembly software with the approach used by other mature industries, I believe we can maintain comprehensive, resilient, effective and efficient testing of fully integrated software systems. The result promises to be higher-quality software, delivered in a sustainable release cycle and at greatly reduced cost.

Be reconciled…

2 thoughts on “Software Testing for Dummies – exposed and ridiculed”

  1. This is great, Craig. It certainly makes sense in the context you provide, but how applicable is it to specific testing scenarios? Are there situations where this methodology has clear advantages? Disadvantages?

    • Aaron,
      I have to start by apologizing for the long delay in my reply. Concerning your question: other than external processes and expectations, I can’t think of a single instance in which this approach is not both applicable and greatly advantageous. Traditionally, manual and automated tests are organized by things like Release, Sprint, User Story and/or Requirement, which makes it easy to confirm that specific requirements have been validated by testing. Even with that approach to test organization, the comprehensiveness of verification increases without a corresponding increase in execution time.

      However, if tests are organized by the system under test (application, orchestration web services, microservices, etc.), then the number of tests executed can be greatly reduced while maintaining comprehensive coverage. The only disadvantage I have encountered is that, for some clients, it requires introducing run-time compilation of a coverage report that aligns each test with the Requirement, User Story, Sprint and/or Release being validated.
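As an illustration of what that run-time coverage report might look like, here is a minimal Python sketch. The decorator name, the artifact IDs and the test names are all hypothetical; the idea is simply that tests are tagged with the backlog artifacts they validate, and a traceability report is compiled when the suite runs rather than baked into how the test library is organized.

```python
from collections import defaultdict

# Maps each backlog artifact (Requirement, User Story, Release, ...) to the
# scenario tests that validate it. Populated as tests are defined.
TRACEABILITY = defaultdict(list)


def validates(*artifact_ids):
    """Decorator recording which backlog artifacts a scenario test covers."""
    def wrap(test_fn):
        for artifact in artifact_ids:
            TRACEABILITY[artifact].append(test_fn.__name__)
        return test_fn
    return wrap


@validates("REQ-101", "US-2042", "Release-3.1")
def test_checkout_scenario():
    ...  # single execution, many independent verifications


@validates("REQ-101", "US-2051", "Release-3.1")
def test_refund_scenario():
    ...


def coverage_report():
    # One scenario test can appear under many artifacts, so a small library
    # of scenario tests still demonstrates coverage of the whole backlog.
    for artifact, tests in sorted(TRACEABILITY.items()):
        print(f"{artifact}: {', '.join(tests)}")


if __name__ == "__main__":
    coverage_report()
```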
