Hey! Software engineers out there! Have you ever honestly considered the idiocy of how we approach testing fully integrated software systems? If we applied our approach to any other mature industry, we would be laughed out of existence.
Can you imagine a car manufacturer crash-testing its vehicles using several crash dummies, each fitted with sensors for only one body part? They set up the dummy with head sensors, crash the car head-on, then read the results to ensure compliance with regulatory standards. They then set up the dummy with torso sensors and crash the car head-on again. Next, they fit the dummy with head sensors once more and run a side-impact crash.
How long do you think that car manufacturer could remain in business? The cost in human and material resources would be astronomical. The delay in shipment would keep them out of the market, leaving them far outpaced by their competition. The approach of using one sensor per test may be best practice for testing a component (e.g. an airbag) pre-assembly, but it is asinine when testing a post-assembly product (e.g. a vehicle). Yet this is commonly the approach used in testing fully integrated software systems: each test with a single verification (or sensor, if you will). Is it any wonder that the cost of testing fully integrated systems is so high, and that there are so many delays in delivering finished product?
Now imagine applying to software development the approach used in mature industries. Given data for a unique scenario, perform a set of actions, with sensors (verifications, i.e. assertions) in place to confirm any number of result states, each reporting its outcome regardless of how many others indicate failure. A multitude of individual results can now be measured against any number of requirements, in an instant. No sensor depends on the success of any other sensor. This is like fitting a test dummy with head, torso, arm and leg sensors, submitting the vehicle to a head-on collision, then reading the results of each and every sensor individually. Use the same "fully sensorized" dummy in every crash scenario, and the number of unique test executions required drops dramatically. Doing the same for each software scenario has the same result: a smaller, more manageable test library, and a greatly reduced number of unique test executions.
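The "one scenario, many sensors" idea above can be sketched with a soft-assertion pattern: run the expensive end-to-end scenario once, record every verification result, and fail the test only after all checks have reported. This is a minimal illustration, not a specific framework's API; the names (`SoftCheck`, the checkout scenario and its fields) are hypothetical stand-ins.

```python
class SoftCheck:
    """Collects assertion failures instead of stopping at the first one."""
    def __init__(self):
        self.failures = []

    def verify(self, label, condition):
        # Each call is one "sensor": record the outcome, keep going.
        if not condition:
            self.failures.append(label)

    def report(self):
        # Raise once, at the end, listing every failed sensor.
        if self.failures:
            raise AssertionError("Failed checks: " + ", ".join(self.failures))

def run_checkout_scenario():
    # Stand-in for one expensive end-to-end execution ("one crash").
    return {"status": 200, "total": 42.00, "items_shipped": 2, "email_sent": True}

def test_checkout():
    result = run_checkout_scenario()  # crash the car once
    check = SoftCheck()
    check.verify("http status", result["status"] == 200)          # head sensor
    check.verify("order total", result["total"] == 42.00)         # torso sensor
    check.verify("items shipped", result["items_shipped"] == 2)   # arm sensor
    check.verify("confirmation email", result["email_sent"])      # leg sensor
    check.report()

test_checkout()
```

Many test frameworks already support this style natively (e.g. grouped or "soft" assertions), so in practice you would reach for your framework's built-in mechanism rather than rolling your own.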
By reconciling our approach to validating post-assembly software with that used by other mature industries, I believe we can maintain comprehensive, resilient, effective and efficient testing of fully integrated software systems. The result promises to be higher-quality software delivered in a sustainable release cycle at greatly reduced cost.