Simplified Testing

Written By John Sonmez

I've been struggling with the collision of two worlds: the waterfall approach to testing and the agile approach to testing.  There really isn't a good, solid statement of what “end to end” and “system testing” look like in agile. It's time we simplified testing.

In keeping with simplicity, let's break it down, starting with the end goal: to produce shippable code at the end of an iteration.

This comes from the agile principle of:

Deliver working software frequently, from a couple of weeks to a couple of months, with a preference to the shorter timescale.

In order to deliver working software, that software must be usable and complete at that point.  Almost every agile process involves completing one user story before moving on to the next (at least within the set of user stories in an iteration).

Shippable software is software that has been completely tested.  Getting to that point within a single iteration is not attainable if you hold onto the fallacy of “system testing.”  What is “system” or “end to end” testing, anyway?  I have never seen a true system-level defect that did not also exist at a functional level.  What value are you getting by retesting the whole system when you only changed a small part of the functionality?  And when you do find defects at that high level, you end up having to triage them back down to a root cause anyway.

So what if we cut that out?  What if, instead, we said that all desired functionality in the system is represented by automated tests that are produced in conjunction with the code?

What if we did just two very simple things:

  1. For each user story we write automated tests that define what we expect from the system; when those tests pass, the story is done (see the sketch after this list). Check out Kent Beck's book, Test-Driven Development.
  2. Each time we finish a story, we run all of the previously written automated tests to ensure nothing is broken. A good general book on testing is Testing Computer Software.
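
To make these two steps concrete, here is a minimal sketch in Python with pytest-style tests. The user story (“a shopper can add an item to a cart and see the running total”), the ShoppingCart class, and the test names are all hypothetical, invented only to show what a story expressed as automated tests might look like; your stories and your tooling will differ.

```python
# Step 1: the user story "a shopper can add an item to a cart and see the
# running total," expressed as automated tests.  ShoppingCart is a stand-in
# for whatever production code the story ends up producing; it is included
# here only so the example runs as-is.

class ShoppingCart:
    """Tiny in-memory cart used to illustrate a story's acceptance tests."""

    def __init__(self):
        self._items = []

    def add(self, name, price):
        self._items.append((name, price))

    @property
    def total(self):
        return sum(price for _, price in self._items)


def test_empty_cart_has_zero_total():
    assert ShoppingCart().total == 0


def test_adding_an_item_increases_the_total():
    cart = ShoppingCart()
    cart.add("widget", 9.99)
    assert cart.total == 9.99
```

Step 2 then falls out almost for free: a runner such as pytest discovers every test in the project, so “finishing” a story means the new tests pass and none of the earlier ones have started failing.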

If we do those two things, and we make sure the user of the system (or the person representing the user) has agreed that the automated tests adequately cover what they want the story to accomplish, is there any need to do anything else?  Or is it perhaps that this supposedly very complex methodology of testing really is just that simple?