I come from a QA background.
I started off my career doing testing. I learned about testing the same way I learned about development: by reading lots of books and applying what I learned.
This is how you are supposed to do QA:
- Write a test plan for the project.
- Gather requirements. Put them into a repository and give them cold hard labels like REQ1.2.5.
- Create test cases from the requirements. Store those test cases in a tracking system that can map them back to individual requirements.
- Execute the test cases; each test step should either pass or fail. Record the results. File any bugs found in the bug tracking system, which maps each bug to the test that failed and the associated requirement.
There is more to it, but that is the basic outline.
But it doesn’t work in Agile!
Heck, it doesn’t work anywhere. At least not the requirements part. That is the part that is always missing, or woefully inadequate.
Don’t panic though. It makes sense. Let’s think this through. The reason we all started jumping off the waterfall ship and into Agile is that we realized it is almost impossible, and certainly worthless, to try to define requirements up front.
With that idea in mind, how do we test? How can we test in an Agile environment?
The first goal is automation
Let me explain to you why this is the most important part. It may seem a little out of order, but bear with me.
When you are doing iterative development, you are building things little by little. You're not waiting until the end to test things, so there is a really good chance something you do in any iteration will break something you did earlier.
How do you deal with this? Regression testing. Yes, you probably need it in waterfall too, but in Agile it is an absolute must; otherwise you’ll find that for every step forward you take, you take a step backward as you fix something that you broke.
Running an Agile project without automation is like walking the tight-rope without a safety net, except the tight-rope is on fire, someone is strumming it like a banjo, and you're wearing two left shoes.
People groan when I say this, but I firmly believe it and I have done it several times.
Part of every backlog’s done criteria should be that it has automated tests!
Unless you have automated tests for each piece of functionality you build (or at least for the pieces you care about keeping working), or an army of manual testers running your regression suite every iteration, your software will break in your customer’s hands.
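As a minimal sketch of what that safety net looks like in practice (the `apply_discount` feature is invented for illustration, not from any real project): a regression test locks in behavior you shipped in an earlier iteration, so a later iteration can’t silently break it.

```python
# Hypothetical example: a plain automated regression test for a small
# feature built in a previous iteration. apply_discount is a made-up name.

def apply_discount(price, percent):
    """Feature that shipped in an earlier iteration."""
    return round(price * (1 - percent / 100), 2)

def test_discount_still_works():
    # If any later change breaks this behavior, the suite fails immediately.
    assert apply_discount(100.0, 10) == 90.0
    assert apply_discount(19.99, 0) == 19.99

test_discount_still_works()
```

Run something like this on every commit (or at least every iteration) and “did I just break last month’s work?” becomes a question the suite answers for you.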
The next goal is to change the focus
Testing is traditionally an activity that happens at the end of development and acts as the gatekeeper for software. It has to go through QA before it gets released.
That is not a good model for Agile, because what happens when there is a bug? If testing happens at the end of the iteration and a bug is found, there is no time to fix it.
What if we change the order of things, change the focus?
When you write your code, you write unit tests first, right? (Or if you like BDD, you write the specifications, or behaviors.) And you run them continuously as you're working on your code to make sure you don’t break them, right?
We can apply the same thing at the higher level.
- We can focus on creating a failing automated test before implementing the software.
- We can iteratively build more failing automated tests and make each one pass by building the software.
- We can run those tests as we build the software.
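The cycle above can be sketched in a few lines (the login feature and all names here are hypothetical, invented for illustration): write the acceptance test first, watch it fail, then build just enough software to make it pass.

```python
# Hypothetical test-first sketch. The test below is written before the
# feature exists; `login` is then implemented to make the failing test pass.

def test_valid_credentials_log_in_and_invalid_ones_do_not():
    users = {"alice": "s3cret"}
    assert login("alice", "s3cret", users) is True
    assert login("alice", "wrong", users) is False

# Step two: the smallest implementation that turns the red test green.
def login(username, password, users):
    return users.get(username) == password

test_valid_credentials_log_in_and_invalid_ones_do_not()
```

Then you repeat: add the next failing test (locked-out accounts, empty passwords, whatever the conversation with the backlog owner surfaces) and grow the software to satisfy it.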
How many times have you built a feature only to have it tested and find out the test cases were not at all testing what you built? (I’ll raise my hand.)
That happens when we create the test cases independently from the development of the software.
What we want to do is blur the lines between testing and development. We want to get close to continuous testing. Just like we do when we are running unit tests as we are writing code. Short feedback cycles are the key.
Sounds good, but how do we do it?
Here is a walkthrough of what your workflow might look like when doing Agile testing.
- Backlog item is pulled into iteration.
- Team members talk to the backlog owner about the backlog. The goal of this conversation is not to nail down every detail, but to understand the bigger picture of the backlog and gather enough information to get started.
- Team members discuss the first, most basic automated test to write for this backlog.
- Team starts writing the automated test.
- Team starts writing the code to make the automated test pass.
- Team goes back to backlog owner to demonstrate completed parts, get feedback, and ask for more clarification.
- Team continues to write automated tests, make them pass, and have conversations and demonstrations with backlog owner.
- When backlog owner is happy and backlog is complete, and everyone considers it well-tested, the backlog is done.
Observations about that process:
- Steps can happen in parallel or out of order, and steps can overlap.
- Steps can be split among team members.
- I erased the words “QA person” and “Developer” and replaced them with “Team.” We don’t care so much about roles; we want to blur those lines as much as possible.
- There aren’t really phases for development, QA, etc. Development involves testing as you go; when the backlog is done being “developed,” it is done.
- The focus is on making the customer happy, not on passing an arbitrary set of requirements the customer gave you up front, or that you tried to interpret. Take a look at my post comparing user-story requirements gathering to hanging a picture for more on this.
- There is no useless documentation. All documentation is executable.
- There is no “gatekeeper.” Quality is being built in as we go; we are not trying to “test it in” at the end of the iteration.
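One hedged sketch of what “executable documentation” can look like: a BDD-style test whose name and Given/When/Then comments double as the spec (the shopping-cart feature here is invented for illustration).

```python
# Hypothetical executable spec: the test name and its comments are the
# documentation, and running the test verifies the documented behavior.

def add_item(cart, item, qty=1):
    """Add qty of item to the cart, accumulating existing quantities."""
    cart[item] = cart.get(item, 0) + qty
    return cart

def test_adding_the_same_item_twice_increases_its_quantity():
    # Given an empty cart
    cart = {}
    # When the same item is added twice
    add_item(cart, "apple")
    add_item(cart, "apple")
    # Then the cart holds that item with quantity 2
    assert cart == {"apple": 2}

test_adding_the_same_item_twice_increases_its_quantity()
```

Unlike a requirements document, this “document” can never drift out of date without the build turning red.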
What is the role of the QA person?
It might seem that this process has eliminated that role, or diminished its value, but nothing could be further from the truth.
What we have actually done is elevate the role from writing documentation and manually executing tests to being the expert on quality and the customer’s representative on the team.
You can think of the QA person as the “quality coach” for the team. They might not always be the one creating the automated tests or running them, but they are one of the main forces guiding the direction of the creation of those tests, and making sure the customer’s interests are represented.
The QA team members will also occasionally manually test things that cannot be automated, and do exploratory testing as needed, but the focus should shift toward being the “quality coach” for the team instead of doing all of the quality work.
If you are interested in further reading on the topic, I found this excellent presentation on Agile Testing by Elisabeth Hendrickson of Quality Tree Software.