If you haven’t been following, it has been a while since my last post on this topic. I had a few distractions over the last few weeks, but I am back and ready to go!
When we last left off, we had just gotten our BATs included as part of the acceptance criteria for any new backlog items being worked on. This puts us at a point where we can really say that we have successfully implemented BAT testing.
You don’t want to get too comfortable just yet though, because the next hurdle you will most likely face will be the problem of not having enough time to execute all your tests.
You want to think about this ahead of time. There is nothing worse than getting everything going and then not being able to execute the entire test suite, because you didn’t plan ahead.
You don’t want to get to the point where confidence in your BAT suite is lost because you are not able to get all the tests executed in a reasonable amount of time.
The more frequently your tests are run the more value they have.
By reducing the cycle time between when a breaking change is made in the system and when it is discovered, you greatly reduce the risk it poses to your software, and you narrow the scope of the code changes in which it could have occurred.
To put it simply, the faster you can find out you broke something, the more likely you can fix it before it does damage, and the more likely you will be to know what thing you did caused the breakage.
How can we reduce cycle time?
There are a few different strategies we can employ and we can mix and match some of these strategies.
Straightforward parallelization
The best and most effective thing you can do is to take a given test run, split the execution of those tests across multiple machines, and execute them in parallel.
This approach is going to give you the best bang for your buck. You should really try to get some amount of parallelization going before attempting any other solution, since it is going to make a huge impact on the total execution time for your tests without making any sacrifices.
There are even many ways you can mix and match to do parallelization:
- Use multiple physical machines
- Run many virtual machines on a single host
- Put an executor program on every team member’s machine that will execute tests when that machine is idle or put into test execution mode (perhaps at night)
- Use a cloud based computing platform to execute your tests
- Run multiple browser instances on a single machine
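Whichever mix you choose, the core of the idea is just partitioning the suite and dispatching each partition to its own worker. Here is a minimal sketch in Python; the test file names and the worker dispatch are hypothetical placeholders (a real runner would shell out to your test framework or push chunks to remote machines):

```python
import concurrent.futures

# Hypothetical list of BAT test files; in practice you would discover
# these from your test project.
TEST_FILES = [f"bat_suite_{i}.py" for i in range(8)]

def split_into_chunks(items, n_workers):
    """Round-robin the test files across n_workers parallel executors."""
    chunks = [[] for _ in range(n_workers)]
    for i, item in enumerate(items):
        chunks[i % n_workers].append(item)
    return chunks

def run_chunk(chunk):
    # Each chunk could be dispatched to a separate machine, VM, or
    # browser instance; here we just record the dispatch locally.
    return [(test_file, "dispatched") for test_file in chunk]

if __name__ == "__main__":
    chunks = split_into_chunks(TEST_FILES, n_workers=4)
    with concurrent.futures.ThreadPoolExecutor(max_workers=4) as pool:
        results = [r for res in pool.map(run_chunk, chunks) for r in res]
    print(f"Dispatched {len(results)} test files across {len(chunks)} workers")
```

The round-robin split keeps chunk sizes balanced even when the number of tests doesn’t divide evenly by the number of workers.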
Preloading test data
With this approach, you would preload some of the test data that your tests would otherwise generate by clicking through screens.
This is fairly hard to explain, so let me give you an example:
Suppose you had a set of tests that all involved creating customers, but each customer takes about 3 minutes to get into the system by clicking through the screens.
We don’t need to have 500 tests all executing the same exact logic in the system 500 times for 3 minutes just to generate all the customer data that will be used in the tests.
Instead, we can leave a few tests that are exercising the customer creation functionality, and we can execute a SQL script to push all the other customer data into the test database for the other tests to use.
Using this technique we might be able to reduce our total execution time by 3 minutes * each test, or about 25 hours for 500 tests.
This can be a huge savings in time, and it doesn’t come at that high of a cost. The sanctity of our tests is slightly compromised, but we are taking a calculated risk here knowing that we already have covered the area of execution which we are preloading data for.
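As a rough sketch of what that preload script might look like, here is a bulk insert against an in-memory SQLite database. The table name and columns are made up for illustration; your real test database and schema will differ, and in practice this would be a SQL script run against it before the suite starts:

```python
import sqlite3

# Stand-in for the real test database; a hypothetical customers schema.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT, email TEXT)"
)

# Instead of spending ~3 minutes of UI clicking per customer,
# bulk-insert the rows that the other 500 tests will read.
customers = [
    (i, f"Test Customer {i}", f"customer{i}@example.test") for i in range(500)
]
conn.executemany(
    "INSERT INTO customers (id, name, email) VALUES (?, ?, ?)", customers
)
conn.commit()

count = conn.execute("SELECT COUNT(*) FROM customers").fetchone()[0]
print(f"Preloaded {count} customers")
```

The handful of tests that actually exercise customer creation still go through the UI; everything else reads the preloaded rows.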
Consider this technique when you notice certain things in the system taking a very long time to do.
Test runs by area
With this technique, we can reduce the total execution time in a given period by splitting up test areas and running them either at different times or in response to changes in certain areas.
You have to be very careful with this approach, because if you don’t do it correctly, you can start to erode some of the safety your BAT tests are providing you.
I would only do something like this as a last resort, because it is so easy to virtualize today, and hardware is so cheap.
I’d much rather run too many tests than too few.
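If you do go down this road, the safest shape I know of is an explicit map from source areas to the suites that cover them, with a fallback to running everything when a change doesn’t match a known area. The area names and suite names below are hypothetical:

```python
# Hypothetical mapping from source areas to the BAT suites covering them.
AREA_SUITES = {
    "billing/": ["bat_invoices", "bat_payments"],
    "customers/": ["bat_customer_create", "bat_customer_search"],
    "reports/": ["bat_monthly_report"],
}

def suites_for_changes(changed_files):
    """Select the suites whose area matches any changed file path."""
    selected = set()
    for path in changed_files:
        for area, suites in AREA_SUITES.items():
            if path.startswith(area):
                selected.update(suites)
    # Fall back to the full suite when a change maps to no known area,
    # so an unmapped change never silently skips coverage.
    if not selected:
        return sorted(s for suites in AREA_SUITES.values() for s in suites)
    return sorted(selected)

print(suites_for_changes(["billing/invoice.py"]))
```

The fallback is what keeps this from eroding your safety net: the worst case is running too many tests, never too few.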
Test randomization
With test randomization, we take our total desired execution time and divide it by the average time for running a test. We then use that number to randomize which tests are executed each run, running only the number of tests that will fit in the desired execution time.
This choice is also a compromise that I typically don’t like to take.
It can be very useful though, combined with other techniques when you still don’t have enough time to execute your entire test suite.
The basic idea here is that you are going to randomly run tests each time to fill up the time you have to run tests.
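The selection itself is just the budget calculation plus a random sample. A minimal sketch, with a seed parameter so a given night’s run can be reproduced when you need to chase down a failure:

```python
import random

def pick_random_subset(tests, desired_minutes, avg_minutes_per_test, seed=None):
    """Fit as many randomly chosen tests as the time budget allows."""
    budget = int(desired_minutes // avg_minutes_per_test)
    budget = min(budget, len(tests))
    rng = random.Random(seed)
    # sample() picks without replacement, so no test runs twice in one pass.
    return rng.sample(tests, budget)

tests = [f"test_{i}" for i in range(500)]
# e.g. a 60-minute window with tests averaging 1.5 minutes each -> 40 tests
tonight = pick_random_subset(tests, desired_minutes=60, avg_minutes_per_test=1.5)
print(len(tonight))
```

Over many runs every test gets exercised eventually, which is the calculated risk this technique trades for fitting in the time window.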
Reducing the number of tests
This one seems fairly obvious, but it can be very helpful.
Often I will see teams starting out with automation trying to write way too many BAT tests for a given feature. Sure, with automated tests it is possible to run a test for every single possible combination of values in your 5 drop downs, but will it really benefit you?
In many cases you have to think about what you are trying to protect against with your BATs. Sometimes running every combination of data selection choices is going to be important, but other times you are only going to need to write tests to test a few of the happy path scenarios.
It is important to find a balance between test coverage and test volume and not just for execution time. There is a logistical overhead to having a large volume of mostly redundant tests.
So even though this technique might seem dangerous and counter-productive, I will almost always employ it to some degree.
Here are some of the things you might want to watch out for as you are scaling out and streamlining your execution of BATs:
- Parallelization issues. If you are using shared state, you can run into big trouble when your tests are executing in parallel. There are many manifestations of this. You could have issues at the database level or at the local machine memory level. The best way to avoid this kind of problem is to use separate virtual machines for each test execution, and not reuse data setup between test cases.
- Ineffective error reporting. If you run a huge volume of tests in parallel, you better have a good way to sort through the results. It is much harder to figure out why things failed when they are run across multiple machines.
- Test order dependencies. Make sure tests don’t rely on being run in a certain order or you will have lots of pain when you disrupt that order.
- Environment setup. Make sure all your test execution environments are exactly the same unless you are specifically testing different environments for execution. You don’t want tests failing on one machine but passing on another.
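One simple defense against the shared-state problem above is to have every test generate its own collision-free data rather than reusing a common fixture. A minimal sketch, with hypothetical field names:

```python
import uuid

def unique_customer(prefix="bat"):
    """Generate collision-free test data so parallel runs never share rows."""
    token = uuid.uuid4().hex[:8]
    return {
        "name": f"{prefix}-customer-{token}",
        "email": f"{prefix}-{token}@example.test",
    }

a = unique_customer()
b = unique_customer()
assert a["email"] != b["email"]  # two parallel tests never collide
print(a["name"])
```

The prefix also makes it easy to sweep leftover test data out of the database after a run.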