If you’ve been following me from the beginning of the Back to Basics series, you’ll know that I set out to reevaluate some of the commonly held truths of what best practices are, especially in regards to unit testing, dependency injection and inversion of control containers.
We’ve talked about what an interface is, cohesion and coupling, and even went a little bit off track to talk about sorting for a bit.
One of the recurring themes that kept showing up in most of the posts was unit testing. I talked about why unit testing is hard, and I defined four levels of unit testing.
- Level 1 – we have a single class with no external dependencies and no state. We are just testing an algorithm.
- Level 2 – we have a single class with no external dependencies but it does have state. We are setting up an object and testing it as a whole.
- Level 3 – we have a single class with at least one external dependency, but it does not depend on its own internal state.
- Level 4 – we have a single class with at least one external dependency and depends on its own internal state.
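To make the first two levels concrete, here is a minimal sketch (in Python, with hypothetical classes; the same shape applies in Java or C#) of what a level 1 test and a level 2 test look like:

```python
# Level 1: a pure algorithm -- no dependencies, no state.
def apply_discount(cents, percent):
    return cents - cents * percent // 100

# Level 2: a stateful class, but still no external dependencies.
class ShoppingCart:
    def __init__(self):
        self._total = 0

    def add(self, cents):
        self._total += cents

    def total(self):
        return self._total

# Level 1 test: a single input -> output assertion.
assert apply_discount(1000, 10) == 900

# Level 2 test: set up the object, exercise it, check the resulting state.
cart = ShoppingCart()
cart.add(500)
cart.add(250)
assert cart.total() == 750
```

Notice that neither test needs a mock, an interface, or a container; that absence is exactly what makes these two levels cheap.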
Throughout this series I ended up tearing down the practice of using interfaces that have only a single non-unit-test implementation. I criticized the overuse of dependency injection for the sole purpose of unit testing. I attacked a large portion of best practices that I felt were only really being used in order to be able to unit test classes in isolation.
But, I never offered a solution. I told you what was bad, but I never told you what was good.
I said don’t create all these extra interfaces, don’t use IoC containers all over your app, and don’t put mocks everywhere just for the purpose of being able to isolate a class you want to unit test. But when you asked me what to do instead, I said “I don’t know, I just know what we are doing is wrong and we need to stop.”
Well, that is no answer, but I intend to give one now. I’ve been thinking about this for months, researching the topic and experimenting on my own.
I finally have an answer
But, before I give you it, I want to give you a little background on my position on the subject matter.
I come from a pretty solid background of unit testing and test driven development. I have been preaching both for at least the last 7 years.
I was on board from the beginning with dependency injection and IoC containers. I had even rolled my own as a way to facilitate isolating dependencies for true unit tests.
I think unit testing and TDD are very good skills to have. I think everyone should learn them. TDD truly helps you write object oriented code with small concentrated areas of responsibility.
But, after all this time, I have finally concluded that, for the most part, unit tests and practicing TDD in general do more good for the coder than for the software.
What? How can I speak such blasphemy?
The truth of the matter is that I have personally grown as a developer by learning and practicing TDD, which has led me to build better software, but not because the unit tests themselves did much.
What happened is that while I was feeling all that pain of creating mocks for dependencies and trying to unit test code after I had written it, I was learning to reduce dependencies and how to create proper abstractions.
I feel like I learned the most when the IoC frameworks were the weakest, because I was forced to minimize dependencies to avoid the pain of creating so many mocks, or risk not being able to unit test a class in isolation at all.
I’ve gotten to the point now where two things have happened:
- I don’t need the TDD training wheels anymore. I don’t pretend to be a coding god or demi-god of some sort, but in general the code I write that is done in a TDD or BDD style is almost exactly the same as the code I write without it.
- The IoC containers have made it so easy to pass 50 dependencies into my constructor that I no longer feel the pain that once pushed me to write better code.
What I find myself ending up with now when I write unit tests is 70% mocking code that verifies that my code calls certain methods in a certain order.
Many times I can’t even be sure if my unit test is actually testing what I think it is, because it is so complex.
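To show the kind of test I mean, here is a sketch (hypothetical class and method names) of a level 3/4-style unit test where nearly the whole body is mock plumbing and call-order verification rather than assertions about behavior:

```python
from unittest.mock import Mock, call

# Hypothetical class under test: it mostly forwards work to injected dependencies.
class OrderProcessor:
    def __init__(self, validator, repository, notifier):
        self.validator = validator
        self.repository = repository
        self.notifier = notifier

    def process(self, order):
        self.validator.validate(order)
        self.repository.save(order)
        self.notifier.send(order)

# The "test": three mocks, one line of actual exercise.
validator = Mock()
repository = Mock()
notifier = Mock()
OrderProcessor(validator, repository, notifier).process("order-42")

# All we end up verifying is that certain methods were called with certain
# arguments -- the test is coupled to the implementation, not the behavior.
validator.validate.assert_called_once_with("order-42")
repository.save.assert_called_once_with("order-42")
notifier.send.assert_called_once_with("order-42")
```

Rename a method or reorder the calls and this test breaks, even though the observable behavior of the system may not have changed at all.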
Umm, did you say you had an answer, dude?
Yes, I do have an answer. I just wanted to make sure you understand where I am coming from before I throw out all these years of practical knowledge and good practices.
I am not the enemy.
My answer to the problem of what to do, if you shouldn’t be using IoC containers and interfaces all over your code base just for the purpose of unit testing, is to take a two-pronged approach:
- Mostly only write level 1 or level 2 unit tests. Occasionally write level 3 unit tests if you have only 1 or possibly 2 dependencies. (I’ll talk more about how to do this in my next post.)
- Spend the majority of your effort, all the time you would have spent writing unit tests, instead writing what I will call blackbox automated tests, or BATs. (I used to call these automated functional tests, but I think that name is too ambiguous.)
I intend to drill really deep into these approaches in some upcoming posts, but I want to briefly talk about why I am suggesting these two things in place of traditional BDD or TDD approaches.
What are the benefits?
The first obvious benefit is that you won’t be complicating your production code with complex frameworks for injecting dependencies and other clever things that really amount to making unit testing easier.
Again, I am not saying you shouldn’t ever use dependency injection, interfaces or IoC containers. I am just saying you should use them when they provide a real tangible value (which most of the time is going to require alternate non-unit test implementations of an interface.)
Think about how much simpler your code would be if you just new’d up a concrete class where you needed it, instead of creating an extra interface for it and passing it in through the constructor. You just used it where you needed it and that was that.
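As a sketch of the contrast (the classes here are hypothetical), compare the injection-everywhere version with the direct one:

```python
# Injection-everywhere style: an extra seam and a constructor parameter
# that exist solely so a test can substitute a mock.
class EmailSender:
    def send(self, to, body):
        return f"sent '{body}' to {to}"

class InjectedSignup:
    def __init__(self, sender):  # every caller must wire this up
        self.sender = sender

    def register(self, email):
        return self.sender.send(email, "welcome")

# Simpler style: just use the concrete class where you need it.
class Signup:
    def register(self, email):
        return EmailSender().send(email, "welcome")

assert Signup().register("a@example.com") == "sent 'welcome' to a@example.com"
```

Both versions do the same thing; the second simply has one less type, one less constructor parameter, and one less decision for every caller to make.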
The second benefit is that you won’t spend so much time writing hard unit tests. I know that when I am writing code for a feature, I usually spend at least half of that time writing unit tests. This is mostly because I am writing level 3 and level 4 unit tests, which require a large number of mocks.
Mocks kill us. Mocking has a negative ROI. Not only is creating them expensive in terms of time, but it also strongly couples our test classes to the system and makes them very fragile. Plus, mocking adds huge amounts of complexity to unit tests. Mocking usually ends up causing our unit test code to become unreadable, which makes it almost worthless.
I’ve been writing mocks for years. I know just about every trick in the book. I can show you how to do it in Java, in C#, even in C++. It is always painful, even with auto-mocking libraries.
By skipping the hard unit tests and finding smart ways to make more classes require only level 1 and level 2 unit tests, you are making your job a whole lot easier and maximizing the activities that give you a high ROI. Level 1 and level 2 unit tests, in my estimation, give very high ROIs.
The third benefit is that blackbox automated tests are the most valuable tests in your entire system, and now you’ll be writing more of them. There are many names for these tests; I am calling them BATs now, but basically this is what most companies call automation. Unfortunately, most companies leave this job to a QA automation engineer instead of the development teams. Don’t get me wrong, QA automation engineers are great, but there aren’t many of them, good ones are very expensive, and the responsibility shouldn’t lie squarely on their shoulders.
BATs test the whole system working together. BATs are your automated regression tests for the entire system. BATs are automated customer acceptance tests, and the ROI for each line of code in a BAT can be much higher than the ROI of each line of production code.
Why? How is this even possible? It’s all about leverage, baby. Each line of code in a BAT may be exercising anywhere from 5 to 500 lines of production code, which is quite the opposite of a unit test, where each line of unit test code might only be testing 1/8th or 1/16th of a line of production code on average (depending on the code coverage numbers being reached).
I’ll save the details for later posts, but it is my strong opinion that a majority of a development team’s effort should be put into BATs, because BATs:
- Have high value to the customer
- Regression test the entire system
- Have a huge ROI per line of code (if you create a proper BAT framework)
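As a rough sketch of the idea (the tiny application and its command interface here are hypothetical; a real BAT would drive a deployed system through its UI or API), a BAT exercises the system only through its outermost public surface and asserts on observable results:

```python
# Hypothetical "whole system": an in-process app with a single entry point.
class App:
    def __init__(self):
        self._inventory = {"widget": 3}

    def handle(self, command):
        action, item = command.split()
        if action == "buy":
            if self._inventory.get(item, 0) > 0:
                self._inventory[item] -= 1
                return "ok"
            return "out of stock"
        if action == "stock":
            return str(self._inventory.get(item, 0))
        return "unknown command"

# The BAT: no mocks, no knowledge of internals. Each line of test code
# exercises every layer a real command would pass through.
app = App()
assert app.handle("buy widget") == "ok"
assert app.handle("stock widget") == "2"
```

The leverage comes from that last point: the test stays valid no matter how the internals are refactored, because it only ever talks to the system the way a customer would.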
Imagine how much higher quality your software would be if you had a BAT for each backlog item in your system which you could run every single iteration of your development process. Imagine how confident you would be in making changes to the system, knowing that you have an automated set of tests that will catch almost any break in functionality.
Don’t you think that is worth giving up writing level 3 and level 4 unit tests, which are already painful and not much fun to begin with?
In my future posts on the Back to Basics series, I will cover in-depth how to push more of your code into level 1 and level 2 unit tests by extracting logic out to separate classes that have no dependencies, and I will talk more about BATs, and how to get started and be successful using them. (Hint: you need a good BAT framework.)
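One way to picture that “extract logic” move ahead of those posts (names here are hypothetical): pull the pure decision out of the dependency-laden class so it can be covered by a trivial level 1 test, leaving only thin orchestration behind for the BATs to cover.

```python
# Before: imagine this calculation buried inside a class that also talks to
# a database and an email gateway, forcing level 3/4 tests full of mocks.

# After: the decision is a pure function with no dependencies at all...
def shipping_cost(weight_kg, is_express):
    base = 5 + 2 * weight_kg       # flat fee plus per-kilogram rate
    return base * 2 if is_express else base

# ...which makes its unit tests level 1: plain input -> output assertions.
assert shipping_cost(3, False) == 11
assert shipping_cost(3, True) == 22
```

The class that remains does little more than fetch data, call the function, and save the result, and that thin wiring is exactly what the end-to-end BATs already exercise.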