Tag Archives: TDD

My Views On Test Driven Development

I used to be a huge supporter of TDD, but lately, I’ve become much more pragmatic about it.
In this video, I discuss test driven development and what my current views on it are.

Full transcript:

John:               Hey, John Sonmez from simpleprogrammer.com.  This week, I want to talk about, yes, a technical topic.  I’m going to be talking about test driven development.  I’ve got a few controversial views on test driven development, but I’ll explain why I hold those views.  Hopefully, I’ll convince you at least to some degree that what I’m saying makes some sense.

I used to be a really big proponent of test driven development.  I pushed for it really hard way back when hardly anyone was doing unit testing, and definitely very few people were doing test driven development.  I would often go into an organization or join a team and really push for test driven development and for unit testing.  This was a good thing.  I ended up improving the quality of a lot of code by pushing for test driven development and for unit testing in general, but I was a little too strict about it.  I had code coverage requirements of around 95%, asking that 95% of the production code be covered by tests.

Obviously, my views on test driven development have changed, or I wouldn’t be telling you this.  They haven’t changed to a huge degree.  They’ve changed to be more pragmatic.  At least that’s how I think of it.  Let me explain, first of all, why I think test driven development is good.  If you’re not doing any kind of unit testing at all, I’d encourage you to do some kind of test driven development.  That’s because the biggest benefit I’ve found from test driven development is that it helps you become better at writing good code.

The reason this is true is that when you write the test first, it forces you to use the APIs of your code.  It forces you to use that production code, which influences you to make the structure of that code easier to use and more understandable.  When we just write a bunch of code first without thinking about how it’s going to be used, we’re guessing at how people are going to use it and how easy it’s going to be to use.  When you write unit tests first and then write the code to make those unit tests pass, you’re essentially starting out by defining the API, which is really valuable for writing quality, clear code that’s easy to understand.
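As a minimal sketch of that idea (in Python, with an invented Stack class; none of these names come from the video), the test is drafted first, and the calls inside it decide what the API looks like before any implementation exists:

```python
import unittest


class Stack:
    """Written second: the implementation just satisfies the test below."""

    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)

    def pop(self):
        return self._items.pop()

    def is_empty(self):
        return not self._items


class StackTest(unittest.TestCase):
    """Written first: these calls defined push/pop/is_empty as the API."""

    def test_push_then_pop_returns_last_item(self):
        stack = Stack()
        stack.push(1)
        stack.push(2)
        self.assertEqual(stack.pop(), 2)

    def test_new_stack_is_empty(self):
        self.assertTrue(Stack().is_empty())
```

Run with `python -m unittest`.  The point isn’t the stack itself; it’s that writing the test first forced a decision about method names and behavior before the production code existed.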

That’s really the big benefit for me.  Of course, there’s regression as well; you can run those unit tests.  But I tend to think regression is not so important anymore as far as unit testing goes.  Now, don’t get me wrong.  You should still have automated tests at a higher level, but as far as unit testing goes, those regression tests really don’t prove much.  I’ve found that in most complicated software systems, when a unit test fails, we have to go and debug it and figure out why it failed, and most of the time it’s a false alarm.

I would say there have been very few times a unit test failed where I said, “Oh, this caught a bug.”  More often than not, I’ve said, “Well, is this test actually correct?” or, “Oh, yeah, that was expected,” or, “This mock failed.”  That’s the bad side of it.

Let me tell you what I think now.  I think test driven development is like training wheels for writing good code.  It should be used in certain cases, but not all the time.

A lot of times we end up blanket-enforcing test driven development.  What we end up with is code that has tons of mocks or dependencies passed in.  We might use inversion of control or dependency injection containers to pass all kinds of dependencies into our code, and then we mock it all up in unit tests when we’re doing test driven development.  This is a bad way to go, because those tests become harder to write than the actual code, and the test code we write ends up being error prone; we could have bugs in the test code.  Who is going to test the test code?  Are we going to write tests to test the test code?  No, we’re not going to do that.
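A Python sketch of what this looks like in practice (the OrderService and its three injected dependencies are invented for illustration, not from the video): the mock setup and verification outweigh the behavior actually being checked.

```python
from unittest.mock import Mock


class OrderService:
    """Coordinator class wired up entirely through injected dependencies."""

    def __init__(self, repository, payment_gateway, notifier):
        self.repository = repository
        self.payment_gateway = payment_gateway
        self.notifier = notifier

    def place_order(self, order_id, amount):
        self.payment_gateway.charge(amount)
        self.repository.save(order_id, amount)
        self.notifier.send(order_id)
        return True


def test_place_order():
    # Three mocks to build, three interactions to verify: the test is
    # mostly scaffolding, and it breaks on any refactor that changes
    # how the collaborators are called, even if behavior is unchanged.
    repository, payment_gateway, notifier = Mock(), Mock(), Mock()
    service = OrderService(repository, payment_gateway, notifier)

    assert service.place_order(42, 9.99) is True

    payment_gateway.charge.assert_called_once_with(9.99)
    repository.save.assert_called_once_with(42, 9.99)
    notifier.send.assert_called_once_with(42)
```

Notice that the test exercises almost no logic; it mostly restates the wiring of the class it tests.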

I think it makes a lot of sense to be pragmatic about it.  If you’re just starting out writing code, you should definitely be doing test driven development for all your code, because you need to learn how to write good code.  When I started doing test driven development, it really taught me how to write good code and good APIs, and it really improved the quality of my code.

If you’ve been writing code for a while and you’ve been doing test driven development, you will probably reach a point where you start to feel like these tests aren’t adding value.  When you feel like the tests aren’t adding value (this is my personal belief), stop writing them.  You want to do test driven development, or write a unit test, where it makes sense, where it’s going to give you some value.  If you feel like it’s not creating value, stop doing it, because chances are it’s not.  You can also refactor and structure your code in such a way that it’s easier to write unit tests.

I’ve written on my blog that there are only 2 roles of code.  If you do a search for “2 roles of code,” you will find my post.  Basically, I say that there are algorithms, and then there’s code that ties everything together.  If you can separate your code into those 2 types, and don’t have them mixed together in classes, then you can write unit tests or do test driven development on the algorithm part without a lot of dependencies and without having to use a lot of mocks.  Those tests are going to be highly valuable, and it’s just going to be really beneficial to you.

If you have everything mixed together and you’re finding that you’re having to write a lot of mocks and you find that your test code is really complicated, then you might really want to question the value of doing test driven development in that case.

Overall, I think test driven development is good.  It’s really, really good for beginners.  If you’re starting out, you need to start practicing this in order to become better at development.  As you get more experience, you have to use your common sense.  You have to be pragmatic about it.  If you feel like the tests aren’t adding value or they’re just wasting time, then that might be true.

Well, I hope this video didn’t get you too ticked off.  I definitely have a different view on test driven development than I did earlier in my career.  Like I said, this comes from my experience.  If you disagree with me, let me know.  Send me a comment down below or send me an e-mail.  Don’t forget to subscribe to this channel and I hope you have a good week, and I will talk to you again next time.  See you later.


The More I Know, the Less I Know

I used to be very confident in my abilities as a software developer.

I used to be able to walk up to a group of software developers and tell them exactly what they were doing wrong and exactly what was the “right” way to do things.

I used to be sure of this myself.


It wasn’t even that long ago.  Heck, when I look at the blog posts I wrote 3 years ago I have to slap myself upside my head in realization of just how stupid I was.

Not only was my writing bad, but some of my thoughts seem so immature and uneducated that it feels like a completely different person wrote them.

And I wrote those posts back when I knew it all.

The more I learn, the less I know

Lately I’ve been running into situations more and more often where I don’t have a good answer for problems.

I’ve found myself much more often giving 3 pieces of advice attached with pros and cons rather than giving a single absolute—like I would have done perhaps 3 years ago.

I’ve been finding, as I learn more and more (the past 3 years have been a period of accelerated growth for me), that I am becoming less and less sure of what I know and more and more convinced that there is no such thing as a set of best practices.

I’ve even spent some time postulating on whether or not commonly held beliefs of best practices would be thrown completely out the window given a significant enough motivation to succeed.

My point is that the more doors I open, the more aware I become of the multitude of doors that exist.


It is not just the realization of what I don’t know, but also the realization of the weakness of the foundation I am already standing on.

Taking it out of the metaphysical

Let’s drop down out of the philosophical discussion for a bit and talk about a real example.

Perhaps the biggest quandary I struggle with is whether or not to unit test or practice TDD and its variants.

The 3 years ago know-it-all version of me would tell you emphatically “yes, it is a best practice and you should definitely do it all the time.”

The more pragmatic version of me today says, in a much more uncertain tone, “perhaps.”

I don’t want to delve into the topic in this post since I am sure I could write volumes on my ponderings in this area, but I’ve come to a conclusion that it makes sense to write unit tests for code that has few or no dependencies and that it does not make sense to do so for other code.

From that I’ve also derived that I should strive to write code that separates algorithms from coordinators.

I still even feel today that my advice is not wholly sound.  I am convinced it is a better approach than 100% TDD and unit tests, or no TDD and unit tests at all, but I am not convinced there isn’t a deeper understanding and truth that supersedes my current thoughts on the matter.

As you can imagine this is quite frustrating and unsettling.

Silver bullets and best practices

What I am coming to realize more and more is that there are no silver bullets and more surprisingly there are not even any such things as best practices.


Now I’ve heard the adage of there being no silver bullets so many times that it makes me physically sick when I hear someone say it, because it is so cliché.

But, I’ve had a hard time swallowing the no best practices pill.

I feel like if I abandon this ship, I am sailing on a life raft in the middle of a vast ocean with no sails and no sense of which direction to go.

A cornerstone of my development career has been the learning, applying and teaching of best practices.  If these things don’t exist, have I just been peddling snake oil and drinking it myself?


Best practices are simply concrete applications of abstract principles in software development that we cannot directly grasp or see clearly enough to absolutely identify.

Breaking this down a bit, what I am saying is that best practices are not the things themselves to seek, but through the pursuit of best practices we can arrive at a better understanding of the principles that actually are unchanging and absolute.

Best practices are optimal strategies for dealing with the problems of software development based on a particular context.  That context is primarily defined by:

  • Language and technology choice
  • Meta-game (what other developers and perceived best practices are generally in place, and how software development is viewed and understood at a given time)
  • Individual skill and growth (what keeps me straight might slow you down; it depends on where you are in your journey)

There is a gentle balance between process and pragmatism.

When you decide to make your cuts without the cutting guide, it can make you go faster, but only if you know exactly what you are doing.

Where I am now

Every time I open my mouth I feel like I am spewing a bunch of bull crap.

I don’t trust half of what I say, because I know so much of it is wrong.

Yet I have perhaps 10 times more knowledge and quite a bit more experience in regards to software development than I did just 3 years ago.

So what gives?

Overall, I think I am giving better advice based on more practical experience and knowledge; it is just that I am far more aware of my own shortcomings, and of how stupid I am even today.

I have the curse and blessing of knowing that only half of what I am saying has any merit and the other half is utter crap.

Much of this stems from the realization that there are no absolutely right ways to do things and no best answers for many of the problems of software development.

I used to be under the impression that someone out there had the answer to the question of what is the right way to develop software.


I used to think that I was picking up bits of knowledge, clues, that were unraveling the mystery of software development; that someday I would have all the pieces of understanding and could tell others exactly how they should be developing software.

What I found instead was that not only does nobody know the “right” way to develop software, but that it is perhaps an unknowable truth.

The best we can do is try to learn from obvious mistakes we have made before, start with a process that has had some level of success, and modify what we do based on our observations.

We can’t even accurately measure anything about software development, and to think we can is just foolishness.

From story points to velocity to lines of code per defect, and so on and so forth, all of those things are not only impossible to measure accurately, but they don’t really tell us whether we are doing better or not.

So, what is my point?

My point is simple.

I have learned that not only do I not have all the answers, but I never will.

What I have learned is always subject to debate and is very rarely absolute, so I should have strong convictions but hold onto them loosely.

And most importantly, don’t be deceived into thinking there is a right way to develop software that can be known.  You can improve the way you develop software and your software development skills, but it will never be based on an absolute set of rules that come down from some magical process or technology.

If you like this post, don’t forget to subscribe to my RSS feed.

There Are Only Two Roles of Code

All code can be classified into two distinct roles: code that does work (algorithms) and code that coordinates work (coordinators).

The real complexity that gets introduced into a code base is usually directly related to the creation of classes that group both of these roles under one roof.

I’m guilty of it myself.  I would say that 90% of the code I have written does not divide nicely into algorithms and coordinators.

Defining things a bit more clearly

Before I dive into why we should be dividing our code into clear algorithmic or coordinating classes, I want to take a moment to better define what I mean by algorithms and coordinators.

Most of us are familiar with common algorithms in Computer Science like a Bubble Sort or a Binary Search, but what we don’t often realize is that all of our code that does something useful contains within it an algorithm.

What I mean by this is that there is a clear, distinct set of instructions or steps by which some problem is solved or some work is done.  That set of steps does not require external dependencies; it works solely on data, just like a Bubble Sort does not care what it is sorting.

Take a moment to wrap your head around this.  I had to double check myself a couple of times to make sure this conclusion was right, because it is so profound.

It is profound because it means that all the code we write is essentially just as testable, as provable and potentially as dependency free as a common sorting algorithm if only we can find the way to express it so.
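To make that concrete, here is a bubble sort in Python: pure logic over plain data, so testing it requires nothing but inputs and outputs, with no mocks and no setup.

```python
def bubble_sort(items):
    """Classic bubble sort: an algorithm with zero external dependencies.

    Like any pure algorithm, it only transforms the data it is given,
    which is exactly what makes it trivially testable.
    """
    result = list(items)  # work on a copy; don't mutate the caller's list
    n = len(result)
    for i in range(n):
        for j in range(n - 1 - i):
            if result[j] > result[j + 1]:
                result[j], result[j + 1] = result[j + 1], result[j]
    return result


assert bubble_sort([3, 1, 2]) == [1, 2, 3]
```

Any code we manage to express in this dependency-free form inherits the same property: a test is just a call and an assertion.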

What is left over in our program (if we extract out the algorithms) is just glue.

Think of it like a computer.  Computer electronics have two roles: doing work and binding together the stuff that does the work.  If you take out the CPU, the memory and all the other components that actually do some sort of work, you’ll be left with coordinators: wires and buses that bind together the components in the system.

Why dividing code into algorithms and coordinators is important

So now that we understand that code could potentially be divided into two broad categories, the next question of course is why?  And can we even do it?

Let’s address why first.

The biggest benefit to pulling algorithmic code into separate classes from any coordinating code is that it allows the algorithmic code to be free of dependencies.  (Practically all dependencies.)

Once you free this algorithmic code of dependencies you’ll find 3 things immediately happen to that code:

  1. It becomes easier to unit test
  2. It becomes more reusable
  3. Its complexity is reduced

A long time ago before mocks were widely used and IoC containers were rarely used, TDD was hard.  It was really hard!

I remember when I was first standing on street corners proclaiming that all code should be written with TDD and 100% code coverage.  People thought I was pretty crazy at the time, because there really weren’t any mocking frameworks and no IoC containers, so if you wanted to write all your code using a TDD approach, you actually had to separate out your algorithms.  You had to write classes with minimal dependencies if you wanted to be able to truly unit test them.

Then things got easier by getting harder.  Many developers started to realize that the reason why TDD was so hard was because in the real world we usually write code that has many dependencies.  The problem with dependencies is that we need a way to create fake versions of them.  The idea of mocking dependencies became so popular that entire architectures were based on the idea and IoC containers were brought forth.

We, as a development community, essentially swept the crumbs of difficult unit testing under the rug.  TDD and unit testing in general became synonymous with writing good code, but one of the most important values of TDD was left behind: the forced separation of algorithmic code from coordinating code.

TDD got easier, but only because we found a way to solve the problems of dependencies interfering with our class isolation by making it less painful to mock out and fake the dependencies rather than getting rid of them.

There is a better way!

We can still fix this problem, but we have to make a concerted effort to do so.  The current path of least resistance is to just use an IoC container and write unit tests full of mocks that break every time you do all but the most trivial refactoring on a piece of code.

Let me show you a pretty simple example, but one that I think clearly illustrates how code can be refactored to remove dependencies and clearly separate out logic.

Take a look at this simplified calculator class:

    public class Calculator
    {
        private readonly IStorageService storageService;
        private List<int> history = new List<int>();
        private int sessionNumber = 1;
        private bool newSession;

        public Calculator(IStorageService storageService)
        {
            this.storageService = storageService;
        }

        public int Add(int firstNumber, int secondNumber)
        {
            if (newSession)
            {
                sessionNumber++;
                newSession = false;
            }

            var result = firstNumber + secondNumber;
            history.Add(result);

            return result;
        }

        public List<int> GetHistory()
        {
            if (storageService.IsServiceOnline())
                return storageService.GetHistorySession(sessionNumber);

            return new List<int>();
        }

        public int Done()
        {
            if (storageService.IsServiceOnline())
            {
                foreach (var result in history)
                    storageService.Store(result, sessionNumber);
            }

            newSession = true;
            return sessionNumber;
        }
    }

This class does simple add calculations and stores the results in a storage service while keeping track of the adding session.

It’s not extremely complicated code, but it is more than just an algorithm.  The Calculator class here is requiring a dependency on a storage service.

But this code can be rewritten to extract out the logic into another calculator class that has no dependencies and a coordinator class that really has no logic.

    public class Calculator_Mockless
    {
        private readonly StorageService storageService;
        private readonly BasicCalculator basicCalculator;

        public Calculator_Mockless()
        {
            this.storageService = new StorageService();
            this.basicCalculator = new BasicCalculator();
        }

        public int Add(int firstNumber, int secondNumber)
        {
            return basicCalculator.Add(firstNumber, secondNumber);
        }

        public List<int> GetHistory()
        {
            return storageService.GetHistorySession(basicCalculator.SessionNumber);
        }

        public void Done()
        {
            foreach (var result in basicCalculator.History)
                storageService.Store(result, basicCalculator.SessionNumber);

            basicCalculator.Done();
        }
    }

    public class BasicCalculator
    {
        private bool newSession;

        public int SessionNumber { get; private set; }

        public IList<int> History { get; private set; }

        public BasicCalculator()
        {
            History = new List<int>();
            SessionNumber = 1;
        }

        public int Add(int firstNumber, int secondNumber)
        {
            if (newSession)
            {
                SessionNumber++;
                newSession = false;
            }

            var result = firstNumber + secondNumber;
            History.Add(result);

            return result;
        }

        public void Done()
        {
            newSession = true;
        }
    }

Now you can see that the BasicCalculator class has no external dependencies and thus can be easily unit tested.  It is also much easier to tell what it is doing, because it contains all of the real logic, while the Calculator_Mockless class has become just a coordinator, coordinating calls between the two classes.
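Here is the same separation transliterated into Python (my sketch, not code from the post) to show how the algorithm half tests with plain assertions and no mocks at all:

```python
class BasicCalculator:
    """The algorithm half: addition, history, and session tracking.

    No storage service, no interfaces -- nothing to mock.
    """

    def __init__(self):
        self.history = []
        self.session_number = 1
        self._new_session = False

    def add(self, first, second):
        if self._new_session:
            self.session_number += 1
            self._new_session = False
        result = first + second
        self.history.append(result)
        return result

    def done(self):
        self._new_session = True


# Testing needs only inputs and observable state:
calc = BasicCalculator()
assert calc.add(1, 2) == 3
assert calc.add(3, 4) == 7
assert calc.history == [3, 7]
calc.done()
calc.add(0, 0)
assert calc.session_number == 2
```

The coordinator that wires this class to storage would still exist, but it would carry so little logic that testing it with mocks buys almost nothing.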

This is of course a very basic example, but it was not contrived.  What I mean by this is that even though this example is very simple, I didn’t purposely create this code so that I could easily extract out the logic into an algorithm class.

Parting advice

I’ve found that if you focus on eliminating mocks or even just having the mindset that you will not use mocks in your code, you can produce code from the get go that clearly separates algorithm from coordination.

I’m still working on mastering this skill myself, because it is quite difficult to do, but I believe the rewards are very high for those that can do it.  In code where I have been able to separate out algorithm from coordination, I have seen much better designs that were more maintainable and easier to understand.

I’ll be talking about and showing some more ways to do this in my talk at the Warm Crocodile conference next year.

The Purpose of Unit Testing

I was reminded yesterday that there are still many people out there who don’t really understand the purpose of unit testing.

A funny shift happened in the last 5 or so years.

About 5 years ago, when I would suggest TDD or just doing some unit testing when creating code, I would get horrible responses back.  Many developers and managers didn’t understand why unit testing was important and thought it was just extra work.

More recently, when I have heard people talking about unit testing, almost everyone agrees unit testing is a good idea, not because they understand why, but because it is now expected in the programming world.

Progress without understanding is just moving forward in a random direction.


Getting back to the basics

Unit testing isn’t testing at all.

Unit testing, especially test driven development, is a design or implementation activity, not a testing activity.

You get two primary benefits from unit testing, with a majority of the value going to the first:

  1. Guides your design to be loosely coupled and well fleshed out.  If you’re doing test driven development, it limits the code you write to only what is needed and helps you evolve that code in small steps.
  2. Provides fast automated regression for refactors and small changes to the code.

I’m not saying that is all the value, but those are the two most important.

(Unit testing also gives you living documentation about how small pieces of the system work.)

Unit testing forces you to actually use the class you are creating and punishes you if the class is too big and contains more than one responsibility.

By that pain, you change your design to be more cohesive and loosely coupled.

You consider more scenarios your class could face and determine the behavior of those, which drives the design and completeness of your class.

When you are done, you end up with some automated tests that do not ensure the system works correctly, but do ensure the functionality does not change.

In reality, the majority of the value is in the act of creating the unit tests when creating the code.  This is one of the main reasons why it makes no sense to go back and write unit tests after the code has been written.

The flawed thinking

Here are some no-nos that indicate you don’t understand unit testing:

  • You are writing the unit tests after the code is written and not during or before.
  • You are having someone else write unit tests for your code.
  • You are writing integration or system tests and calling them unit tests just because they directly call methods in the code.
  • You are having QA write unit tests because they are tests after all.

Unit tests are a lot of work to write.  If you wanted to cover an entire system with unit tests with a decent amount of code coverage, you are talking about a huge amount of work.

If you are not getting the first primary value of unit testing, improving your design, you are wasting a crap load of time and money writing unit tests.

Honestly, what do you think taking a bunch of code you already wrote or someone else did and having everyone start writing unit tests for it will do?

Do you think it will improve the code magically just by adding unit tests without even changing the code?

Perhaps you think the value of having regression is so high that it will justify this kind of a cost?

I’m not saying not to add unit tests to legacy code.  What I am saying is that when you add unit tests to legacy code, you better be getting your value out of it, because it is hard work and costs many hours.

When you touch legacy code, refactor that code and use the unit tests to guide that refactored design.

Don’t assume unit tests are magic.


Unit tests are like guidelines that help you cut straight.  It is ridiculous to try to add guidelines to a wood-working project after you have already cut the wood.

The True Cost of Quality Code

I saw a tweet by Robert Martin (UncleBobMartin) this weekend that said:

The problem is that people think that code quality is a short term loss but a long term benefit. They’re wrong about the first part.

It is kind of funny, because I had just had a conversation with a friend about whether or not actually doing test driven development is too costly.


Both viewpoints can be correct

To be honest, I have seen it argued both ways, and both can be correct:

  • Code quality is a short term loss
  • Code quality is not a short term loss

I have found that often, out of seeming contradictions comes the most wisdom.  There is a piece missing from the equation here.  The piece that is missing is the experience of the developer with code quality best practices like test driven development, self-documenting code, SOLID principles, etc.

If you take an average team of developers that have some familiarity with these kinds of code quality best practices and require that they consciously follow them with every line of code they write, you will end up with a short term loss in productivity, but a long term gain.

If you take a team of developers who are used to writing tests first and applying other principles of code quality, and set them loose on a project, having them use those principles, you will end up with both a short term and a long term benefit.

A complex equation

There is an equation which can be used to demonstrate the benefit of quality code over time.

Let Overhead represent the overhead it takes for a developer to write quality code vs. fast code, expressed as a multiplier: 100% would be the same time, 120% would be a 20% overhead, etc.

Let QSavings represent the fraction of time that remains after the savings from having quality code (savings from maintainability, fewer bugs, readability, etc.): 100% would be no savings, 80% would be a 20% savings.

Let PTime represent the total length of the project in weeks.

PTime – (PTime * Overhead * QSavings) = Saved Time

Let’s look at a 6 month project where developers have a 200% overhead for writing quality code, and the quality code gives a savings of 60% (so QSavings = 40%).

24 – (24 *  2 * .4) = 4.8 weeks

On average, I would say that developers who are not used to doing things like writing tests first will incur a 2x overhead when writing quality code.  Over time this number actually diminishes, so long as they are strict about writing quality code.

If we take that same 6 month project and use developers that have only a 120% overhead for writing quality code, and the quality savings bumps up to around 70% (QSavings = 30%) because of their experience, we really save some time.

24 – (24 * 1.2 * .3) = 15.36 weeks

Those numbers seem a little crazy.  Can a team of developers that are good at writing quality code really take a project down from 24 weeks to around 9 weeks?  Yes, indeed, I have seen it happen many times.  Those developers just have to be about 3x as effective as normal.  I have seen it said that the difference between a great developer and an average one can be a 10 to 1 ratio; 3x should be easily achieved.
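The arithmetic behind both scenarios reduces to one small function, so it is easy to check (the inputs are the illustrative estimates above, not measured data):

```python
def saved_weeks(project_weeks, overhead, quality_time_fraction):
    """Saved Time = PTime - (PTime * Overhead * QSavings).

    overhead: 2.0 means quality code takes twice as long to write.
    quality_time_fraction: 0.4 means quality code leaves only 40%
    of the total work (i.e., a 60% savings).
    """
    return project_weeks - (project_weeks * overhead * quality_time_fraction)


inexperienced = saved_weeks(24, 2.0, 0.4)  # team new to quality practices
experienced = saved_weeks(24, 1.2, 0.3)    # team practiced at them
```

`inexperienced` comes out at about 4.8 weeks saved and `experienced` at about 15.36, matching the two worked examples.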

Now I am using some fake numbers here in the equation, but let me justify a few of them for you.

First, the most controversial.  60-70% savings for having quality code.  Is that number real?  I tend to think it is.  When you look at the cost of fixing a bug at various stages of software development, it becomes pretty apparent that not producing a bug in the first place, or catching it when writing the code, can have savings that are easily 20x.

The typical overhead I have seen for developers that have never written unit tests is around 300%.  As they become more familiar with unit testing, mocks, and other techniques this drops to around 200%.  It is not until several years of practice that someone can achieve a small overhead like 120%, but it is definitely possible.

So, although the numbers are made up, they come from estimates based on my personal experience and what I have heard from others.

Technique takes practice

When I was a kid I remember my dad teaching me the right way to hold a baseball bat.  I didn’t like it because it wasn’t what I was used to.  It even made me a worse batter because the wrong way, that I was used to, was easier for me.  I had to practice the right way for a long time before it got me past the limit of how good I could get doing it the wrong way.

I had the same experience in high school as a pole-vaulter using a straight pole vs using a flexible pole.

I still type the wrong way, and because of that I will never be able to type 120 words per minute, like I would be able to if I spent the time to relearn my typing.

Almost all professional bowlers use a hook.  I don’t know how to bowl with a hook, but I can perform decently without one.  If I tried to bowl with a hook, I would be worse at first.

I’ve heard of golfers hitting a wall and having to completely re-learn their golf swing to get past it.

Why would we think it is any different with programming?  Yes, a person can be fast and pretty good at writing code the “wrong way,” but they will never achieve the speed and accuracy that are possible by learning the correct techniques.  It takes time and practice for this to be achieved.

As always, you can subscribe to this RSS feed to follow my posts on Making the Complex Simple.  Feel free to check out ElegantCode.com where I post about the topic of writing elegant code about once a week.  Also, you can follow me on twitter here.