Tag Archives: unit testing

The More I Know, the Less I Know

I used to be very confident in my abilities as a software developer.

I used to be able to walk up to a group of software developers and tell them exactly what they were doing wrong and exactly what was the “right” way to do things.

I used to be sure of this myself.


It wasn’t even that long ago.  Heck, when I look at the blog posts I wrote 3 years ago I have to slap myself upside my head in realization of just how stupid I was.

Not only was my writing bad, but some of my thoughts seem so immature and uneducated that it feels like a completely different person wrote them.

And I wrote those posts back when I knew it all.

The more I learn, the less I know

Lately I’ve been running into situations more and more often where I don’t have a good answer for problems.

I’ve found myself much more often giving 3 pieces of advice attached with pros and cons rather than giving a single absolute—like I would have done perhaps 3 years ago.

I’ve been finding, as I learn more and more (the past 3 years have been a period of accelerated growth for me), that I am becoming less and less sure of what I know and more and more convinced that there is no such thing as a set of best practices.

I’ve even spent some time postulating on whether or not commonly held beliefs of best practices would be thrown completely out the window given a significant enough motivation to succeed.

My point is that the more doors I open, the more aware I become of the multitude of doors that exist.


It is not just the realization of what I don’t know, but also the realization of the weakness of the foundation I am already standing on.

Taking it out of the metaphysical

Let’s drop down out of the philosophical discussion for a bit and talk about a real example.

Perhaps the biggest quandary I struggle with is whether or not to unit test or practice TDD and its variants.

The 3 years ago know-it-all version of me would tell you emphatically “yes, it is a best practice and you should definitely do it all the time.”

The more pragmatic version of me today says, in a much more uncertain tone, “perhaps.”

I don’t want to delve into the topic in this post since I am sure I could write volumes on my ponderings in this area, but I’ve come to a conclusion that it makes sense to write unit tests for code that has few or no dependencies and that it does not make sense to do so for other code.

From that I’ve also derived that I should strive to write code that separates algorithms from coordinators.

I still feel even today that my advice is not wholly sound.  I am convinced it is a better approach than 100% TDD and unit tests, or no TDD and no unit tests, but I am not convinced there isn’t a deeper understanding and truth that supersedes my current thoughts on the matter.

As you can imagine this is quite frustrating and unsettling.

Silver bullets and best practices

What I am coming to realize more and more is that there are no silver bullets and, more surprisingly, that there are not even any such things as best practices.


Now I’ve heard the adage of there being no silver bullets so many times that it makes me physically sick when I hear someone say it, because it is so cliché.

But, I’ve had a hard time swallowing the no best practices pill.

I feel like when I abandon this ship, I am left sailing on a life raft in the middle of a vast ocean with no sails and no sense of which direction to go.

A cornerstone of my development career has been the learning, applying and teaching of best practices.  If these things don’t exist, have I just been peddling snake oil and drinking it myself?


Best practices are simply concrete applications of abstract principles in software development that we cannot directly grasp or see clearly enough to absolutely identify.

Breaking this down a bit, what I am saying is that best practices are not the things themselves to seek, but through the pursuit of best practices we can arrive at a better understanding of the principles that actually are unchanging and absolute.

Best practices are optimal strategies for dealing with the problems of software development based on a particular context.  That context is primarily defined by:

  • Language and technology choice
  • Meta-game (what other software developers and perceived best practices are generally in place, and how software development is viewed and understood at a given time)
  • Individual skill and growth (what keeps me straight might slow you down; it depends on where you are in your journey)

There is a gentle balance between process and pragmatism.

When you decide to make your cuts without the cutting guide, you can go faster, but only if you know exactly what you are doing.

Where I am now

Every time I open my mouth I feel like I am spewing a bunch of bull crap.

I don’t trust half of what I say, because I know so much of it is wrong.

Yet I have perhaps 10 times more knowledge, and quite a bit more experience, in software development than I did just 3 years ago.

So what gives?

Overall, I think I am giving better advice based on more practical experience and knowledge; it is just that I am far more aware of my own shortcomings and how stupid I am even today.

I have the curse and blessing of knowing that only half of what I am saying has any merit and the other half is utter crap.

Much of this stems from the realization that there are no absolute right ways to do things and no best answers for many of the problems of software development.

I used to be under the impression that someone out there had the answer to the question of what is the right way to develop software.


I used to think that I was picking up bits of knowledge, clues that were unraveling the mystery of software development, and that someday I would have all the pieces of understanding and could tell others exactly how they should be developing software.

What I found instead was that not only does nobody know the “right” way to develop software, but that it is perhaps an unknowable truth.

The best we can do is try to learn from obvious mistakes we have made before, start with a process that has had some level of success, and modify what we do based on our observations.

We can’t even accurately measure anything about software development, and to think we can is just foolishness.

From story points to velocity to lines of code per defect, all of those things are not only impossible to measure accurately, but they don’t really tell us whether we are doing better or not.

So, what is my point?

My point is simple.

I have learned that not only do I not have all the answers, but I never will.

What I have learned is always subject for debate and is very rarely absolute, so I should have strong convictions, but hold onto them loosely.

And most importantly, don’t be deceived into thinking there is a right way to develop software that can be known.  You can improve the way you develop software and your software development skills, but it will never be based on an absolute set of rules that come down from some magical process or technology.


There Are Only Two Roles of Code

All code can be classified into two distinct roles: code that does work (algorithms) and code that coordinates work (coordinators).

The real complexity that gets introduced into a code base is usually directly related to creating classes that combine both of these roles under one roof.

I’m guilty of it myself.  I would say that 90% of the code I have written is not nicely divided into algorithms and coordinators.

Defining things a bit more clearly

Before I dive into why we should be dividing our code into clear algorithmic or coordinating classes, I want to take a moment to better define what I mean by algorithms and coordinators.

Most of us are familiar with common algorithms in Computer Science like a Bubble Sort or a Binary Search, but what we don’t often realize is that all of our code that does something useful contains within it an algorithm.

What I mean by this is that there is a clear, distinct set of instructions or steps by which some problem is solved or some work is done.  That set of steps does not require external dependencies; it works solely on data, just like a Bubble Sort does not care what it is sorting.

Take a moment to wrap your head around this.  I had to double check myself a couple of times to make sure this conclusion was right, because it is so profound.

It is profound because it means that all the code we write is essentially just as testable, as provable and potentially as dependency free as a common sorting algorithm if only we can find the way to express it so.

What is left over in our program (if we extract out the algorithms) is just glue.

Think of it like a computer.  Computer electronics have two roles: doing work and binding together the stuff that does the work.  If you take out the CPU, the memory and all the other components that actually do some sort of work, you’ll be left with the coordinators: wires and buses that bind together the components in the system.

Why dividing code into algorithms and coordinators is important

So now that we understand that code could potentially be divided into two broad categories, the next question of course is why?  And can we even do it?

Let’s address why first.

The biggest benefit to pulling algorithmic code into separate classes from any coordinating code is that it allows the algorithmic code to be free of dependencies.  (Practically all dependencies.)

Once you free this algorithmic code of dependencies you’ll find 3 things immediately happen to that code:

  1. It becomes easier to unit test
  2. It becomes more reusable
  3. Its complexity is reduced

A long time ago, when mocks were rarely used and IoC containers were almost unheard of, TDD was hard.  It was really hard!

I remember when I was first standing on the street corners proclaiming that all code should be TDD with 100% code coverage.  People thought I was pretty crazy at the time, because there really weren’t any mocking frameworks and no IoC containers, so if you wanted to write all your code using a TDD approach, you actually had to separate out your algorithms.  You had to write classes that had minimal dependencies if you wanted to be able to truly unit test them.

Then things got easier by getting harder.  Many developers started to realize that the reason why TDD was so hard was because in the real world we usually write code that has many dependencies.  The problem with dependencies is that we need a way to create fake versions of them.  The idea of mocking dependencies became so popular that entire architectures were based on the idea and IoC containers were brought forth.

We, as a development community, essentially swept the crumbs of difficult unit testing under the rug.  TDD and unit testing in general became synonymous with writing good code, but one of the most important values of TDD was left behind: the forced separation of algorithmic code from coordinating code.

TDD got easier, but only because we found a way to solve the problems of dependencies interfering with our class isolation by making it less painful to mock out and fake the dependencies rather than getting rid of them.

There is a better way!

We can still fix this problem, but we have to make a concerted effort to do so.  The current path of least resistance is to just use an IoC container and write unit tests full of mocks that break every time you do all but the most trivial refactoring on a piece of code.
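To make that brittleness concrete, here is a sketch of the kind of mock-heavy test I mean, written against the Calculator class shown below.  Moq and xUnit are assumed here purely for illustration:

```csharp
using Moq;
using Xunit;

public class CalculatorMockTests
{
    [Fact]
    public void Done_StoresEachResultInTheCurrentSession()
    {
        // Fake out the dependency so the class can be constructed at all.
        var storage = new Mock<IStorageService>();
        storage.Setup(s => s.IsServiceOnline()).Returns(true);

        var calculator = new Calculator(storage.Object);
        calculator.Add(1, 2);
        calculator.Done();

        // This assertion pins down *how* Calculator talks to its
        // dependency.  Rename Store, batch the calls, or reorder them,
        // and the test breaks even though the observable behavior of
        // the calculator is unchanged.
        storage.Verify(s => s.Store(3, 1), Times.Once());
    }
}
```

Notice that the test knows more about the conversation between the two objects than about the result the user cares about.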

Let me show you a pretty simple example, but one that I think clearly illustrates how code can be refactored to remove dependencies and clearly separate out logic.

Take a look at this simplified calculator class:

 public class Calculator
 {
     private readonly IStorageService storageService;
     private List<int> history = new List<int>();
     private int sessionNumber = 1;
     private bool newSession;

     public Calculator(IStorageService storageService)
     {
         this.storageService = storageService;
     }

     public int Add(int firstNumber, int secondNumber)
     {
         if (newSession)
         {
             sessionNumber++;
             newSession = false;
         }

         var result = firstNumber + secondNumber;
         history.Add(result);

         return result;
     }

     public List<int> GetHistory()
     {
         if (storageService.IsServiceOnline())
             return storageService.GetHistorySession(sessionNumber);

         return new List<int>();
     }

     public int Done()
     {
         if (storageService.IsServiceOnline())
         {
             foreach (var result in history)
                 storageService.Store(result, sessionNumber);
         }

         newSession = true;
         return sessionNumber;
     }
 }

This class does simple add calculations and stores the results in a storage service while keeping track of the adding session.

It’s not extremely complicated code, but it is more than just an algorithm.  The Calculator class here is requiring a dependency on a storage service.

But this code can be rewritten to extract out the logic into another calculator class that has no dependencies and a coordinator class that really has no logic.

 public class Calculator_Mockless
 {
     private readonly StorageService storageService;
     private readonly BasicCalculator basicCalculator;

     public Calculator_Mockless()
     {
         this.storageService = new StorageService();
         this.basicCalculator = new BasicCalculator();
     }

     public int Add(int firstNumber, int secondNumber)
     {
         return basicCalculator.Add(firstNumber, secondNumber);
     }

     public List<int> GetHistory()
     {
         return storageService.GetHistorySession(basicCalculator.SessionNumber);
     }

     public void Done()
     {
         foreach (var result in basicCalculator.History)
             storageService.Store(result, basicCalculator.SessionNumber);

         basicCalculator.Done();
     }
 }

 public class BasicCalculator
 {
     private bool newSession;

     public int SessionNumber { get; private set; }

     public IList<int> History { get; private set; }

     public BasicCalculator()
     {
         History = new List<int>();
         SessionNumber = 1;
     }

     public int Add(int firstNumber, int secondNumber)
     {
         if (newSession)
         {
             SessionNumber++;
             newSession = false;
         }

         var result = firstNumber + secondNumber;
         History.Add(result);

         return result;
     }

     public void Done()
     {
         newSession = true;
     }
 }

Now you can see that the BasicCalculator class has no external dependencies and thus can be easily unit tested.  It is also much easier to tell what it is doing because it contains all of the real logic, while the Calculator class has now become just a coordinator, coordinating calls between the two classes.
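As a quick sketch of what that buys you, the logic can now be checked with plain state-based assertions and no test doubles at all.  The condensed BasicCalculator below restates the class above so the example stands alone; the session bookkeeping is my reading of its intent:

```csharp
using System;
using System.Collections.Generic;

// Condensed restatement of BasicCalculator so this example is
// self-contained; behavior assumed: Add records each result, and
// the session number advances on the first Add after Done.
public class BasicCalculator
{
    private bool newSession;
    public int SessionNumber { get; private set; } = 1;
    public IList<int> History { get; } = new List<int>();

    public int Add(int firstNumber, int secondNumber)
    {
        if (newSession) { SessionNumber++; newSession = false; }
        var result = firstNumber + secondNumber;
        History.Add(result);
        return result;
    }

    public void Done() => newSession = true;
}

public static class BasicCalculatorChecks
{
    public static void Main()
    {
        var calc = new BasicCalculator();

        // No mocks, no fakes, no container setup: just inputs and state.
        if (calc.Add(2, 3) != 5) throw new Exception("Add failed");
        if (!calc.History.Contains(5)) throw new Exception("History failed");

        calc.Done();
        calc.Add(1, 1);
        if (calc.SessionNumber != 2) throw new Exception("Session failed");

        Console.WriteLine("All checks passed");
    }
}
```

Every check here is about data going in and state coming out, which is exactly what makes algorithm classes cheap to test.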

This is of course a very basic example, but it was not contrived.  What I mean by this is that even though this example is very simple, I didn’t purposely create this code so that I could easily extract out the logic into an algorithm class.

Parting advice

I’ve found that if you focus on eliminating mocks or even just having the mindset that you will not use mocks in your code, you can produce code from the get go that clearly separates algorithm from coordination.

I’m still working on mastering this skill myself, because it is quite difficult to do, but I believe the rewards are very high for those that can do it.  In code where I have been able to separate out algorithm from coordination, I have seen much better designs that were more maintainable and easier to understand.

I’ll be talking about and showing some more ways to do this in my talk at the Warm Crocodile conference next year.

Back to Basics: What is an Interface?

This is the first part of my Back to Basics series.

One of the basics I feel we really need to get back to is the use and understanding of the value of interfaces.

In languages like C# and Java, interfaces are extremely common.  They are much more commonly used than they were 5-10 years ago.

But a question we have to ask ourselves is, “are we using them correctly?”

What problem does the interface solve?

I want you to take a second and clear your head of how you are currently using interfaces.

I want you to pretend for a moment that you don’t know what an interface is.


The basic problem an interface is trying to solve is to separate how we use something from how it is implemented.

Why do we want to separate the use from the implementation?

So that we can write code that can work with a variety of different implementations of some set of responsibilities without having to specifically handle each implementation.

To put this more simply, this means that if we have a Driver class it should be able to have a method Drive that can be used to drive any car, boat, or other kind of class that implements the IDriveable interface.

The Driver class should not have to have a DriveBoat, DriveCar or DriveX methods for each kind of class that supports the same basic operations that are needed for it to be driven.
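Sketched in C# (all of these names are illustrative, not from any real library):

```csharp
public interface IDriveable
{
    void Accelerate(int amount);
    void Steer(int degrees);
}

public class Car : IDriveable
{
    public int Speed { get; private set; }
    public void Accelerate(int amount) => Speed += amount; // engage the engine
    public void Steer(int degrees) { /* turn the wheels */ }
}

public class Boat : IDriveable
{
    public int Speed { get; private set; }
    public void Accelerate(int amount) => Speed += amount; // throttle the motor
    public void Steer(int degrees) { /* turn the rudder */ }
}

public class Driver
{
    // One generic Drive method works for anything drivable;
    // no DriveCar or DriveBoat variants are needed.
    public void Drive(IDriveable vehicle)
    {
        vehicle.Accelerate(10);
        vehicle.Steer(-5);
    }
}
```

The Driver never learns whether it is driving a Car or a Boat; it only knows the contract.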


Interfaces are trying to solve a very specific problem by allowing us to interact with objects based on what they do, not how they do it.

Interfaces are contracts

Interfaces allow us to specify that a particular class meets certain expectations that other classes can rely on.

If we have a class that implements an interface, we can be sure that it will support all the methods that are defined in that interface.

At first glance interfaces seem to be similar to concrete inheritance, but there is a key difference.

Concrete inheritance says Car is an Automobile, while an interface says Car implements the Drivable interface.

When a class implements an interface, it does not mean that class IS that interface.  For this reason interfaces that completely describe the functionality of a class are usually wrong.

A class can implement multiple interfaces because each interface only talks about a particular contract that class is able to fulfill.

Interfaces are always implemented by more than one class

You might be saying “no they’re not, I have a class here that has an interface that no other class implements.”

To that I say, “you are doing it wrong.”

But, don’t worry, you are not alone.  I am doing it wrong also.  Many of us are not using interfaces correctly anymore, but are using them instead because we are under the impression that we should never use a concrete class directly.

We are afraid of tightly coupling our application, so instead we are creating interfaces for every class whether or not we need an interface.

There are some really good reasons why I say that interfaces are always implemented by more than one class.

Remember how we talked about how interfaces are designed to solve a particular problem?

In my example, I talked about how the Driver class shouldn’t have to have a method of each kind of class it can drive, instead it should depend on an IDriveable interface and have one generic Drive method that can drive anything that implements IDrivable.

Most of us accept the YAGNI principle which says “You Ain’t Gonna Need It.”  If we only have a Car class and we don’t have any other classes that need to be driven by the Driver class, we don’t need an interface.  YAGNI!

At some point we may later add a Boat class.  Only at that point in time do we actually have a problem that the interface will solve.  Up until that point adding the interface is anticipating a future problem to solve.

If you think you are good at anticipating when you will need an interface, I want you to do a little exercise.  Go into your codebase and count all the interfaces you have.  Then count all the classes that implement those interfaces.  I bet the ratio is pretty close to 1 to 1.

But how will I test?  How will I use dependency injection?

These two reasons are probably the most justified causes for incorrectly using interfaces.

I am guilty of justifying the creation of an interface so that I can have something to mock, and I am guilty of creating an interface just for my dependency injection framework, but it doesn’t make it right.

I can’t give you an easy answer here and say that I can solve your unit testing or dependency injection problems without an interface, but I can talk about why we shouldn’t be bending the source code to fit the tool or methodology.

I talked about the purpose of unit testing before, and one of the key benefits being that unit tests help guide your design.  Unit tests help us to decouple our application and consolidate our classes to single responsibilities by making it really painful to try and unit test classes with multiple dependencies.

Interfaces are kind of a shortcut that allows us to get rid of having lots of dependencies in a class.

When we turn a reference to a concrete class into an interface reference, we are cheating the system.  We are making it easier to write a unit test by pretending that our class is decoupled because it references an interface instead of a concrete class.  In reality it is not decoupled; it is actually more coupled, because our class is coupled to an interface which is coupled to a class.  All we did was add a level of indirection.

Dependency injection promotes the same problem of interface abuse.  At least it does in the way it is used in C# and Java today.  Creating an interface solely for the purpose of being able to inject the only implementation of that interface into a class creates an unnecessary level of indirection and needlessly slows down the performance of our application.

Don’t get me wrong.  Dependency injection is good.  I’ll save the details for another post, but I believe dependency injection’s real benefit is when it is used to control which implementation of an interface is used, not when there is only one implementation of an interface.

Ultimately, I can’t give you a good answer of how do you unit test or use dependency injection without abusing interfaces.  I think you can reduce the abuse by choosing to split apart classes and actually reduce dependencies rather than simply creating an interface and injecting it into the class, but you are still going to have the problem that a Car has an Engine and if you want to unit test the car, you are either going to have to use the real engine or find a way to mock it.

The key problem here is that interfaces are part of the language, but unit testing and dependency injection are not.  We are trying to make them fit in with the language by using a trick.  The trick is we create an interface to provide a seam between classes.  The problem is that we dilute the potency of an interface by doing so.  What we really need is a language supported seam to allow us to easily replace implementations of concrete classes at runtime.

The Purpose of Unit Testing

I was reminded yesterday that there are still many people out there who don’t really understand the purpose of unit testing.

A funny shift happened in the last 5 or so years.

About 5 years ago, when I would suggest TDD or just doing some unit testing when creating code, I would get horrible responses back.  Many developers and managers didn’t understand why unit testing was important and thought it was just extra work.

More recently, when I have heard people talking about unit testing, almost everyone agrees that unit testing is a good idea, not because they understand why, but because it is now expected in the programming world.

Progress without understanding is just moving forward in a random direction.


Getting back to the basics

Unit testing isn’t testing at all.

Unit testing, especially test driven development, is a design or implementation activity, not a testing activity.

You get two primary benefits from unit testing, with a majority of the value going to the first:

  1. Guides your design to be loosely coupled and well fleshed out.  If doing test driven development, it limits the code you write to only what is needed and helps you to evolve that code in small steps.
  2. Provides fast automated regression for refactors and small changes to the code.

I’m not saying that is all the value, but those are the two most important.

(Unit testing also gives you living documentation about how small pieces of the system work.)

Unit testing forces you to actually use the class you are creating and punishes you if the class is too big and contains more than one responsibility.

By that pain, you change your design to be more cohesive and loosely coupled.

You consider more scenarios your class could face and determine the behavior of those, which drives the design and completeness of your class.

When you are done, you end up with some automated tests that do not ensure the system works correctly, but do ensure the functionality does not change.

In reality, the majority of the value is in the act of creating the unit tests when creating the code.  This is one of the main reasons why it makes no sense to go back and write unit tests after the code has been written.

The flawed thinking

Here are some no-nos that indicate you don’t understand unit testing:

  • You are writing the unit tests after the code is written and not during or before.
  • You are having someone else write unit tests for your code.
  • You are writing integration or system tests and calling them unit tests just because they directly call methods in the code.
  • You are having QA write unit tests because they are tests after all.

Unit tests are a lot of work to write.  If you wanted to cover an entire system with unit tests with a decent amount of code coverage, you are talking about a huge amount of work.

If you are not getting the first primary value of unit testing, improving your design, you are wasting a crap load of time and money writing unit tests.

Honestly, what do you think taking a bunch of code you already wrote or someone else did and having everyone start writing unit tests for it will do?

Do you think it will improve the code magically just by adding unit tests without even changing the code?

Perhaps you think the value of having regression is so high that it will justify this kind of a cost?

I’m not saying not to add unit tests to legacy code.  What I am saying is that when you add unit tests to legacy code, you better be getting your value out of it, because it is hard work and costs many hours.

When you touch legacy code, refactor that code and use the unit tests to guide that refactored design.

Don’t assume unit tests are magic.


Unit tests are like guides that help you cut straight.  It is ridiculous to try to add guides to a woodworking project after you have already cut the wood.

Living Dangerously: Refactoring without a Safety Net

It’s usually a good idea to have unit tests in place before refactoring some code.

I’m going to go against the grain here today though and tell you that it is not always required.

Many times code that should be refactored doesn’t get refactored due to the myth that you must always have unit tests in place before refactoring.

In many cases the same code stays unimproved over many revisions because the effort of creating the unit tests needed to refactor it is too high.

I think this is a shame because it is not always necessary to have unit tests in place before refactoring.


Forgoing the safety net

If you go to the circus, you will notice that some acts always have a safety net below because the stunt is so dangerous that there is always a chance of failure.

You’ll also notice that some acts don’t have a safety net because even though there is risk of danger, it is extremely small, because of the training of the performers.

Today I’m going to talk about some of the instances where you don’t necessarily need to have a safety net in place before doing the refactor.

Automatic refactoring

This is an easy one that should be fairly obvious.  If you use a modern IDE like Visual Studio, Eclipse, or IntelliJ, you will no doubt have seen what I call “right-click refactor” options.

Any of these automatic refactors are pretty much safe to do anytime without any worry of changing functionality.  These kinds of automated refactors simply apply an algorithm to the code to produce the desired result and in almost all cases do not change functionality.

These refactoring tools you can trust because there is not a chance for human error.

Any time you have the option of using an automatic refactoring, do it!  It just makes sense, even if you have unit tests.  I am always surprised when I pair up with someone and they are manually refactoring things like “extract method” or “rename.”

Most of the time everything you want to do to some code can be found in one of the automatic refactoring menus.
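As an example, here is what a “right-click” extract-method refactor does mechanically; the tool produces the “after” shape from the “before” with no chance of a typo (the names here are hypothetical):

```csharp
using System;

public class UserRegistration
{
    // Before: the validation was buried inline in Register.
    //
    // public void Register(string email)
    // {
    //     if (string.IsNullOrEmpty(email) || !email.Contains("@"))
    //         throw new ArgumentException("Invalid email", nameof(email));
    //     // ...save the user...
    // }

    // After "extract method": identical behavior, clearer structure.
    public void Register(string email)
    {
        ValidateEmail(email);
        // ...save the user...
    }

    public static void ValidateEmail(string email)
    {
        if (string.IsNullOrEmpty(email) || !email.Contains("@"))
            throw new ArgumentException("Invalid email", nameof(email));
    }
}
```

Because the transformation is an algorithm applied to the code, both versions behave the same for every input.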

Small step refactors

While not as safe as automatic refactors, if you have a refactor that is a very small step, there is a much higher chance your brain can understand it and prevent any side effects.

A good example of this would be my post on refactoring the removal of conditions.

The general idea is that if you can make very simple small steps that are so trivial that there is almost no chance of mistake, then you can end up making a big refactor as the net effect of those little changes.

This one is a judgment call.  It is up to you to decide if what you are doing is a small step or not.

I do find that if I want to do a refactor that isn’t a small step refactor, I can usually break it down into a series of small steps that I can feel pretty confident in.  (Most of the time these will be automated refactors anyway.)
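As a sketch of how small steps add up, here is a conditional removed in two trivial moves: first mirror each branch in a lookup table, then swap the chain for a single lookup.  Each step is small enough to verify by inspection (hypothetical code):

```csharp
using System.Collections.Generic;

public class PricingBefore
{
    // Original: a chain of conditionals.
    public decimal DiscountFor(string tier)
    {
        if (tier == "gold") return 0.20m;
        if (tier == "silver") return 0.10m;
        return 0m;
    }
}

public class PricingAfter
{
    // Step 1: copy each branch into a lookup table.
    private static readonly Dictionary<string, decimal> discounts =
        new Dictionary<string, decimal>
        {
            { "gold", 0.20m },
            { "silver", 0.10m },
        };

    // Step 2: replace the chain with one lookup plus a default.
    public decimal DiscountFor(string tier) =>
        discounts.TryGetValue(tier, out var discount) ? discount : 0m;
}
```

The net effect looks like a big refactor, but no single step was large enough to hide a mistake.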

Turning methods into classes

I hate huge classes.  Many times everyone is afraid to take stuff out of a huge class because it is likely to break and it would take years to write unit tests for that class.

One simple step, which greatly improves the architecture and lets you eventually create unit tests, is to take a big ol’ chunk of that class, move it to a new class, and keep all the logic in there exactly how it is.

It’s not always totally clean, you might have to pass in some dependencies to the new method or new class constructor, but if you can do it, it can be an easy and safe refactor that will allow you to write unit tests for the new class.
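A sketch of that move with made-up names: the pricing math is lifted out of a sprawling order class verbatim, so the new class can be unit tested without touching the rest:

```csharp
// Before: this calculation lived inside a huge OrderProcessor class
// alongside dozens of unrelated responsibilities, so testing it meant
// constructing the whole class and all of its dependencies.

// After: the same lines, moved as-is into their own class.
public class PriceCalculator
{
    public decimal CalculateTotal(decimal subtotal, decimal taxRate)
    {
        var tax = subtotal * taxRate;
        return subtotal + tax;
    }
}
```

The logic itself is untouched, which is what keeps the refactor safe; the payoff is that PriceCalculator now has no dependencies to fake.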

Obviously this one is slightly more dangerous than the other two I have mentioned before, but it also is one that has a huge “bang for your buck.”

Unit tests, or test code themselves

Another obvious one.  Unless you are going to write meta-unit tests, you are going to have to live a little dangerously on this one.  You really have no choice.

I think everyone will agree that refactoring unit tests is important though.   So, how come no one is afraid to refactor unit tests?

I only include this example to make the point that you shouldn’t be so scared to refactor code without unit tests.  You probably do it pretty frequently with your unit tests.

I’m not advocating recklessness here

I know some of you are freaking out right now.

Be assured, my message is not to haphazardly refactor code without unit tests.  My message is simply to use temperance when considering a refactor.

Don’t forgo a refactor just because you are following a hard and fast rule that you need unit tests first.

Instead, I am suggesting that some refactorings are so trivial and safe that if it comes between the choice of leaving the code as it is because unit testing will take too long, or to refactor code without a safety net, don’t be a… umm… pu… wimp.  Use your brain!

Things that will bite you hard

There are a few things to watch out for, even with automated refactorings.  Even those can fail and cause all kinds of problems for you.

Most of these issues won’t exist in your code base unless you are doing some crazy funky stuff.

  • If you’re using dynamic in C#, or some kind of PInvoke, unsafe (pointer manipulation) or COM interop, all bets are off on things like rename.
  • Reflection.  Watch out for this one.  This can really kick you in the gonads.  If you are using reflection, changing a method name or a type could cause a failure that is only going to be seen at runtime.
  • Code generation.  Watch out for this one also.  If generated code is depending on a particular implementation of some functionality in your system, refactoring tools won’t have any idea.
  • External published interfaces.  This goes without saying, but it is so important that I will mention it here.  Watch out for other people using your published APIs.  Whether you have unit tests or not, refactoring published APIs can cause you a whole bunch of nightmares.
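The reflection trap in particular deserves a concrete example (names here are hypothetical).  A rename refactor happily updates every compiled reference, but it has no idea about a method name hiding inside a string:

```java
import java.lang.reflect.Method;

class ReportGenerator {
    public String buildReport() { return "report"; }
}

class ReflectionCaller {
    // If someone renames buildReport() with a refactoring tool, every
    // normal call site gets updated -- but the string passed in here
    // does not, and the failure only shows up at runtime.
    static Object callByName(Object target, String methodName) {
        try {
            Method m = target.getClass().getMethod(methodName);
            return m.invoke(target);
        } catch (ReflectiveOperationException e) {
            // Discovered only at runtime; the compiler never complained.
            throw new IllegalStateException("No such method: " + methodName, e);
        }
    }
}
```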

This list isn’t to scare you off from refactoring, but if you know any of the things in this list are in your code base, check before you do the refactor.  Make sure that the code you are refactoring won’t be affected by these kinds of things.

When Process Improvements Don’t Make Sense

Since I joined the team at TrackAbout, I have been rethinking some of my ideas about process improvement.

I have always been a big advocate of test driven development, static code analysis, and other best practices in software development.

A large portion of time at previous jobs I have either spent time explicitly or implicitly in the role of improving the development process.  It’s not really a fun place to be, but I’ve always felt a compelling moral conviction to take up that job and make things right.


When process improvement doesn’t work

The team I am working on at TrackAbout is full of developers who are really some of the brightest I have worked with.  They not only have vast experience, but a depth of knowledge, especially in best practices.

Applying the same blanket development process improvement steps just doesn’t seem to make sense here.

What I am talking about are things like:

  • Code must have 90% code coverage with unit tests.
  • FxCop must have no new warnings.
  • You can’t be working on two backlogs at the same time.
  • Backlogs must be broken down into tasks of no more than x hours.

On most of the teams I have worked, those types of “rules” were things that really helped improve the code quality the team was producing.

For the first time ever I am feeling like introducing “rules” into a development team would actually hurt the productivity and quality of the code.

Very good developers seem to have the judgment sense to make the right kind of decisions about how much code coverage they need to have and which kind of static analysis rules are important.

No set of development process rules can seem to capture the value of that kind of judgment.

I feel like where I work everyone is conscientious about every line of code they write; they don’t have to be told to care about it.  It really is something I’ve never experienced at any other place.

Most places I have worked there might have been 1 or 2 other kindred spirits with my passion for “doing things right” out of all the developers at the company, but honestly here at TrackAbout, every developer shares that same passion.

So am I saying development processes are unnecessary?

By all means “NO!”  But what I am saying is that sometimes that level of rigidity is not needed.

When I have been in charge of setting up development processes in the past, I have often been asked the question of “why 90% code coverage for unit tests?”

Having 90% code coverage on unit tests is not magic.  It won’t suddenly make your code good, but it will usually work to ensure a few things:

  • You have a measurable, enforceable way of making sure unit tests are actually getting written.
  • If developers are going through the trouble to get to 90% code coverage, they are more likely writing good unit tests to actually exercise the code.  (Not always true, but definitely more likely.)
  • It prevents the need for a judgment call by people who shouldn’t be making judgment calls.  You can’t trust every developer to make good judgments, so by imposing an arbitrary rule, you draw a line in the sand that is mostly in the right place.

There are other benefits and reasons why imposing development process rules can be beneficial.  Honestly, for most teams I would highly recommend them.

What I am doing in this post is questioning my own thoughts about applying them to a team of over-achievers.

If not a rigid set of rules, then how do you improve process?

That is the question I am struggling with now.  To be honest, I don’t have a great answer yet.

It seems like the principles in my Kanbanand Guide still mostly apply, even in this situation, but…

I am still pondering this question.

I feel like there always needs to be some kind of process improvement going on.  Continual improvement is very important.  No team is exempt from the benefits of process improvement.

The other problem with improving process on high-performing teams is that no one is sitting around talking about improving process.  Instead, everyone is getting things done.  Normally process improvement talks come about out of necessity, or because it is apparent there is a problem that needs to be fixed.

I am sure I will write on this topic more as I find out answers myself to these questions.

The Best Way to Unit Test in Android: Part 2

In my last post I presented two choices for unit testing in Android.

  • Unit test on a real device or emulator using the real Android framework
  • Unit test on our PC using the JVM

Each choice has some pros and cons, but for me it came down to the speed and flexibility allowed by running in a real JVM.  I actually tried to create an Android unit testing project using the scaffolding provided by Google but it turned out to be very restrictive, especially since I couldn’t use JMock.

There is also something to be said for being able to execute the unit tests extremely quickly.  If you have to wait for an emulator to come up, or try to run them on a real device, it is less likely the unit tests will be run.

How to do it

The basic idea of this approach is fairly simple.

We are going to try and abstract away any Android specific calls from our application and then unit test the plain Java logic in the application just like we would any other Java project.

Step 1:  Create a JUnit project

Create a new JUnit project just like you would for unit testing any Java application.


Make sure you choose Java Project rather than Android Test Project.

From here you can add your JUnit references to your build path.  I would also recommend adding Instinct for doing BDD style testing.  Instinct uses JMock internally to create mocks for you using a declarative approach.

The only other thing you will want to do with this project is to add the android.jar to the build path.  The android.jar can be found in your Android SDK directory under the platform directory that corresponds to the API version you are targeting with your application.

For example, on my machine, I am targeting Android 1.6 (API level 4), so my android.jar is at:  C:\Android\android-sdk-windows\platforms\android-4.

Now remember, you are including the android.jar in the project only so your classes can understand what types exist in the Android framework.  If you even try to call a constructor on a class in android.jar, you will get an exception.

Step 2: Pull all the code you can out of the Activity.

The next thing we need to do is clean up our Activity class to make sure it has as little logic in it as possible.

Since we can’t actually instantiate any types that exist in the Android framework, we won’t be able to create an instance of our Activity in the test project, which means we won’t be able to unit test the Activity itself.  This isn’t a problem if most of the logic we care about testing exists in other layers of our system.

You can design the lower layers however you want.  I am still not quite sure the best approach here, but what I did was put a presentation layer directly below the Activity.  I have a Presenter class directly below the Activity that tells my activity what to do.  The Activity passes on any interesting events or information to the Presenter.

You should leave the Activity with the responsibility of setting the text on its own view objects, but create methods for your lower layers to tell the Activity what text to set the objects to and so forth.  Activity is going to act as an adapter to a large portion of the Android framework for your application.

Because the entry point into your application is going to be the Activity, you will need to wire up the rest of your layers from the Activity.  This is a little strange, but it works.  In my application, I create a new Presenter in the Activity and pass this into its constructor.

When you are done, your Activity should look pretty thin.  It should delegate all its event handler logic down to the lower layers and act as a pass through for manipulating the view.
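A minimal sketch of that shape (the interface, class, and method names here are illustrative, not from the Android SDK):

```java
// The Activity implements this small interface; the Presenter only
// ever talks to the interface, so a test can substitute a plain
// Java fake and never touch the Android framework at all.
interface ITimerView {
    void showElapsedTime(String text);
}

class TimerPresenter {
    private final ITimerView view;
    private int seconds;

    TimerPresenter(ITimerView view) { this.view = view; }

    // The Activity forwards its tick event down to this method,
    // and the Presenter tells the view what text to display.
    void onTick() {
        seconds++;
        view.showElapsedTime(seconds + "s");
    }
}

// In the real app the Activity wires this up in onCreate():
//   presenter = new TimerPresenter(this);
// and implements showElapsedTime() by calling textView.setText(text).
```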

Step 3: Wrap remaining Android object calls.

You’re going to find that some of the remaining Android API will need to be called from lower down than the Activity.  This is not as big of a problem as it seems.

In my application I needed to make some calls to android.os.SystemClock.  I needed to make those calls lower down than the activity because it is part of the heart of the logic of my application.

I created a simple SystemClockWrapper class that wraps SystemClock and delegates any calls to the real SystemClock class.  Then I extracted an interface from that class called ISystemClock (hey, I like the C# convention for interfaces).

Finally, in my application logic, I allowed the ISystemClock reference to be passed in or “injected” through the constructor of the class that used it.

public class SystemClockWrapper implements ISystemClock {
     public long getElapsedRealTime() {
          return SystemClock.elapsedRealtime();
     }
}

public interface ISystemClock {
     public abstract long getElapsedRealTime();
}


In my unit test I just pass in a mock version of the ISystemClock interface.
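For illustration, here is a hand-rolled stand-in for what the mock does (the LapTimer class and the fake are hypothetical; in my real tests JMock generates the mock, and the interface is repeated here so the sketch stands alone):

```java
interface ISystemClock {
    long getElapsedRealTime();
}

// A hand-rolled fake clock that the test controls completely.
class FakeSystemClock implements ISystemClock {
    private long now = 0;
    public long getElapsedRealTime() { return now; }
    public void advance(long millis) { now += millis; }
}

// A hypothetical piece of application logic that depends on the
// clock through the interface instead of calling SystemClock directly.
class LapTimer {
    private final ISystemClock clock;
    private long startTime;

    LapTimer(ISystemClock clock) { this.clock = clock; }

    void start() { startTime = clock.getElapsedRealTime(); }
    long elapsedMillis() { return clock.getElapsedRealTime() - startTime; }
}
```

Because the clock is injected, the test can advance time instantly instead of sleeping, and none of it needs a device or emulator.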

One hint here is to use the refactor tool in your IDE to extract an interface.  This can save you the time of manually creating interfaces for large pieces of functionality you are wrapping.

Putting it all together

Now you should be able to create your unit tests against most of your application logic and run them just like any other unit test project.

The only thing you won’t be able to do is unit test the logic in your Activity classes.  This shouldn’t be much of a problem if you have made them into views, because they should only contain view logic.

If you really want to be able to write some tests for the Activity though, you can create another actual Android Test Project and use Google’s framework to write unit tests from there.

The Best Way to Unit Test in Android: Part 1

I’ve been doing some development in Android lately on a top secret project, one that hopefully will change the way you run with your phone.

In the course of building this app, in a previous post I mentioned that I wanted to find the right, or perfect way, to build an Android application.

I haven’t found the best way to build an Android application into a nice clean design pattern, but have found a way that seems to work, and makes the application testable and easy to maintain.

I do believe though, that I have found the optimal way to unit test in Android right now.  Yes, a bold statement, but I have looked high and low for a better solution, and can’t find one.


The problems

So first a little bit of background on the problem with unit testing in Android.  Take what I say with a grain of salt here, because I am not an Android platform expert, and feel free to correct me if I misstate something.

The android.jar is a stub

If you download the Android SDK from Google, you will find that the android.jar you get with the SDK is much like a C++ header file; it doesn’t actually contain any working code.  As a matter of fact, all the methods are stubbed out to throw an exception with the message “Stub!” when you call them.  How cute.

The real android.jar implementations live on the emulator, or your real Android device.  So, if you want to actually run any code that is going to call any methods or try to construct real Android framework classes, you must run that code inside the emulator or a real device.

Dalvik VM

When you’re working with Android, it sure feels like you are writing regular old standard Java SE code, but you are not. You probably won’t notice the difference, as almost everything that you need is there in the Dalvik VM implementation of Java.

Dalvik is not even a Java Virtual Machine at all.  That is right, it runs cross-compiled .class files which are converted to a .dex format.  Yes, it does not use Java bytecode.

Why do I mention all this?  Because you might want to use something like JMock to mock your dependencies when writing your unit tests.

Well, you can’t.  It just isn’t going to work, because the Dalvik VM doesn’t use Java bytecode, so the reflective coolness that mocking frameworks like JMock rely on doesn’t work the same.

Be aware that any external library you try to use in your Android project may not work because Android does not support the full Java SE implementation.  It actually is a subset of the Apache Harmony Java implementation.

There is no main()

Where does an Android application start running?  From an Activity, which is basically the view.  (Some people will argue this is the controller or presenter.  And yes, I agree in respect to the Android framework it is, but in regard to your application framework it is the view.)

Android applications define a starting activity and launch from there.  Activities can even be launched from other applications.

This tends to disrupt most of our MVC, MVP, and MVVM pattern architectures, as the view is going to be the entry point and will have to be responsible for initializing the rest of the application.  (That is not entirely true, as there is an android.app.Application class that gets called the first time your app is run, to launch the main activity, but for the most part you can’t do much here.)

Essentially though, you have to build your architecture based on each Activity being its own separate little application, with the entry point being in the activity.  This puts some serious constraints on unit testing.

Putting it all together

So if I can sum up the problems briefly, I would say:

  • Android framework code has to run in the emulator or on a device.
  • Dalvik VM doesn’t allow us to use our standard mocking frameworks.
  • Entry points for our applications are in the views.

The first problem, combined with the second, leads us to an interesting choice.  Should we run our unit tests on an actual emulator or device using the Dalvik VM, or should we run them in a JVM?

It is probably not an obvious question, but let me explain why it is the most relevant.

In writing an application, we are going to have application logic that has nothing specifically to do with the Android platform, and we are going to have Android platform specific logic (drawing views, handling Android OS events, interacting with Android APIs etc.)

If we want to write true unit tests, we need to isolate our classes and test them individually.  We should be able to do this for our application logic, without relying on the Android framework.  If we don’t rely on the Android framework, we don’t need to run on a real or emulated device, thus we are not constrained to the Dalvik VM.

If we choose to run our unit test code on a real or emulated device:

  • We will be able to use the Android framework APIs in our testing efforts.  For example, we can create new location objects instead of mocking them up.
  • We will be completely true to the real execution of our code since we will be using the real VM the code will run on.
  • Since we are running our tests on a device or emulator, they will run much slower.
  • We won’t be able to use JMock, EasyMock, or Mockito, we’ll either have to roll our own mocks or use a fledgling Android mocking framework.

If we choose to run our unit test code in a JVM on our PC:

  • We will have the full power of the JVM available to our test code, so we can use mocking frameworks like JMock, and BDD frameworks like Instinct.
  • We will run our unit tests much faster, since they will be using our PC instead of a device.
  • We can use standard unit testing practices and not have to inherit from special Android classes, or use special Android test runners.
  • We will have to wrap any calls to the actual Android framework if we need to use any Android classes or services deeper down in our application.
  • We have a small risk of having different behavior between running the tests and the real application because we will be running the code on different VMs.

In my next post, I’ll detail which option I chose and why and also give some detailed steps of how to get setup and running.

What to Automate – Developer Tools

Okay, so you’re convinced you need a developer tools team, or at least that development tools are important to build.

Now, what kind of tools do you build?

There is no answer that works universally; if there were, you wouldn’t really need to build anything custom.  The key is finding what things your team is doing frequently and automating those things, or finding the things the team wants to do, but is unable to do.

One thing we can do is look at some common areas where custom tools are likely to be needed.


Logs

In many places I have worked and many software development shops I have visited, there were always plenty of logs.  The problem with logs is they tend to be large, hidden on a server somewhere, and hard to read.

One area of opportunity is a tool to parse and extract important information out of logs and put it in a format that is more digestible to someone troubleshooting a problem or doing analytics.  One particular area of interest here is log messages dealing with uncaught exceptions.  It is always good to automate a way of reporting on those types of problems.
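The core of such a tool can be tiny.  Here is a minimal sketch of the idea (the log keywords and format are assumed, not from any particular server):

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch: pull the lines that mention an error or exception
// out of a log, so a person troubleshooting sees 5 lines instead of
// 50,000. A real tool would also capture the stack trace lines that
// follow each hit and group duplicates for reporting.
class LogScanner {
    static List<String> findExceptions(List<String> logLines) {
        List<String> hits = new ArrayList<String>();
        for (String line : logLines) {
            if (line.contains("Exception") || line.contains("ERROR")) {
                hits.add(line);
            }
        }
        return hits;
    }
}
```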

Applications also often lack the log information for certain important parts of the application or processing that you only find you need once you are running in production.  Tools to instrument code and insert logging into your application while it is deployed are very useful and can be a life saver in understanding that error that only happens in production at 3:00 AM.

Most web servers have logs of HTTP requests that come in from users.  This information can be very useful in determining how users are actually using your application.  Combining this information with session ids and timestamp data from your database can provide you with exact click flows for users and answer many questions like “how the heck did this data get created?” or “how are they creating this problem?”  Tools to aggregate this data and create representations of the user’s actions can be invaluable.

Developer Build and Deploy

Developers build and deploy the application many times a day on average.  This is one of the most commonly repeated activities.  Yet, I find that in many organizations this is a complex process or takes a very long time.

Consider writing tools to dynamically load parts of your application so that only that part needs to be built.  Good candidates here are sections of code that change frequently and are isolated.

Deploying to the web server should be a one button or one command operation.  This is often a very “low hanging fruit” that can provide large benefits by saving time.

There are often many areas of the build process itself that can benefit from some tool that will speed it up.  Examine your own build and see what is taking the longest amount of time.  Is there some way to speed that up by developing a tool, or using your build tool in a better way?

Look carefully at the process developers are using to pull down new code, build code, write code, test code, and check it back in.  Look at each step in the process.  Try to find areas that are being repeated, which do not need to be repeated over and over.  One really good thing to look for is a process that has to be started, followed by a wait, then another step that has to be done after the wait.  When you see this pattern, making the step after the wait happen automatically usually has a great return on investment.


Testing

There are huge gains to be made in this area with the proper tools.  I have talked several times about the automation frameworks I have built, but there is much more to this area than just that.

Consider the creation of test cases.  Where are they stored?  How are they written?  Closely examining this process normally yields interesting results in finding duplication and wasted effort.  Copying and pasting rows from spreadsheets takes time and is inefficient.  If you are going to write and maintain manual tests, then having some tools to help you do that can make a big impact.

Testing itself can usually be automated.  Take a good look at what you are testing.  There are some things that cannot be automated, but usually if you can give something a pass or fail result, it can be automated.  My best advice here is to use an automation framework, and build a custom application specific framework on top of it.

Tools to manage where code is deployed and test environments can be very useful depending on the process your organization is using.  Take a look at the process involved in determining where features exist in what environments to look for areas of optimization.  Take a look at the process of getting code to different test environments.


Code development

Many people don’t think about automating code development, but there are often many areas where tools are useful.  Consider some of the commercial tools that automatically refactor code or generate UIs.

The first place I would start here is looking for any code that developers have to write that is boilerplate code.  Good examples are code that serializes or deserializes specific objects to or from a location, or any kind of code that mimics some external structure.  Often tools can be written to generate code once the pattern is well understood.  Code generation tools are extremely useful because they reduce the total amount of code that has to be maintained and they save the time of creating that code.
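As a toy sketch of the idea (the record layout, class, and method names are made up), a generator just stamps out the repetitive parsing code from a description of the structure:

```java
// Toy sketch: given a description of an external comma-delimited
// record layout (assumed here), emit the repetitive parser class
// instead of hand-writing one for every record type.
class ParserGenerator {
    static String generate(String recordName, String[] fields) {
        StringBuilder src = new StringBuilder();
        src.append("class ").append(recordName).append("Parser {\n");
        src.append("    static String[] parse(String line) {\n");
        src.append("        return line.split(\",\");  // fields: ");
        src.append(String.join(", ", fields)).append("\n");
        src.append("    }\n");
        src.append("}\n");
        return src.toString();
    }
}
```

The generated source would be written to a file and compiled as part of the build, so nobody maintains it by hand.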

Unit testing tools or scaffolding can also be useful for setting up basic scenarios or data structures that often have to be set up in many unit or integration level tests.  Carefully look at the creation of new unit tests for your code base, and determine where there is common code that is being written to set up data, or mock something out.


Data

There often arises the need to do many things with application data, whether that be transporting it to another environment, obfuscating it to protect sensitive information, or some other purpose.

Look carefully at the things you do with data, or what you would like to be able to do with data from your application.  If you have a DBA who is doing lots of “stuff,” you can probably build tools to help automate that “stuff” and bring it down to the teams.  Relying on a DBA to do “stuff” by hand puts your software development team over a barrel.

Often teams need to transport portions of data from environments to test an issue that is happening in production.  There are opportunities to create tools that can move the data and change personal information to protect that data.

The process of altering the database structure is also a good one to consider.  How do developers make changes to the database structure?  Is it a long and lengthy process requiring many steps?

Parting words

These are just a few of the areas you should look at when thinking about what kind of tools can be created to help automate your processes.  The key here is to look for the processes that are being done and to try and find ways to eliminate or automate steps.  It is not always cost effective to build a tool.  It depends on the size of the organization, the effort in building the tool, and frequency of use.  But I have found that there are many opportunities to improve efficiency by automating that are not apparent until you look.

The Ego Test – Test Cases That Just Won’t Fail

You know those tests you like to run?  You know what I mean, those test cases that always give you a green bar and never fail?

If you are having a bad day, and you feel like nothing is going right, you can just “right-click” run and you get a nice 100% passing green bar.  It is easy to refactor the code those tests are testing, because you know in your heart that no matter how much you change the code, those tests will always pass.  What a great feeling.  Doesn’t that make you feel like the king of the world?

I like to call those tests Ego Tests, because they are the equivalent of code flattery.  An Ego Test always validates you.  It tells you your code is pretty when it really has a fat butt.  The Ego Test says “you can do nothing wrong.”  The Ego Test says that “everything is going to be ok.”  That is until you roll the code out to production and everything breaks.  Then when you look for the Ego Test, you realize his message of “GREEN IS GOOD” is still blaring at full volume in an endless repeat cycle, like a mantra, or one of those self-hypnosis, “you can quit smoking” tapes.  He is not really your friend; he is just a recording stuck in an endless loop.  He won’t help you fix your bugs, he won’t even tell you they exist.

He must be stopped!


Identifying the Ego Test

The Ego Test can be any kind of test from unit test to automated functional test and more.

The Ego Test is identifiable by one or more of the following characteristics:

  1. Does not contain asserts.
  2. Contains asserts which only assert on conditions which are always true such as:  assertTrue(isJohnSonmezCool?true:true);
  3. Asserts on things which are not related to what you are testing.
  4. Catches exceptions and eats them so the test will not error out.
  5. Tests things through a mock, but then does not assert the mocking expectations are satisfied.
  6. Never fails, no matter what you do to the code.
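Here is a made-up specimen exhibiting characteristics 1 and 4 at once (the PriceParser class and method names are invented for illustration):

```java
// A made-up class under test.
class PriceParser {
    static double parse(String text) {
        return Double.parseDouble(text);  // blows up on bad input
    }
}

class EgoTestExample {
    // The Ego Test: it swallows the exception and asserts nothing,
    // so it can never go red -- no matter how broken PriceParser is.
    static boolean testParsePrice() {
        try {
            PriceParser.parse("not a number");
        } catch (Exception e) {
            // eaten -- the green bar lies
        }
        return true;
    }

    // The honest version actually fails when parsing fails.
    static boolean honestTestParsePrice() {
        return PriceParser.parse("19.99") == 19.99;
    }
}
```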

You would think from this list it would be pretty easy to spot and catch an Ego Test.  It is not always so easy.  Many Ego Tests appear to be testing the right thing, or seem like they could fail even though they can’t.

One quick way to figure out if you have an Ego Test is to change the input to the test and see if it fails.  If you think you are testing for the word “Blue” in a drop down, change the HTML being tested so it does not contain the word “Blue” in the drop down; if the test still passes, you have a problem.

Avoiding the Ego Test

The Ego Test is fairly easy to avoid.  The best way is to make sure your test will fail when you change the input.  Most of the time you can do this by changing the source code temporarily to produce a result which should be a failure.  Sometimes though, you don’t have the ability to change the source code or doing so would be more work than it is worth.  In those cases, you can change the test instead, to test for a condition which should not be true.  Be very careful here though, many people are tripped up trying to validate tests the second way because they simply invert an assert statement.

For example, if you have an assert that is

assertTrue(dropDownContains("Blue"));
and you try to verify the test does not always pass by inverting the assert to produce

assertFalse(dropDownContains("Blue"));
Well, that is just plain dumb. But I have seen it before. Of course it will now fail.
What is valid is to change the assert to be something like:

assertTrue(dropDownContains("Green"));
When you know that “Green” should not be in the drop down. If the test still passes in that case, there is something wrong with the dropDownContains method, or some other assumption you are making about the contents of the data you are testing. Perhaps you are testing the wrong drop down control? Perhaps something else is wrong.

One more tip on avoiding the Ego Test: know what you are testing and test only one thing.  If you are writing some code to increase your code coverage and the best name you can come up with for the test is the name of the method itself followed by the word “test”, you have no idea what you are actually testing.  100% code coverage is pretty easy to get in unit tests when you aren’t actually testing anything.

Mutation Testing

By the way, the official name of this kind of testing is called Mutation Testing.  I am not sure if it is completely necessary to do full mutation testing on your tests if you actually follow the advice I give to avoid the Ego Test in the first place, but there are actually some pretty cool mutation testing tools out there.  One I know of for Java is called Jester.  It will randomly change your source code and run all your unit tests.  Jester then reports which tests still passed after changing your source.  Looks like there is one for C# called Nester.  If you know of a good updated one in either language, be sure to let me know.

Disclaimer: This post doesn’t apply to Chuck Norris.  Chuck Norris’s tests look like ego tests in that they never fail, but they don’t fail because the code actually changes to conform to the perfect will of Chuck Norris as indicated in his tests, not because his tests are not testing the right thing.