# There Are Only Two Roles of Code

All code can be classified into two distinct roles: code that does work (algorithms) and code that coordinates work (coordinators).

The real complexity that gets introduced into a codebase usually stems directly from creating classes that group both of these roles under one roof.

I’m guilty of it myself.  I would say that 90% of the code I have written does not divide nicely into algorithms and coordinators.

## Defining things a bit more clearly

Before I dive into why we should be dividing our code into clear algorithmic or coordinating classes, I want to take a moment to better define what I mean by algorithms and coordinators.

Most of us are familiar with common algorithms in Computer Science like a Bubble Sort or a Binary Search, but what we don’t often realize is that all of our code that does something useful contains within it an algorithm.

What I mean by this is that there is a clear, distinct set of instructions or steps by which some problem is solved or some work is done.  That set of steps requires no external dependencies; it works solely on data, just as a Bubble Sort does not care what it is sorting.

Take a moment to wrap your head around this.  I had to double check myself a couple of times to make sure this conclusion was right, because it is so profound.

It is profound because it means that all the code we write is essentially as testable, as provable, and potentially as dependency-free as a common sorting algorithm, if only we can find the way to express it so.

What is left over in our program (if we extract out the algorithms) is just glue.

Think of it like a computer.  Computer electronics have two roles: doing work and binding together the stuff that does the work.  If you take out the CPU, the memory and all the other components that actually do some sort of work, you’ll be left with coordinators: wires and buses that bind together the components in the system.

## Why dividing code into algorithms and coordinators is important

So now that we understand that code could potentially be divided into two broad categories, the next question of course is why?  And can we even do it?

The biggest benefit to pulling algorithmic code into separate classes from any coordinating code is that it allows the algorithmic code to be free of dependencies.  (Practically all dependencies.)

Once you free this algorithmic code of dependencies you’ll find 3 things immediately happen to that code:

1. It becomes easier to unit test
2. It becomes more reusable
3. Its complexity is reduced

A long time ago, before mocks were widely used and when IoC containers were rare, TDD was hard.  It was really hard!

I remember when I was first standing on the street corners proclaiming that all code should be written with TDD and 100% code coverage.  People thought I was pretty crazy at the time, because there really weren’t any mocking frameworks and no IoC containers, so if you wanted to write all your code using TDD approaches, you’d actually have to separate out your algorithms.  You’d have to write classes that had minimal dependencies if you wanted to be able to truly unit test them.

Then things got easier by getting harder.  Many developers started to realize that the reason why TDD was so hard was because in the real world we usually write code that has many dependencies.  The problem with dependencies is that we need a way to create fake versions of them.  The idea of mocking dependencies became so popular that entire architectures were based on the idea and IoC containers were brought forth.

We, as a development community, essentially swept the crumbs of difficult unit testing under the rug.  TDD and unit testing in general became synonymous with writing good code, but one of the most important values of TDD was left behind: the forced separation of algorithmic code from coordinating code.

TDD got easier, but only because we found a way to solve the problems of dependencies interfering with our class isolation by making it less painful to mock out and fake the dependencies rather than getting rid of them.

## There is a better way!

We can still fix this problem, but we have to make a concerted effort to do so.  The current path of least resistance is to just use an IoC container and write unit tests full of mocks that break every time you do anything but the most trivial refactoring on a piece of code.

Let me show you a pretty simple example, but one that I think clearly illustrates how code can be refactored to remove dependencies and clearly separate out logic.

Take a look at this simplified calculator class:

```
public class Calculator
{
    private readonly IStorageService storageService;
    private List<int> history = new List<int>();
    private int sessionNumber = 1;
    private bool newSession;

    public Calculator(IStorageService storageService)
    {
        this.storageService = storageService;
    }

    public int Add(int firstNumber, int secondNumber)
    {
        if (newSession)
        {
            sessionNumber++;
            newSession = false;
        }

        var result = firstNumber + secondNumber;
        history.Add(result);  // record the result so Done() can persist it

        return result;
    }

    public List<int> GetHistory()
    {
        if (storageService.IsServiceOnline())
            return storageService.GetHistorySession(sessionNumber);

        return new List<int>();
    }

    public int Done()
    {
        if (storageService.IsServiceOnline())
        {
            foreach (var result in history)
                storageService.Store(result, sessionNumber);
        }
        newSession = true;
        return sessionNumber;
    }
}
```

This class does simple add calculations and stores the results in a storage service while keeping track of the adding session.

It’s not extremely complicated code, but it is more than just an algorithm.  The Calculator class here requires a dependency on a storage service.

But this code can be rewritten to extract out the logic into another calculator class that has no dependencies and a coordinator class that really has no logic.

```
public class Calculator_Mockless
{
    private readonly StorageService storageService;
    private readonly BasicCalculator basicCalculator;

    public Calculator_Mockless()
    {
        this.storageService = new StorageService();
        this.basicCalculator = new BasicCalculator();
    }

    public int Add(int firstNumber, int secondNumber)
    {
        return basicCalculator.Add(firstNumber, secondNumber);
    }

    public List<int> GetHistory()
    {
        return storageService
            .GetHistorySession(basicCalculator.SessionNumber);
    }

    public void Done()
    {
        foreach (var result in basicCalculator.History)
            storageService
                .Store(result, basicCalculator.SessionNumber);

        basicCalculator.Done();
    }
}

public class BasicCalculator
{
    private bool newSession;

    public int SessionNumber { get; private set; }

    public IList<int> History { get; private set; }

    public BasicCalculator()
    {
        History = new List<int>();
        SessionNumber = 1;
    }

    public int Add(int firstNumber, int secondNumber)
    {
        if (newSession)
        {
            SessionNumber++;
            newSession = false;
        }

        var result = firstNumber + secondNumber;
        History.Add(result);

        return result;
    }

    public void Done()
    {
        newSession = true;
        History.Clear();
    }
}
```

Now you can see that the BasicCalculator class has no external dependencies and thus can be easily unit tested.  It is also much easier to tell what it is doing because it contains all of the real logic, while the Calculator class has now become just a coordinator, coordinating calls between the two classes.
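To make that testability claim concrete, here is a hypothetical Java translation of the BasicCalculator class above (the names mirror the C# sketch; this is an illustration, not the original code).  Because it has no external dependencies, it can be exercised with plain assertions:

```java
// A dependency-free algorithm class: pure logic operating only on its
// own data, so it needs no mocks, fakes, or containers to test.
import java.util.ArrayList;
import java.util.List;

class BasicCalculator {
    private boolean newSession;
    private int sessionNumber = 1;
    private final List<Integer> history = new ArrayList<>();

    public int getSessionNumber() { return sessionNumber; }

    public List<Integer> getHistory() { return history; }

    public int add(int firstNumber, int secondNumber) {
        if (newSession) {
            sessionNumber++;
            newSession = false;
        }
        int result = firstNumber + secondNumber;
        history.add(result);
        return result;
    }

    public void done() {
        // End the session; the next add() starts a new one.
        newSession = true;
        history.clear();
    }
}
```

A unit test for a class like this is nothing more than a few assertions on inputs and outputs; there is no mock, fake, or container in sight.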

This is of course a very basic example, but it was not contrived.  What I mean by this is that even though this example is very simple, I didn’t purposely create this code so that I could easily extract out the logic into an algorithm class.

I’ve found that if you focus on eliminating mocks or even just having the mindset that you will not use mocks in your code, you can produce code from the get go that clearly separates algorithm from coordination.

I’m still working on mastering this skill myself, because it is quite difficult to do, but I believe the rewards are very high for those that can do it.  In code where I have been able to separate out algorithm from coordination, I have seen much better designs that were more maintainable and easier to understand.

I’ll be talking about and showing some more ways to do this in my talk at the Warm Crocodile conference next year.

# What Slows Down the Development of Software?

Think about this question for a bit.  Why is it that as most software evolves it gets harder and harder to add features and improve its structure?

Why is it that tasks that would have at one point been simple are now difficult and complex?

Why is it that teams that should be doing better over time seem to get worse?

Don’t feel bad if you don’t have an immediate answer to those questions.  Most software practitioners don’t.  They are hard questions after all.

If we knew all the answers, we wouldn’t really have these problems to begin with.

Regardless though, you’ll find many managers, business owners, customers and even software developers themselves looking for the answers to these questions, but often looking in the wrong place.

Process is almost always the first to be blamed. It stands to reason that a degradation of process or problems with the software development process are slowing things down.

Often there is some merit to this proposition, but I’ve found that process is usually not the root cause.  If your team is not sitting idle and the work that is important is being prioritized, chances are your process is not slowing you down.

Now don’t get me wrong here.  I am not saying that these are the only two important aspects by which to judge a software development process, but I am saying that if your team is generally working hard on important stuff most of the time, you can’t magically improve the process enough to increase productivity by an order of magnitude.  (In most cases.)

• Should we pair program or not pair program?
• Should we be using Scrum instead of Kanban?
• Should we be changing the way we define a backlog?
• Should we use t-shirt sizes or story points or make all backlogs the same size?
• Do we need more developers or more business analysts?
• Do we need to organize the team differently?

Now these are all great questions that every software team should constantly evaluate and ask itself, but I’ve found over and over again that there is often a bigger problem staring us in the face that gets ignored.

## The code!

Let’s do a little experiment.

Forget about process.  Forget about Scrum and backlogs and story points and everything else for a moment.

You are a developer.  You have a task to implement some feature in the code base.  No one else is around, there is no process, you just need to get this work done.

It might help to think about a feature you recently implemented or one that you are working on now.  The important thing with this experiment is that I want to take away all the other “stuff” that isn’t related directly to designing and implementing that feature in the code base.

You will likely come to one of these conclusions:

1. The feature is easy to implement, you can do it quickly and know where to go and what to modify.

Good!  That means you don’t really have a problem.

2. It is unclear what to do.  You aren’t sure exactly what you are supposed to implement and how it fits into the way the system will be used.

In this case, you may actually have somewhat of a process problem.  Your work needs to be more clearly defined before you begin on it.  It may be that you just need to ask more questions.  It may be that half-baked ideas are ending up in your pipeline and someone needs to do a bit more thinking and legwork before asking a developer to work on them.

3. It’s hard to change the code.  You’ve got to really dig into multiple areas and ask many questions about how things work or are intended to work before you can make any changes.

This is the most likely case.  Actually, it is usually a combination of 2 and 3.  And they both share a common problem: the code and the system either never had a design or have departed from that design.

I find time and time again with most software systems experiencing a slow down in feature development turnaround that the problem is the code itself and the system has lost touch with its original design.

You only find this problem in successful companies though, because…

## Sometimes you need to run with your shoelaces untied

I’ve consulted for several startups that eventually failed.  Those startups had one thing in common: a well-maintained and cared-for codebase.

I’ve seen the best designs and best code in failed startups.

This seems a bit contradictory, I know, but let me explain.

The problem is that often these startups with pristine and well-maintained code don’t make it to market fast enough.  They are basically making sure their shoelaces are nicely tied as they stroll down the block, carefully judging each step before it is taken.

What happens is they have the best-designed and most maintainable product, but either it doesn’t get out there fast enough, and the competition comes in with some VB6 app that two caffeine-fueled not-really-programmers-but-I-learned-a-bit-of-code developers wrote overnight, or they don’t actually build what the customer wants, because they don’t iterate quickly enough.

Now am I saying that you need to write crap code with no design and ship it or you will fail?

Am I saying that you can’t start a company with good software development practices and a clean well maintainable codebase and succeed?

No, but what I am saying is that the majority of companies that succeed are the ones that put the focus on customers and getting the product out there first, and on the software second.

In other words if you look at 10 successful companies over 5 years old and look at their codebase, 9 of them might have some pretty crappy or non-existent architecture and a system that departed pretty far from the original design.

Ok, so what am I driving at with all this?

Time for an analogy.

So these companies that are winning and surviving past year 5, they are usually running.  They are running fast, but in the process of running their shoelaces come untied.

They might not even notice the shoelaces are untied until the first few times they step on one and trip.  Regardless, they keep running.  And to some degree, this is good; it is what makes them succeed, while those of their competitors who do take the time to tie their shoelaces end up falling far behind in the race.

The problem comes shortly after that 5 year mark, when they want to take things to the next level.  All this time they have been running with those shoelaces untied, and they have learned to do this kind of wobble run where they occasionally trip on a shoelace, but they try to keep their legs far enough apart to not actually step on one.

It slows them down a bit, but they are still running.  Still adding those features fast and furious.

After some time though, their pants start to fall down.  They don’t really have time to stop running and pull up those pants, so as they are running those pants slip further down.

Now they are really running funny.  At this point they are putting forth the effort of running, but the shoelaces and pants are just too much, they are moving quite slow.  An old woman with ankle weights power walks past them, but they can’t stop now to tie the shoelaces and pull up those pants, because they have to make up for the time they lost earlier when the pants first fell down.

At this point they start looking for ways to fix the problem without slowing down and pulling up the pants.  At this point they try running different ways.  They try skipping.  Someone gets the idea that they need more legs.

I think you get the idea.

What they really need to do at this point though is…

Hopefully you’ve figured out that this analogy is what happens to a mature system’s code base and overall architecture.

Over time when you are running so fast, your system ends up getting its shoelaces undone, which slows you down a little.  Soon, your system’s pants start to fall down and then you really start to slow down.

It gets worse and worse until you are moving so slow you are actually moving backwards.

Unfortunately, I don’t have a magic answer.  If you’ve gotten the artificial speed boost you can gain from neglecting overall system design and architecture, you have to pay the piper and redesign that system and refactor it back into an architecture.

This might be a complete rewrite, it might be a concerted effort to get things back on track.  But, regardless it is going to require you to stop running.  (Have you ever tried to tie your shoelaces while running?)

Don’t feel bad, you didn’t do anything wrong.  You survived where others who were too careful failed.  Just don’t ignore the fact that your pants are at your ankles and you are tripping over every step, do something about it!

# The Best Way to Unit Test in Android: Part 1

I’ve been doing some development in Android lately on a top secret project, one that hopefully will change the way you run with your phone.

In the course of building this app, I mentioned in a previous post that I wanted to find the right, or perfect, way to build an Android application.

I haven’t found a way to distill Android application development into one nice, clean design pattern, but I have found an approach that seems to work and makes the application testable and easy to maintain.

I do believe though, that I have found the optimal way to unit test in Android right now.  Yes, a bold statement, but I have looked high and low for a better solution, and can’t find one.

## The problems

So first a little bit of background on the problem with unit testing in Android.  Take what I say with a grain of salt here, because I am not an Android platform expert, and feel free to correct me if I misstate something.

#### Android.jar

If you download the Android SDK from Google, you will find that the android.jar you get with the SDK is much like a C++ header file; it doesn’t actually contain any working code.  As a matter of fact, all the methods are stubbed out to throw an exception with the message “Stub!” when you call them.  How cute.

The real android.jar implementations live on the emulator, or your real Android device.  So, if you want to actually run any code that is going to call any methods or try to construct real Android framework classes, you must run that code inside the emulator or a real device.

#### Dalvik VM

When you’re working with Android, it sure feels like you are writing regular old standard Java SE code, but you are not. You probably won’t notice the difference, as almost everything that you need is there in the Dalvik VM implementation of Java.

Dalvik is not actually a Java Virtual Machine at all.  That’s right: it runs on cross-compiled .class files which are converted to the .dex format.  It does not use Java bytecode.

Why do I mention all this?  Because you might want to use something like JMock to mock your dependencies when writing your unit tests.

Well, you can’t.  It just isn’t going to work, because the Dalvik VM doesn’t use Java bytecode, so the reflective coolness that mocking frameworks like JMock rely on doesn’t work the same.

Be aware that any external library you try to use in your Android project may not work because Android does not support the full Java SE implementation.  It actually is a subset of the Apache Harmony Java implementation.

#### There is no main()

Where does an Android application start running?  From the activity.  Which is basically the view.  (Some people will argue this is the controller or presenter.  And yes, I agree in respect to the Android framework it is, but in regard to your application framework it is the view.)

Android applications define a starting activity and launch from there.  Activities can even be launched from other applications.

This tends to disrupt most of our MVC, MVP, and MVVM pattern architectures, as the view is going to be the entry point and will have to be responsible for initializing the rest of the application.  (That is not entirely true, as there is an android.app.Application class that gets called the first time your app is run, before the main activity launches, but for the most part you can’t do much here.)

Essentially though, you have to build your architecture based on each Activity being its own separate little application, with the entry point being in the activity.  This puts some serious constraints on unit testing.
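A minimal sketch of that structure, with invented names (GreetingPresenter and the shape of onCreate here are illustrative assumptions; a real Activity would extend android.app.Activity):

```java
// The Activity acts purely as an entry point: it constructs its
// dependencies and immediately delegates to plain classes that hold
// the real logic.  Shown as plain Java so the sketch runs outside Android.
class GreetingPresenter {
    String greetingFor(String name) {
        return "Hello, " + name + "!";
    }
}

class MainActivity {  // would extend android.app.Activity in a real app
    private final GreetingPresenter presenter = new GreetingPresenter();

    // Stands in for Activity.onCreate(): wire up, then hand off.
    String onCreate() {
        return presenter.greetingFor("Android");
    }
}
```

The point is that once the entry-point wiring is this thin, everything behind it is just ordinary classes that any test can construct directly.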

## Putting it all together

So if I can sum up the problems briefly, I would say:

• Android framework code has to run in the emulator or on a device.
• Dalvik VM doesn’t allow us to use our standard mocking frameworks.
• Entry points for our applications are in the views.

The first problem, combined with the second, leads us to an interesting choice.  Should we run our unit tests on an actual emulator or device using the Dalvik VM, or should we run them in a JVM?

The answer is probably not obvious, so let me explain why this question is so relevant.

In writing an application, we are going to have application logic that has nothing specifically to do with the Android platform, and we are going to have Android platform specific logic (drawing views, handling Android OS events, interacting with Android APIs etc.)

If we want to write true unit tests, we need to isolate our classes and test them individually.  We should be able to do this for our application logic, without relying on the Android framework.  If we don’t rely on the Android framework, we don’t need to run on a real or emulated device, thus we are not constrained to the Dalvik VM.

If we choose to run our unit test code on a real or emulated device:

• We will be able to use the Android framework APIs in our testing efforts.  For example, we can create new location objects instead of mocking them up.
• We will be completely true to the real execution of our code since we will be using the real VM the code will run on.
• Since we are running our tests on a device or emulator, they will run much slower.
• We won’t be able to use JMock, EasyMock, or Mockito, we’ll either have to roll our own mocks or use a fledgling Android mocking framework.

If we choose to run our unit test code in a JVM on our PC:

• We will have the full power of the JVM available to our test code, so we can use mocking frameworks like JMock, and BDD frameworks like Instinct.
• We will run our unit tests much faster, since they will be using our PC instead of a device.
• We can use standard unit testing practices and not have to inherit from special Android classes, or use special Android test runners.
• We will have to wrap any calls to the actual Android framework if we need to use any Android classes or services deeper down in our application.
• We have a small risk of having different behavior between running the tests and the real application because we will be running the code on different VMs.
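As a rough sketch of the wrapping approach mentioned above (the interface and class names here are assumptions for illustration, not real Android APIs), application logic depends only on a thin interface, and the JVM tests supply a hand-rolled fake:

```java
// A hand-rolled fake in place of a mocking framework: the application
// logic depends only on a thin interface, so it runs on a plain JVM.
interface LocationProvider {
    double getLatitude();
    double getLongitude();
}

// Production code would implement this on top of Android's location
// services; this fake never touches the Android framework at all.
class FakeLocationProvider implements LocationProvider {
    public double getLatitude()  { return 40.7128; }
    public double getLongitude() { return -74.0060; }
}

// Plain application logic, testable anywhere.
class HemisphereChecker {
    private final LocationProvider provider;

    HemisphereChecker(LocationProvider provider) {
        this.provider = provider;
    }

    boolean isNorthOfEquator() {
        return provider.getLatitude() > 0;
    }
}
```

The trade-off, as noted, is the wrapping work itself: every Android framework call your logic needs has to be hidden behind an interface like this one.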

In my next post, I’ll detail which option I chose and why and also give some detailed steps of how to get setup and running.

# The Hardest Thing I Struggle With

I ran up against it again as I was trying to figure out the “right” way to build an Android application.

Some of your coworkers probably don’t struggle with the issue because they really just don’t think about it that much.

But, if you are reading this blog, you probably have encountered the problem I am about to talk about.  It may be for you, like it is for me, the single greatest thing holding you back.

## The struggle with perfection

We want to build perfect software, we want to build perfect code, but it is just not possible.

Like Tyler Durden’s alter ego, we want to put everything in a nice little box.

I had mentioned earlier that not all software developers struggle with this problem.  I think it arises when you start to actively seek to improve your development skills.  It is natural to look for the “right” way to do something you want to get better at.

• There is a right way to swing a baseball bat.
• There is a right way to do mathematical calculations.
• You can play a piece of music perfectly on a piano.

But there isn’t a right way to build software.

There are lots of wrong ways, and then there are many better ways that all have trade-offs against each other.

It is very hard to come to grips with this reality.  At least it is for me.  I want to know how I am supposed to do it, and I don’t want to hear “any way you like” or “however seems right to you.”

## Software development is part craft

There is some science to it.  Don’t get me wrong.

The biggest problem in software development is not people struggling with perfection, but rather developers believing that there are no wrong ways to develop software.

You can learn what is right to some degree, up to a point.  But, after you steer away from the obviously wrong, you end up drifting into the true craft of software development.

When you only have one tool, software development is easy.  You just hack on things with your one tool.  You might not be very effective or very efficient, and you may make a pretty big mess, but you generally can get things done and you know what to do.

When you have a toolbox full of tools, the world stops being so black and white.  Software development truly becomes an art or craft at this point, as you are forced to make trade-offs and choose architectures and technologies based on experience and intuition combined.

Our minds fight against this lack of rules.  It is like playing Monopoly with generalizations or ideas of how the game should be played, but no explicit rules.  Sometimes life is just easier within preset boundaries that clearly tell us what is right and what is wrong.

It is a strange twist of fate that the act of building something that is absolutely structured and governed by rules is such a rule-less and judgment based pursuit.

## Dealing with perfection in an imperfect world

So, why is striving for perfection bad anyway?

Well, it can be a major roadblock that prevents us from getting things done.

I’ll often find myself at a solution that is 90% better than what I started with, pushing to get to 95%.  That push to 95% can take the same amount of time it took to get to 90%.

Sometimes when we are looking for the perfect architecture or trying to apply patterns because we believe they are right, we end up making things more complicated than they need to be, or we miss a better “less perfect” solution, because we have deemed it so.

One thing I have tried to do to curb my insatiable desire for perfection is to strive to always improve rather than for perfection itself.

It is important for us to recognize when we are at that 90% mark and move on.

Next time we encounter a similar situation we’ll hit the 91% mark because we will have more experience and will have built better intuition.

Here are some tips and strategies I have picked up for dealing with the problem of perfection.

• Try to get rid of the all or nothing mentality.  Don’t do things just good enough to get by, but don’t try to do them perfectly either.  Do an excellent job and know what that is.
• Start working on things in rough draft form.  Fill in details later.  This is especially helpful with web pages or anything that requires design work.
• Don’t get stuck on a problem that is mainly just polish.  If the elegant way to do something is causing you 10 hours of debugging but you can do it in a less elegant way and hide the mess in a nice package, opt for the second.  You can always come back later, when you don’t feel pressured to find a solution.
• Get a second opinion.  If you are struggling with a design issue and going back and forth in your head about the best way to do something, ask someone else and it might make one way or another perfectly clear.
• Make yourself a research note.  Move on for now, and make a note to come back later or to research a technology or design.

# Should I Leave That Helper Class?

The project I am working on is riddled with “helper” classes.  What is a helper class?

Good question.  I don’t really know.  Neither does the helper class.

When you ask the helper class, “What do you do?” he half smiles, looks down at his over-sized feet, and replies with a squirrely “stuff.”

## How to identify helper classes

There are a few common attributes we can look at that will tell us if we are dealing with a helper class, in no particular order:

• Doesn’t have a clear responsibility of any kind.
• Doesn’t hold any of its own state data.
• Has mostly or all static methods.
• Class name ends in “Helper”.  (This is a good tip-off!)
• If it does get newed up somewhere, it gets passed all around afterwards.
• Lives in a package or namespace called “utilities”.

A helper class is a class that contains auxiliary methods for other classes but isn’t really a thing in and of itself.  A helper class is the opposite of object-oriented programming.  I wrote about the dangers of static methods before, and helper classes are usually the result of the proliferation and breeding of static methods.
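As a minimal before-and-after sketch (the names are invented for illustration), the usual fix is to move the helper method onto the class that owns the data it operates on:

```java
// Before: a typical stateless static helper.
class StringHelper {
    static String initials(String firstName, String lastName) {
        return "" + firstName.charAt(0) + lastName.charAt(0);
    }
}

// After: the behavior lives on the class that owns the data.
class Person {
    private final String firstName;
    private final String lastName;

    Person(String firstName, String lastName) {
        this.firstName = firstName;
        this.lastName = lastName;
    }

    String initials() {
        return "" + firstName.charAt(0) + lastName.charAt(0);
    }
}
```

Both produce the same result, but only one of them is object-oriented: the data and the behavior that needs it live in the same place.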

We are going to skip going any further into why they are bad and go straight into the burning question…  When you see one of these in your code base…

## Should you just leave it there?


When you see a car accident on the freeway that no one has reported, should you just drive on and not dial 911?

When you see an old woman being beaten on the street, should you walk right on by?

When you open your fridge, and you open the vegetable drawer and you see rotting cucumber mush in a bag, do you just forget you ever saw it?

I’m not suggesting you should start diving into your legacy codebase and removing all the helper methods right now.  But I am saying that if you are working inside a helper class to change some functionality and you think it is ok to just add one more method, using some lame excuse like “it’s the convention,” I’d like to take a big boat paddle and teach you some single responsibility.  Don’t be part of the problem.  Be part of the solution.

Here are some lame excuses for leaving helper classes and propagating them:

• I am just making a small change to the code.
• I don’t want to break this stuff that is already working.
• I am just following the convention of the architecture.
• I don’t understand how it works.
• There is no class this functionality belongs to.
• I’m a lazy bastard and I don’t care about making the world a better place.
• The world is going to end in 2012 anyway.

If you’re using one of these lame excuses… STOP IT!  3000 line helper classes weren’t born overnight.  Some idiot first created the class, then more idiots added methods to it.  Don’t be just another idiot.  I implore you.  We have enough.

## John, I want to do the right thing… help me.

What?  You do?  I’ll assume you are being sincere… even though I have my reservations.

First take this oath.  Place your hand on The Art of Computer Programming and repeat after me.

I, <your name>, solemnly swear not to propagate the aberration of pure evil and generally sucky code known as the helper class.

I promise to uphold the values of single responsibility, data abstraction, and the open/closed principle.

I will vanquish helper classes and helper methods, and properly put their functionality in the associated classes where it belongs, under no less penalty than having my arms and legs removed with a butter knife.

Welcome initiates, in my next post I’ll tell you some techniques I use to eliminate helper classes.

# When Scrum Hurts: Mob Architecture

## If you have been following my blog, you know that I have a love/hate relationship with Scrum.

I’ve previously talked about why I think Scrum will eventually die and I am still pretty much convinced of that point.  Scrum has become something you sell through training and consulting.  If you make your living off of doing this, sorry, but you may be part of the problem.

What this post is really about though is the problem of good architecture when implementing Scrum.  In my experience, it is very difficult to create or maintain a good architecture and do Scrum.  There is one very simple reason for this: mobs don’t build good architectures.

## Why?

Let me give you an example that helps to illustrate my point.  Let us take a second to think about real physical engineering and architecture.  Let us say we are going to put together a team to design and build a custom home.

So we get together a plumber, an electrician, a couple of framers, and an architect.  Now, let’s have them start building the house.  What do you think the architecture of the building will be like?  What if the plumber and electrician, who know a fair amount about architecture because they studied it in high school, can outvote the actual architect?  In general the team is going to benefit from the real architect’s experience and guidance, but whenever he understands a critical concern that the other team members do not see, he is going to be overridden, and that will spell trouble down the line.

Now, obviously this parallel does not completely apply.  I am just trying to take one aspect of it for the point of this illustration: you don’t want a group of people, as intelligent as they are, to make a decision which could be better left in the hands of an expert.

At this point you might be thinking, “what an arrogant jerk!”  You think a so-called “software architect” knows so much better than the average developer?  No, that is not exactly the point.  The point is that there is a real difference in the level of experience and ability among software people, roles and labels aside, and when you use democratic, team-based decision-making methods, what you get is an average of the skill level and experience of the whole team.  It is a mouthful, but read that over a few times until you get my point.  I think it is pretty hard to argue against, but let me give one more illustration.

Let us say now that you are going into a hospital to get heart surgery done.  This kind of procedure is not a one-man operation.  You would typically have a surgeon, an anesthesiologist, several nurses, and other doctors involved.  But let’s say for this instance that we let this surgical team operate like a Scrum team.  Instead of the surgeon or chief medical officer ultimately calling the shots, the team makes each decision as a whole.  Would you be ok with that?  The nurse has the same vote as the surgeon?  Two nurses can override the surgeon’s decision?  I think I would be a little bit alarmed, especially if I sat in on their design session.

I’m not trying to pick on anyone here or devalue anyone.  I am also not trying to destroy the concept of a team.  Teams and teamwork are very important in the development of software.  But I hope you can see the point that Scrum tends to produce a mob-built architecture for a system, and that architecture is only as good as the average of the abilities of the team members.  Although, more often than not, it’s really only as good as the most vocal and assertive member(s) of the team.

## Where Scrum and Scrum-like processes fail

I don’t see how a resolution to this problem fits inside the Scrum framework, and that is a problem.  The idea of a completely self-managing team is fine for making construction-level decisions about building the software, but it has no answer for the overall architecture and general best practices for the development of the application.  As much as we may despise hierarchy, it has a value that is completely missed by Scrum.  You really want your more senior, highly skilled technical people to have more power over decisions and direction than your less skilled ones.  This isn’t mean, it isn’t spiteful or power-mongering; it is common sense.  The problem with the self-managing, everyone-is-equal team is that it levels the field.

## So what is the solution?

Would I offer up a problem without offering up a solution?  There are several ways of dealing with this problem.  It depends on how far you are willing to step outside of Scrum.

### Solution 1: Scrumminess Factor 9

Appoint a small team of technical and business architects that have the responsibility of:

• Overseeing the general architecture of the system
• Creating development best practices and guidelines
• Attending design sessions for other teams
• Stepping in when needed to steer a team back on the right course

This solution works only if you have several Scrum teams on the project, where it would make sense to have a team dedicated to architecture.  This team is also a good one to be creating developer tools.  I have actually been part of a team doing this kind of role, and I think it worked out pretty well.  It doesn’t really violate Scrum because that team is a separate Scrum team with a different kind of backlog.

### Solution 2: Scrumminess Factor 6

Appoint a technical architect to the project.  This person sets technical direction for the technical people on all of the teams, but has no HR duties.  The role has the ultimate authority on any development and architecture decisions for the project.  The architect would be a floating resource who could help teams where needed, while thinking about the bigger architectural picture being created across the teams.

### Solution 3: Scrumminess Factor 3

Appoint technical leads on the teams who are responsible for the architecture and ultimate technical direction of the team they are on.  If you have multiple teams, the technical leads should themselves have a lead, which allows for a unified vision when there is dissent among them.  This has a low Scrumminess Factor because it puts a direct leadership role on the Scrum team, but it solves the mob architecture problem while still keeping the architecture within the team.

One final word here.  If you are still thinking that a central authority is not important to a business, consider this: every company I know of has either a CEO or a president.  I have never seen a company with a Chief Executive Committee.  Sure, there is a board of directors, to whom the CEO is ultimately accountable, but you have one person setting the vision and business direction of the company.