# Not Everything Is 80-20, Don’t Blindly Follow Pareto’s Law

There is a useful observation about the world that is often applied to software development called the Pareto principle or Pareto’s law.

This principle suggests that in many situations 80% of the results come from 20% of the causes.

For example, Pareto observed that, in his day, 80% of the land in Italy was owned by 20% of the population.

Since then, many people, including Pareto himself, have applied the same 80-20 observation to many other areas of life, from economics to business and even software development.

But, I’ve found that we rely on Pareto’s law just a little bit too much.

## The problem with generalizations

The biggest problem I have with Pareto’s law is that it is applied way too often to too many situations. In many cases, especially in software development, Pareto’s law becomes a self-fulfilling prophecy—the more you look for Pareto’s law, the more you magically seem to find it.

None of this is to say that Pareto’s law isn’t a real thing—of course it is. If you go and take a look at real hard numbers about distributions of things, you’ll find Pareto’s law all over the place in your statistical data. But, at the same time, if you go looking for the number 13, you’ll find an alarming number of occurrences of that particular number as well.

It is very tempting to force things that don’t quite fit generalizations into those generalizations. How often do we use the words “always” and “never”? How often do we fudge the data just a little bit so that it fits into that nice 80-20 distribution? 82% is close enough to 80, right? And of course 17.5% is close enough to just call it 20, after all.

Not only can you take just about any piece of data and make it fit into Pareto’s law by changing what you are measuring a little bit and fudging the numbers just a little if they are close enough, but you can also take just about any problem domain and easily, unconsciously, find the data points which will fit nicely into Pareto’s law. There is a good chance you are doing this—we all are. I do it myself all the time, but most of the time I am not aware of it.

I’ve found myself spouting off generalizations about data involving Pareto’s law without really having enough evidence to back up what I am saying. It is really easy to assume that some data will fit into Pareto’s law, because deep down inside I know I can make it fit if I have to.

## Seeing the world through Pareto-colored glasses

You might think there is no harm in fudging numbers a bit and finding more and more places to apply Pareto’s law. But looking at the world and blindly assuming all data falls into a distribution where 20 percent of the causes are responsible for 80 percent of the effects is like walking around with blinders on—you are only seeing part of reality, and even the reality you are seeing tends to be a bit distorted.

Again, this doesn’t mean that Pareto’s law isn’t correct a large amount of the time. But when you simply assume that any data that appears to obey this law will continue to, or worse yet, that all data MUST obey this law, you are severely limiting your perspective and restricting your options to those that already fit your preconceived ideas.

Sometimes I wish I had never heard of Pareto’s law, so that I wouldn’t be subject to this bias.

Let me give you a bit of a more concrete example.

Suppose you blindly assume that 80% of your application’s performance problems come from 20% of your code. You might be right, but you might also be wrong. It is entirely feasible that some parts of your code contribute more or less to the performance of the application, and it is pretty likely that some bottlenecks or portions of code heavily impact performance. But, if you go in with the assumption that the ratio is 80-20, you may spend an inordinate amount of time looking for a magical 20% that doesn’t exist, instead of applying a more practical method: finding the actual performance problems and fixing them in order of impact.

The same applies to bugs or testing. If we blindly assume that 20% of the code generates 80% of the bugs, or that 20% of our tests cover 80% of our system, we are drawing pretty large conclusions about how our software works that may or may not be correct. What happens when you fix all the bugs caused by the 20% of code that generates 80% of them? Does a new section of code now magically produce 80% of the bugs? If 20% of your test cases cover 80% of your code, can’t you just write those tests? Why create another 80% that only covers another 20%? And if you did follow that advice, wouldn’t you end up in a situation where 100% of your tests covered only 80% of your code?

The problem is when you start applying and assuming that Pareto’s law applies blindly, you start making all kinds of incorrect assumptions about data and only see what you expect.

## So, was Pareto wrong?

In short, no. He wasn’t wrong. Pareto’s principle is real. In many cases, it is useful to observe that a small number of causes is responsible for a majority of effects.

But, it is not useful to apply this pattern everywhere you can. The observation of the data should guide the conclusion and not the other way around.
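To make that concrete, here is a small sketch of letting the data guide the conclusion: instead of assuming an 80-20 split, we measure what fraction of modules actually accounts for 80% of the bugs. The numbers and the `modulesForShare` helper are made up for illustration.

```java
import java.util.*;

// A sketch of letting the data speak: given bug counts per module,
// find how many modules actually account for a given share of the bugs,
// instead of assuming the split is 80-20.
public class ParetoCheck {

    // Returns the smallest number of modules whose bug counts
    // sum to at least the given share (e.g. 0.80) of all bugs.
    public static int modulesForShare(int[] bugCounts, double share) {
        int[] sorted = bugCounts.clone();
        Arrays.sort(sorted);                           // ascending order
        int total = Arrays.stream(sorted).sum();
        int running = 0;
        int modules = 0;
        for (int i = sorted.length - 1; i >= 0; i--) { // walk from biggest down
            running += sorted[i];
            modules++;
            if (running >= share * total) {
                break;
            }
        }
        return modules;
    }

    public static void main(String[] args) {
        // Hypothetical bug counts for ten modules.
        int[] bugs = { 40, 25, 10, 8, 5, 4, 3, 2, 2, 1 };
        int needed = modulesForShare(bugs, 0.80);
        System.out.println(needed + " of " + bugs.length
                + " modules account for 80% of the bugs");
    }
}
```

With these made-up counts, 4 of 10 modules carry 80% of the bugs; with perfectly even counts, it takes 8 of 10. The data decides which world you are in, not the rule of thumb.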

I find it more useful, especially in software development, to ask the question “is it possible to find a small thing that will have a great effect?”

A good book on this very subject is The 4-Hour Chef. Although I don’t always agree with Tim Ferriss, he is definitely the master of doing more with less and talks frequently about concepts like the minimum effective dose.

In other words, given a particular situation, can I find a small thing I can do, or change I can make, that will give me the biggest bang for my buck?

Sometimes the answer is actually “no.” Sometimes, no matter how hard we try, we just can’t find a minority that influences the majority. Sometimes the bugs are truly evenly distributed throughout the system. Sometimes the contributions of team members are fairly equal. One team member is not always responsible for 80% of the results.

And let’s not forget about synergy, which is basically when 1 + 1 equals 3 or more. Sometimes it is the combination of things that makes the whole, and separating out the parts at all greatly reduces the function.

For example: eggs, sugar, flour and butter can be used to make cake, and you could say that 80% of the “cakiness” comes from 20% of the ingredients. But if you leave one of those ingredients out, you’ll quickly find that 100% of them are necessary. It doesn’t even make sense to try to figure out which ingredient is most important, because each ingredient alone functions much differently than they all do together.

In software development this is especially true. Often in complex systems all kinds of interactions between different components of a system combine together to create a particular performance profile or bug distribution. Hunting for the magical 20% in many cases is as futile as saying eggs are responsible for 80% of the “cakiness” of cake.

# You’re Only a Beginner Once

I was reading an interesting study last week about how willpower seems to grow like a muscle.  In the study, subjects who had successfully stuck to a diet program performed better in many other areas of their lives as well.

The study seemed to indicate that by having success in one area of life requiring willpower, a person would gain benefits in other willpower-related areas of life such as working out, controlling their temper, studying and working.

Basically as people trained their willpower it grew in capacity.

This study got me thinking about how all kinds of seemingly unrelated skills tend to aid us in tasks that don’t directly use them.

## Digging a little deeper

So what do I mean by this?

I mean that if you learn C#, you will be able to learn Java faster.  Essentially you’ll never have to be a beginner at Java.

Now that might not be any big revelation to you.  The two languages are already pretty close in syntax, but I have found this principle extends much further than that.

In the years that I have been doing software development I’ve had the opportunity to work with many developers who started out their careers in totally unrelated fields.

I found that many of those developers who had significant experience in another field, but then switched to software development, very quickly acquired the skills required to become successful in software development.

I found that within about 3 years, many of those developers had the skill and knowledge equivalent of a developer who had been in the field for 10 or more years.

## Everything is the same

I’ve always been surprised by this phenomenon, but never really thought about why this was true.

I’ve worked with many different programming languages, technologies and platforms and I’ve made a pretty good study of other fields like real estate investment and options trading.  I’m constantly finding skills and knowledge I acquired in one area of interest are boosting my abilities in other areas and I finally think I know why.

There really isn’t that much variation in the very basic principles of reality.  Essentially everything fits into a handful of molds at a very fundamental level.

The same kind of basic principles that define the pricing of options contracts on securities like stocks are the same basic principles that define the trade-off between time, scope and quality in software development.

At a deeper even more fundamental level you could say that a person that has learned how to work within and understand the relationships between constraints will find that skill is unconsciously applied to a thousand other areas of life which also have defined and real constraints.

Of course, in software development itself we recognize many of these similarities as patterns.  What most developers don’t realize, though, is that patterns are natural emergences of ways to solve problems that occur organically in some form or other.  Books on patterns just formalize them.

If you’ve ever heard the term polymath (basically a master of multiple skills or areas of study), this tends to explain why polymaths like Leonardo da Vinci and Michelangelo were able to accomplish so much in so many areas.

## Knowing what you don’t know

Another major reason why you are only a beginner once is because once you’ve been a beginner you have a better idea of what you don’t know.

When you start out as a beginner in something one of the biggest hurdles to success is finding out what you need to learn.  (Which is why I often recommend starting off by scoping a subject.)

If you were just starting out in programming with no prior experience in any type of related skill, you wouldn’t know to ask what kind of looping structures are available in C#, because you wouldn’t even know that such a question exists.

On the other hand, if you have experience in just one programming language, you will have a whole array of questions which you can ask about that language, because you can relate it to concepts you already understand.  Often when I teach, I try to do exactly that.  I try to find something that I think you are already familiar with and relate the new concept to a well understood domain.

If you’ve learned quite a few different technologies and programming platforms, when you try to learn a new one, you’ll know what you don’t know and that will make the whole learning process much quicker.

## What this means to you and me

As software developers this is great news, because the world of technology just keeps getting bigger and bigger.

It is very difficult to keep up with all the different technologies that are constantly coming out every year—it is an almost impossible task.

But fortunately we can apply this principle to our craft and realize that skills we acquire in one area of software development will help us to never have to be a beginner in other technologies and development platforms.

The key to unlocking this potential is twofold.

1. Push through the surface to see the similarities.  Often starting out with a new technology everything seems new, but I’ve found that if you don’t give up, and you push a little further you end up in familiar territory.
2. Constantly make the shift between technologies in order to maximize the benefit.  I’ve also found that shifting between technologies and even development languages tends to help us to unlock the ability to see things at a more fundamental level.  Think about the inductive reasoning where you might start out with 1 instance of a thing, then 2, then 3, then you generalize to n.

This is why it is so important to learn how to learn.  The more you learn, the easier it becomes to learn, and the more synergistic the skills you acquire become.

So if you’ve been afraid to dive into a new technology because you are either afraid that you won’t be able to learn it quickly enough or that it will be a waste of time because it is unrelated to technologies you actually use, don’t be.  Instead try to remember that even though something new might be intimidating at first, you’ll most likely have a head start, because you are only a beginner once.

# Getting Up to BAT: Designing an Automation Framework

Now that you’ve gotten an automation lead and decided on the browser automation tool you are going to use, the next step is to design an actual automation framework.

This is one of the most critical components of the overall success of your automation strategy, so you will want to make sure you invest properly in this area.

I have seen a large number of automation projects go forward, and each time the critical component determining their success or failure was having a good automation framework.

Solid design and excellent technical resources are absolutely critical to success in this area!

## What is an automation framework?

An automation framework is essentially an API that all of your BATs (Blackbox Automated Tests) are written against.

This API can be as simple as the API that is exposed by your browser automation tool (WatiN, Selenium, etc.), but I would highly recommend building your own layer on top of the browser automation tool’s API that will act as a DSL (Domain Specific Language) for automating your application.

Let me break this down a bit further by using an example.

Suppose we want to make a cup of coffee.  There are several “API” levels we can interact with to get it done.

1. We can go with a very low-level API where we take whole coffee beans and grind them down.  Then we take some water and get it hot.  We put the ground beans in a filter, put the filter over a cup and pour the hot water through it.
2. We can go with a higher level API where we use a traditional coffee maker.  In this case we load the coffee maker with a filter, ground coffee beans and water and push a “brew” button.  We could also set it to start at a certain time.
3. We can go with a very high level API where we use a Keurig machine or similar device.  In this case we only make sure the machine has water in it, and we just insert a little pod and press brew.  We can make different kinds of coffee, cider or hot cocoa just by changing what pod we use.

Using the API provided by the browser driver is like making the coffee by hand.  It’s going to take a large amount of effort each time you do it.

We want our automation framework to be more like the Keurig machine.  We want to be able to compartmentalize our tests in little pods that are small and don’t require many hooks into the automation framework.

To rehash this one more time, basically our automation framework will be a framework we build on top of the browser driver framework, which is designed to make it easy to write tests which automate our application.

## What makes a good automation framework design?

The true measure of an automation framework is the size of the tests that are written against it. The fewer lines of code in each test, the better the automation framework captures the domain of the application being automated.

In my earlier post about an example of an automation framework, I talked a bit about the strategy I used in a real implementation of an automation framework.

Here is a diagram showing the different layers.  You can see the framework layer is in green here.

You can also see in this diagram that on the right hand side, I have screens, workflows and navigation.

This is one of the common design patterns you can use to build your automation framework, since it closely models that of most web applications.

A good way to design your framework is to use a pattern like this and create classes for each page in your application, create classes which might represent workflows in your application, and create some classes for navigating around the application.

The basic idea you are shooting for with your automation framework is to make it so the tests do not have to know anything about the browser.  Only your automation framework itself should be dealing with the DOM and HTML and CSS.  Your tests should be dealing with concepts which any user would be familiar with.  That is why I commonly use pages and workflows.

Let me give you an example.

Let’s say we ignored the advice of creating an automation framework and decided to program our tests directly against the browser driver layer.  In this case, let’s say that we want to automate a login page.  Our test might look something like this pseudo-code.

```
browser.goto("http://mywebsite/login.aspx");
browser.textField("txtUsername").typeText("joe");
```

This is not very readable.  It is not something an average tester will be able to pick up and start writing, and it is extremely fragile.  If you have 500 tests like this and the id changes for one of the elements on the page, many tests will break.

Now, contrast it to this pseudo-code:

```
Pages.Login.Goto();
```

Where did all the code go?  You don’t need it!  Actually it went into the framework, where it can be reused.  Now your test code is simple, can be understood by just about anyone, and won’t break with changes to the UI.  (You will just have to change the framework method instead of 500 tests.)
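To give a rough idea of where the code went, here is a sketch of what that framework layer might look like. This is not from a real framework: the `Pages`, `LoginPage`, and `Browser` names are hypothetical, and the `Browser` class is just a stand-in for whatever browser driver you actually use.

```java
// A sketch of the framework layer that a call like Pages.Login.Goto()
// might sit on top of. The Browser stub stands in for a real browser
// driver (WatiN, Selenium, etc.); all names here are hypothetical.
public class Pages {

    // Minimal stand-in for the browser automation tool's API.
    public static class Browser {
        public String currentUrl = "";
        public void goTo(String url) { currentUrl = url; }
    }

    public static final Browser browser = new Browser();

    public static final LoginPage Login = new LoginPage();

    // All knowledge of URLs, ids, and the DOM lives here, not in the
    // tests. If the login page changes, only this class changes.
    public static class LoginPage {
        public void Goto() {
            browser.goTo("http://mywebsite/login.aspx");
        }
    }

    public static void main(String[] args) {
        // The test only says what it means, not how it is done.
        Pages.Login.Goto();
        System.out.println(browser.currentUrl);
    }
}
```

The test stays a single call; when the URL or the page structure changes, only `LoginPage` has to change.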

If you want some more detailed examples take a look at my Boise Code Camp slides on the subject here.

Let me offer up some general guidelines for creating a good design for an automation framework.

1. NEVER require the tests to declare variables.
2. NEVER require the tests to use the new keyword or create new objects.
3. NEVER require the tests to manage state on their own.
4. ALWAYS reduce the number of parameters for API calls when possible.
5. ALWAYS use default values instead of requiring a parameter when possible.
6. PREFER to make the API easier to use over making the internals of the API less complex.
7. PREFER using enumerations and constants to requiring the test to pass in primitive types.  (Make the input bounded when possible.)
8. NEVER expose the browser or DOM to the tests or let them manipulate it directly.

You can see that these guidelines will almost force your API to be entirely static methods!  Don’t freak out.  It is perfectly fine.  There is nothing wrong with using static methods in this case.

Stop… take a breath, think about why you think static methods are bad, because I probably agree with you.

But, in this case we are HIGHLY valuing making the tests simple and easy to write over anything else, and because of this emphasis, static methods are going to be the way to go.
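As a sketch of where those guidelines lead, here is a hypothetical `Workflows` API: entirely static methods, an enum instead of a primitive parameter, and a default overload so most tests pass nothing at all. Every name here is made up for illustration.

```java
// A sketch of a test-facing API that follows the guidelines above:
// static methods, bounded enum input instead of raw strings, and a
// default overload. All names are hypothetical.
public class Workflows {

    public enum User { DEFAULT, ADMIN, READ_ONLY }

    private static User lastLogin; // state lives in the framework, not the test

    // ALWAYS prefer a default value over a required parameter...
    public static void Login() {
        Login(User.DEFAULT);
    }

    // ...and a bounded enum over a primitive when a parameter is needed.
    public static void Login(User user) {
        lastLogin = user;
        // A real implementation would drive the browser here.
    }

    public static User LastLogin() {
        return lastLogin;
    }

    public static void main(String[] args) {
        Workflows.Login(); // reads like English, nothing to construct
        System.out.println(Workflows.LastLogin());
    }
}
```

Notice the test never declares a variable, never calls new, and never manages state; it just states what it wants done.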

## Some advice on getting started

Start by writing out, in plain English, the tests you want to automate.  Then take those tests and make them into code, preserving as much of the English as possible.  Then work down from there, implementing the methods needed to make those tests work.  Only add things as you need them.  Do not overdesign!

In my example above, we might have started with something like this:

1. Go to the login page.
2. Login with the default user.

The actual code looks just like that.  It should always be a 1 to 1 mapping.

This is where having someone that has some experience doing this is going to come in handy.

Just keep in mind the whole time your 2 big goals:

1. Easy to write tests
2. Tests are concise

# Back to Basics: Understanding IoC Part 2 (Creation)

In my last back to basics post we talked about what inversion of control (IoC) is in regards to inverting control of interfaces.

We looked at how we can benefit from changing the control of the interface from the service to the client of that service.

This time we are going to tackle the more common form of IoC that is quite popular these days, and I’m going to show you why dependency injection is only one way to invert the control of the creation of objects in our code.

## What is creation inversion?

In normal everyday code we usually create objects by doing something like:

```
MyClass myObject = new MyClass();
```

We usually do this creation of objects inside of the class that is going to use or needs that object.

If we used interface inversion, we would probably end up with a declaration like:

```
IMyInterface myImplementation = new MyImplementation();
```

You can see that even though the interface may have been inverted we are still controlling the creation of the object from inside the class that uses the object.

Let’s withhold our judgment of whether this is a bad thing or not until we fully understand the idea of creation inversion and the problem it is trying to solve.

So we can see clearly that the normal control is that objects that are used by a class are created by the class using the object.

If we wanted to invert this control, we would say that objects that are used by a class are created outside of the class using the object.

There, you understand IoC in regards to creation.  If you create the object you are going to use inside your class, you are not inverting control.  If you create the object somewhere else, you are inverting control.

Why would you want to invert control of the creation of your objects?

If you are familiar with design patterns, the factory pattern should have just popped into your head when you read that question.  If you are not familiar with design patterns, let me suggest either reading the original Gang of Four Design Patterns book, or the highly recommended Head First Design Patterns.

I mention the factory pattern because you might want to implement inversion of control for the same types of reasons why you would want to use the factory pattern.

Shh… I’ll tell you a little secret if you promise not to tell the cool IoC container / DI kids.  The factory pattern is inversion of control!

Let’s use factory pattern to tell us why we would want to invert control.  The description of factory pattern is:

Centralize creation of an object of a specific type choosing one of several implementations.

From that description we can deduce that we might want to use the factory pattern or some other form of inversion of control anytime we have an object or interface that has several different implementations and we want to put the logic of choosing which implementation to use in a single location.

Let’s use the classic example of having a skinnable UI.

If we have a UI framework that includes classes like Button, DropDownList, TextBox, etc., and we want to make it so that we can have differently skinned implementations of each UI control, we don’t want to put a bunch of code like this into every screen in our application:

```
Button button;
switch (UserSettings.UserSkinType)
{
    case UserSkinTypes.Normal:
        button = new Button();
        break;
    case UserSkinTypes.Fancy:
        button = new FancyButton();
        break;
    case UserSkinTypes.Gothic:
        button = new VampireButton();
        break;
}
```

If we can somehow move the creation of the right kind of object outside of our classes and put it in one place, then we can eliminate all this duplication.  This prize comes at a price, though.  The price is control.  For convenience’s sake we must rip control of the creation of buttons from our UI screens and give it to a central authority who doles out buttons at its whim.

We need a mediator that says to our UI screen classes, “You want a button?  I got a button for ya, and you don’t need to know what kind, cause it’s none of your business.  All you need to know is it’ll be a button.”

I’ll harp on this more later, but I want to bring up the point here before we move on.  Why would you use inversion of control when you don’t have a problem of needing to select from more than one implementation of an interface or class?

## How can I gets me some of this IoC goodness?

So let’s talk about how to implement inversion of control.

I’m not going to cover every conceivable way that you could invert control in regards to creation, but I will focus on a few common ways to do so.

### Factory pattern

Let’s start here since we already mentioned it.  In our case above we could easily create a button factory that would be able to hand us the correct kind of button.  We would put all the logic for determining which button to use inside of the factory and let it hand us the correct kind of button.  Our code from the above example would be simplified to:

```
Button button = ButtonFactory.CreateButton();
```
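Here is one possible sketch of that factory. The skin-type setting is stubbed with a static field for illustration; in a real application it would come from user settings, and the stand-in control classes mirror the earlier example.

```java
// A sketch of the factory the text describes: the switch on skin type
// moves out of every screen and into this one place. The skin source
// is a hypothetical stub.
public class ButtonFactory {

    public enum UserSkinTypes { Normal, Fancy, Gothic }

    // Stand-ins for the UI control classes from the example.
    public static class Button { }
    public static class FancyButton extends Button { }
    public static class VampireButton extends Button { }

    // In a real application this would come from user settings.
    public static UserSkinTypes userSkinType = UserSkinTypes.Normal;

    public static Button CreateButton() {
        switch (userSkinType) {
            case Fancy:
                return new FancyButton();
            case Gothic:
                return new VampireButton();
            default:
                return new Button();
        }
    }

    public static void main(String[] args) {
        Button button = ButtonFactory.CreateButton();
        System.out.println(button.getClass().getSimpleName());
    }
}
```

Every screen now asks the factory for a button; the skin-choosing logic exists exactly once.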

### Service locator

A service locator is very much like a factory.  You can even use a service locator to get you the right kind of factory that can make your object.  The difference is a factory creates a specific kind of object and a service locator can create different kinds of objects.

Using a service locator pattern, we might rewrite our example as:

```
Button button = ServiceLocator.Create(Button.class);
```

(We could also use generics here and do Create<Button>())
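For illustration, a minimal service locator might be sketched like this: a registry that maps types to the logic that creates them. The registration API and all the names here are assumptions, not any standard library.

```java
import java.util.*;
import java.util.function.Supplier;

// A sketch of a generic service locator: unlike the single-purpose
// factory, one locator can hand out many different kinds of objects.
public class ServiceLocator {

    private static final Map<Class<?>, Supplier<?>> registry = new HashMap<>();

    // Wire up which implementation to use for each type, in one place.
    public static <T> void Register(Class<T> type, Supplier<? extends T> supplier) {
        registry.put(type, supplier);
    }

    public static <T> T Create(Class<T> type) {
        Supplier<?> supplier = registry.get(type);
        if (supplier == null) {
            throw new IllegalArgumentException("No implementation registered for " + type);
        }
        return type.cast(supplier.get());
    }

    // Stand-ins for the UI classes from the example.
    public static class Button { }
    public static class FancyButton extends Button { }

    public static void main(String[] args) {
        ServiceLocator.Register(Button.class, FancyButton::new);
        Button button = ServiceLocator.Create(Button.class);
        System.out.println(button.getClass().getSimpleName());
    }
}
```

The caller asks for a Button and neither knows nor cares which skin it gets; that decision was made once, at registration time.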

### Dependency injection

(Notice I’m not saying IoC container based dependency injection.  I’m talking about dependency injection in general here.  You can use an IoC container to do it, but it is not required.)

This is the one most people are most familiar with.  We can use a special class that maps interfaces to their implementations, and then injects those implementations into our class via one of several ways.

Some of the common ways to inject the implementation into the class are:

• Constructor injection – pass the dependency into your class via the constructor.
• Setter injection – pass the dependency into your class by setting it on your class.
• Interface injection – implement an interface that allows another class to call your class to give it the dependencies.

Using constructor based dependency injection we could rewrite our example like:

```
Button button = GetTheRightDangButton();
OurScreen ourScreen = new OurScreen(button);
```

Notice that in this example we are showing how to create our class from the outside.  We don’t have to do anything special inside our class besides provide a constructor that takes our dependency.

## But dependency injection looks silly in this example

Right you are!

What if we have more than one button we are going to use in our screen?  Seems pretty silly to put a single button in the constructor.

If you already know about IoC and dependency injection, there are two important things you should be taking away from this look back to the basics of creational IoC patterns.

1. IoC doesn’t even require interfaces, it just requires that an object be created outside of the class that uses it.
2. IoC does not always mean IoC container and dependency injection.  Many times the best solution is to directly instantiate an object or use a factory or service locator.  (Especially when you are using multiple instances of a class.)

Before embarking on an IoC journey, it is important to understand what problem you are trying to solve by introducing IoC and to understand the differences between using IoC to invert interfaces, using IoC to invert flow control, and using IoC to invert creation control.

In almost all cases the correct application of creational IoC involves solving one of two possible problems.

1. I have multiple implementations of an interface or class and I need to be able to have the logic to choose which one all in one place.
2. I have only one implementation of an interface or a class, but I need to completely decouple the interface from the implementation for a really good reason.  (A really good reason is something like: “I am writing the code that will use the implementation of this interface, but another company is going to provide the implementation in another library.”)


# Back to Basics: Understanding IoC

In my last back to basics post, we talked about dependency inversion and how it is the underlying principle that Inversion of Control or IoC is based upon.

We also talked a little about IoC and the three main forms of control that can be inverted; interface, flow, and creation.

In this post I want to dive a little deeper into IoC as it relates to interfaces and creation; we’ll ignore flow inversion for now, because it is often used in a way that is completely unrelated to the other two.

We’ll focus more on creation since we already covered interface inversion a bit and creation inversion is the “hot thing” right now.

## Interface inversion

Why would we want to invert interfaces?

We covered much of the reasoning in my post on dependency inversion, but there are some important points that specifically relate to interfaces in programming that I would like to address here.

First we must understand what non-inverted interfaces are.

Let’s take an example.  Suppose I have a Kangaroo class that has several methods, among them is Punch. Normally, Kangaroo defines the interface that has to be used if you want to use the Kangaroo class.

By default, the constructor for the class and the method signatures all make up an interface that the user of the Kangaroo class must adhere to in order to use it.  The module that contains the Kangaroo class usually would also include the interface, if one is defined.  In this case the interface would be something like IKangaroo.

Why do I say this is normal and not inverted?  Because it makes sense that a class would define what it does and how to use it, not some other entity.

An inversion would be if some other class or module told Kangaroo what its interface needed to be.  Kangaroo would stop being an independent definer of a service it has to offer and instead become a provider of a service someone else wants.

From our narrow viewpoint of the Kangaroo class we don’t know what kind of interface to provide, so we have to assume it is something like IKangaroo.  The way we look at the Kangaroo class can completely change though, if we consider the BoxingMatch class.

If we create a BoxingMatch class, we really want that class to define the interface that all boxers must adhere to.  We don’t want the BoxingMatch class to have to know how to make multiple classes with different interfaces box.  So within the module where the BoxingMatch class is defined, we define an interface called IBoxer.  Within the BoxingMatch class itself we use IBoxer instead of the specific classes that we want to use in our boxing match.

Now we have given Kangaroo a suitable interface to implement.  IKangaroo hardly makes sense, but IBoxer gives our Kangaroo a purpose.  Our little Kangaroo class is providing a service.  He can box!

In this example you can see that we have inverted the control of the interface.  Instead of letting the provider of the interface define that interface, we have made the user of the interface define it.

The value we gained here is that we have given control to the higher level module and reduced the complexity of its logic by preventing it from having to deal with every different interface defined by classes that can box.
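A minimal sketch of this inversion in code might look like the following. Everything except the Kangaroo, BoxingMatch, and IBoxer names from the text is illustrative.

```java
// A sketch of interface inversion: the BoxingMatch module defines the
// IBoxer interface it needs, and Kangaroo implements it. All method
// names and strings here are made up for illustration.
public class Boxing {

    // Defined by the *user* of the service (the BoxingMatch module),
    // not by the classes that happen to know how to box.
    public interface IBoxer {
        String punch();
    }

    // Kangaroo provides a service someone else specified.
    public static class Kangaroo implements IBoxer {
        public String punch() {
            return "kangaroo punch";
        }
    }

    public static class BoxingMatch {
        private final IBoxer first;
        private final IBoxer second;

        public BoxingMatch(IBoxer first, IBoxer second) {
            this.first = first;
            this.second = second;
        }

        // BoxingMatch never deals with Kangaroo directly,
        // only with the interface it defined.
        public String round() {
            return first.punch() + " / " + second.punch();
        }
    }

    public static void main(String[] args) {
        BoxingMatch match = new BoxingMatch(new Kangaroo(), new Kangaroo());
        System.out.println(match.round());
    }
}
```

BoxingMatch can run a match between kangaroos, heavyweights, or robots without changing a line, because it owns the interface rather than adapting to everyone else’s.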

## Up next: creation inversion

I’m going to end this post here, since creation inversion is going to be a longer topic, and I am on vacation right now.

In my next post we’ll cover creation inversion and talk about how it relates to Dependency Injection or DI.

# Book Review: Enterprise Integration Patterns

So I’ve had Enterprise Integration Patterns sitting on my bookshelf for quite a while now.  I had skimmed it a few times, but never really gave it a read.

It’s a hefty book that you could definitely use to cause some major kidney trauma to an unsuspecting DBA if you sneak up on him from behind and jab the pointy end of the book into his unprotected backside.

I finally got around to reading this because it is one of my last remaining analog books.  It is part of my quest to cleanse my life of all possessions that are not digital or are not monitors.

This book is really about messaging.  Don’t let the title fool you.  Definitely consider picking up this book if you want to use a messaging platform like BizTalk, MSMQ, or JMS to integrate several applications together.

The good:

• This book is extremely detailed about each messaging pattern, when to use it, and how to implement it.  If you are seriously going to consider implementing a messaging solution, you need this book.  Honestly, I have done some messaging without it and now that I have read it, I feel like I really missed the whole point before.
• Multi-language / technology.  This book is generalized enough to not push you in a language or technology decision, but has specific examples in Java and C#.
• Simple to understand.  I was rushing through this book because I wanted to get through it and I found that I was picking up pretty much every concept being thrown.
• Excellent reference.  I can see using this book in the future to go back and solve some sort of problem dealing with messaging.
• Broken down into perfect size pieces.  If you read this book, you should have all the tools you need to solve any kind of complex messaging scenario.  By thinking of messaging in terms of the patterns or blocks in this book, very complex problems become much simpler.

The bad:

• It is freaking long.  Seriously.  This is a long book.  It has some diagrams and some code, but it is long.  Get ready for an adventure.
• It’s a little dry, probably because it is so long.  Some of the code examples are a bit repetitive, and no one ever wants to see XML SOAP bindings on the pages of a book.

What I learned:

I have to say that I really did learn a large amount of information from this book.  I really feel like I got a good understanding of how to apply messaging patterns to various sorts of problems.

I feel like this book gave me a really big toolbox of all the possible tools that I would need to solve any messaging pattern.

I also learned just how easy it is to use MSMQ and JMS and throw messages on a pipe.  It’s really not that bad.

After having used BizTalk a while ago and not really understanding what it was or what it was trying to do besides allow you to change file formats between a bunch of different clients, I feel like this book definitely opened up my eyes to the true value of a solution like that.  If I had a BizTalk project now, I am sure I would be much more effective after reading this book.

Overall, I would definitely recommend that every developer that is working with messaging read this book.  Even if you are not, I would still recommend reading it so that you can have your eyes opened up to how messaging can solve many of the problems we try to solve in create-ftp-batch-cron-jobish ways.