By John Sonmez January 15, 2010

Inheritance is Inherently Evil

Perhaps I'm going against the grain here, but I'll make a bold statement and say “derived classes are a code smell.”

Let me give you a rather contrived example of some evilness:
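The original listing is not preserved here, so what follows is a hedged reconstruction of the kind of code the surrounding text describes (the class names Shape and MySquare, the Setup method, and the nuclear-bomb twist come from the discussion below; the exact code is my guess). The trap: the base class's draw() calls setup() internally, something the derived class cannot know without reading the base class's source.

```java
// Hedged reconstruction of the contrived example -- not the original code.
class Shape {
    public void setup() {
        System.out.println("shape setup");
    }

    public void draw() {
        setup(); // hidden internal detail: draw() re-invokes setup()
        System.out.println("drawing shape");
    }
}

class MySquare extends Shape {
    private boolean bombArmed = false;

    @Override
    public void setup() {
        super.setup();
        if (bombArmed) {
            detonateNuclearBomb(); // second call: boom
        }
        bombArmed = true; // first call: arm the bomb
    }

    private void detonateNuclearBomb() {
        throw new IllegalStateException("BOOM: setup() was called twice");
    }
}

public class InheritanceDemo {
    public static void main(String[] args) {
        MySquare square = new MySquare();
        square.setup(); // a caller reasonably initializes first...
        square.draw();  // ...then draws, and the base class detonates the bomb
    }
}
```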

This is very contrived, I know; it's intentional. I want to make it really clear that what is going to happen is that sometimes the nuclear bomb will blow up. Why does a class named MySquare create nuclear bombs? I don't actually know any good reason, except to serve as an example.

The point is this: class level inheritance violates the Open Closed Principle at minimum, and in my opinion it also violates encapsulation. You can see in this example that sometimes Setup will be called twice, and when that happens the nuclear bomb will be detonated. As contrived as this example is, the point I am trying to make is that as the programmer writing the derived class, I have to actually know the internal implementation of the base class in order to write my derived class correctly. In addition, any change in the base class could break my derived class, even if I write my derived class as perfectly as I can. I simply have no control over what the base class does.

Many people at this point will say “Yeah yeah, but you are using a contrived example where you are writing bad code.”  True, I agree, I am.  (BTW, the example is in Java, but it easily applies to C# as well.)  You could also say that I should only use inheritance if I control both the base class and the derived class in the same package.  Joshua Bloch advocates this in Effective Java, Item 16: Favor composition over inheritance.  The problem, even when you do control both the base class and the derived class, is that whenever you modify the base class you have to retest all the derived classes, because an internal implementation change in the base class may have broken them.  In a class of any real complexity, you cannot simply know that you didn't break anything.

Jon Skeet even blogged about this a while back, although in fairness he did call it a wacky idea.  I am just wondering if it is so wacky after all.

The problem

Let me summarize my main reasons for disliking class level inheritance.

  1. It violates OCP (the Open Closed Principle) for both the base class and the derived class.  Neither class is truly closed to modification: a change in either one can ripple into the other.

  2. It violates encapsulation for the base class, since the derived class depends on, and must know, the implementation details of the base class.

  3. It hides behavior in a way that is not obvious to a person looking at a derived class.  When you are reading the code in a derived class, you have to constantly go back to the base class to understand how the derived class will actually function.  Imagine a base class with 10 methods in it and a derived class that overrides just one of those methods.  When you look at the source code of the derived class, you really have no clue what the class actually does; you have to go up to the base class to find out.  Add more layers of inheritance, and you start to feel a large amount of pain really quickly.  Contrast that with using interfaces to build your class hierarchy: the interface approach requires each method to at least be stated in the implementing class.

  4. It is difficult to test.  Unit testing inheritance hierarchies is tricky at best, and most of the solutions for doing it make you feel dirty and wrong.

  5. It gives a false sense of modeling the real world or domain, when the real world doesn't actually have true inheritance hierarchies.  (I don't want to dive into this rabbit hole right now, but perhaps in a future post I'll cover this topic.)
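Point 3 is easy to see in code.  Here is a minimal sketch (the class names are hypothetical, not from the original example): a derived class that overrides one method out of many tells you almost nothing about what the object actually does.

```java
// Sketch: many methods live in the base class, the derived class shows one.
class ReportBase {
    public String load()     { return "base load"; }
    public String validate() { return "base validate"; }
    public String format()   { return "base format"; }
    public String print()    { return "base print"; }
    // ...imagine six more methods here...
}

class SalesReport extends ReportBase {
    // Reading this class alone, you have no idea what load(), validate(),
    // or print() do -- you must climb up to ReportBase to find out.
    @Override
    public String format() { return "sales format"; }
}
```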

If not class level inheritance, then what?

Interfaces and interface inheritance, combined with composition, will achieve anything you can achieve with class level inheritance.  Again, a bold statement, but prove me wrong.  I will stand by this until someone can show a valid example where this is not the case.

There are many patterns for achieving this, and many names for it: delegation, the decorator pattern, composition.

I'll show a simple way to fix the contrived example I had used above.
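The original fixed-up listing is also missing here; below is a sketch of the composition approach, under the same assumptions as the reconstructed example above (Shape, MySquare, and ShapeRenderer are illustrative names).  MySquare implements an interface and delegates rendering to an object it owns, so no hidden base-class code path can re-enter setup().

```java
// Composition sketch -- not the original listing: behavior is delegated
// to an owned object instead of inherited from a base class.
interface Shape {
    void setup();
    void draw();
}

class ShapeRenderer {
    public void render(String name) {
        System.out.println("rendering " + name);
    }
}

class MySquare implements Shape {
    private final ShapeRenderer renderer = new ShapeRenderer();
    private int setupCount = 0;

    @Override
    public void setup() {
        if (setupCount == 0) { // explicit, visible guard: setup() is idempotent
            setupCount++;
        }
    }

    @Override
    public void draw() {
        setup(); // MySquare controls every call it makes; no hidden re-entry
        renderer.render("square");
    }

    public int getSetupCount() {
        return setupCount;
    }
}
```

Every method MySquare responds to is stated right in MySquare itself, and a change inside ShapeRenderer cannot silently trigger a second setup().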

Slightly more code, but the intent is much clearer and there is no nuclear holocaust.

Update: I just found this.  It appears the creator of Java agrees with me.  So take that you haters! :P

About the author

John Sonmez

John Sonmez is the founder of Simple Programmer and a life coach for software developers. He is the best selling author of the book "Soft Skills: The Software Developer's Life Manual."