I spent a good chunk of last night troubleshooting a “works on my machine” problem.
Pain is often how we learn, so perhaps this pain was good: it reminded me of a concept that is really important in software development infrastructure.
I have three golden rules of development environments and deployment:
- I should be able to run a software build locally that is the exact same build that will be run on the continuous integration server.
- Only bits that were built by the continuous integration server are deployed and the same bits are deployed to each environment.
- Differences in configuration between environments exist in only one place, on that machine itself.
You don’t have to follow these rules, but if you don’t, most likely you will experience some pain at some point.
If you are building a new deployment system, or you are revamping an old one, keeping these three rules in mind can save you a large amount of headache down the road.
I think it is worth talking about each rule and why I think it is important.
Rule 1: Local Build = Build Server Build
If you want your continuous integration environment to be successful, it needs to appropriately report failures and successes.
If your build server reports false failures, no one will believe the build server, and you will be troubleshooting build-configuration problems instead of actual software problems. Troubleshooting these kinds of problems provides absolutely no business value. It is just a time sink.
If your build server reports false successes, you will only discover the issue when you deploy the code to another environment, wasting time deploying broken code and stretching out the feedback loop for fixing it.
As a developer, I should be able to run the exact same command the build server will run when I check in my code. I would even recommend setting up a local version of the continuous integration server your company is using.
If you can be confident that a build which succeeds locally will not fail on the build server or during deployment, you will never waste time troubleshooting a false build failure. (The deployment could still fail, and the application could still behave differently in different environments, but at least you will know you are building the exact same bits using the exact same process.)
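One simple way to honor this rule is to put the entire build behind a single entry-point script that developers and the CI server invoke with the exact same command. This is only a sketch; the script name `build.py` and the placeholder steps are hypothetical examples, not from any particular project.

```python
#!/usr/bin/env python3
"""Single build entry point: developers and the CI server both run
`python build.py`, so a local success means the same process runs on CI."""
import subprocess
import sys

# Hypothetical build steps; replace with your project's real commands,
# e.g. ["make", "all"] and ["pytest"].
STEPS = [
    [sys.executable, "-c", "print('compiling...')"],
    [sys.executable, "-c", "print('running tests...')"],
]


def run_build() -> int:
    """Run every step in order; stop and report on the first failure."""
    for step in STEPS:
        result = subprocess.run(step)
        if result.returncode != 0:
            print("build failed at step:", step, file=sys.stderr)
            return result.returncode
    print("build succeeded")
    return 0
```

The point is not the Python itself but the discipline: the build server runs `run_build()` and nothing else, so there is no CI-only logic for a developer's machine to diverge from.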
Rule 2: Build Server Bits = Only Deployed Bits
Build it once and deploy those bits everywhere. Why?
Because it is a waste of time to build what should be the exact same bits more than once.
Because the only way to be sure the exact same code gets deployed to each environment (dev, test, staging, production, etc.), is to make sure that the exact same bits are deployed in each environment.
The exact same SVN revision number is not good enough, because an application is more than source code. You can build the exact same source code on two different machines and get a totally different application. No, I’m not loony. This is a true statement. Different libraries on a machine can produce different binaries. A different compiler can produce different binaries.
Don’t take a risk. If you want to be sure that code you tested in QA will work in production exactly the same way, make sure it is the exact same code.
This means you can’t package your configuration with the deployment package. Yes, I know you have always done it that way. Yes, I know it is painful to figure out another way, but never having to question the bits of a deployment again will be worth the time it saves you.
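A cheap way to enforce "same bits everywhere" is to fingerprint the artifact once at build time and verify that fingerprint before every deployment. The sketch below uses SHA-256 and a temporary file standing in for a real build artifact; the artifact name `app.tar.gz` is a hypothetical example.

```python
"""Sketch: the CI server records a digest of the artifact once,
and every environment verifies it before deploying those bits."""
import hashlib
import tempfile
from pathlib import Path


def sha256_of(path: Path) -> str:
    """Fingerprint an artifact so every environment can confirm it is
    byte-for-byte the build the CI server produced."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify_before_deploy(path: Path, expected: str) -> None:
    """Refuse to deploy anything that is not the recorded build."""
    actual = sha256_of(path)
    if actual != expected:
        raise RuntimeError(f"artifact mismatch: {actual} != {expected}")


# Demo with a temp file standing in for the real build output.
with tempfile.TemporaryDirectory() as build_dir:
    artifact = Path(build_dir) / "app.tar.gz"  # hypothetical artifact name
    artifact.write_bytes(b"pretend these are the built bits")
    recorded_digest = sha256_of(artifact)  # CI records this at build time
    verify_before_deploy(artifact, recorded_digest)  # each deploy checks it
```

If dev, test, staging, and production all verify against the same recorded digest, the question “is this really the code QA tested?” answers itself.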
Rule 3: Environment Configuration Resides in Environment
Obeying this rule will save you a huge amount of grief.
Think about it.
If the only thing different in each environment is in one place, in one file in that environment, how easy will it be to tell what is different?
I know there are a lot of fancy schemes for adding configuration to the deployment package based on what environment the deployment is going to. I have written at least 3 of those systems myself.
But they always fail somewhere down the line, and you end up spending hours tracing through them to figure out what went wrong, asking yourself, “how the heck did this work again?”
By making the configuration for an environment live in the environment and in one place, you take the responsibility of managing the configuration away from the software build process and put it in one known place.