
Which Cross Platform Mobile Development Platform Should You Choose?

I’m in the unique position of having developed with almost all of the major cross platform mobile development solutions.

I’ve published Pluralsight courses on several of these platforms.

After working with all these different solutions and investigating others, I thought I would publish my thoughts on each of these choices and the differences between them.

I’m mostly going to focus on Android and iOS because even though there are other competitors, those are the only major players that exist at present.  Everyone else has a relatively tiny market share.

Native development

The most obvious way to build mobile applications is to use the native tools that come with the platform.

For Android, it is Java and either Eclipse or the new Android Studio, along with the Android SDK.

For iOS, it is Objective-C and Xcode.


For Windows Phone it would be C# and Visual Studio.

I built my first mobile applications for iOS and Android natively.  I started out with an Android version of my application and then ported over most of the code and design to iOS.

This was a fairly difficult process, and I wasn’t able to share any code.  I had to learn both platforms along with their SDKs, and I had to learn Objective-C from scratch, since I knew nothing about Objective-C or Mac development before I started writing my first iOS application.

In general, I wouldn’t recommend this approach because you are going to waste a large amount of time maintaining two completely separate code bases and you really don’t gain much by using the native tools.

However, I would recommend that anyone seriously thinking about cross platform mobile development at least develop a simple app natively on both Android and iOS.  Doing so will make it easier for you to understand what is going on under the abstraction layer that a cross platform mobile development solution provides, and it will help you see the value, or lack of value, in a cross platform solution.

Xamarin Tools

The Xamarin tools basically allow you to develop an Android or iOS application with C# and share a good amount of the code.


When you write an application using the Xamarin tools you are basically using an abstraction on top of the real SDKs for iOS and Android.

What this means is that you will end up with a fully native application with a fully native user interface on each platform.

This also means that you will be limited to some degree in the amount of code you can share between the platforms.

Typically when I develop an application using the Xamarin tools, I will build a core of the application that will be shared code and have the iOS, Android, and even Windows Phone versions of the application depend on this core library.

With this approach you may be able to reuse somewhere around 60-70% of your code without even trying very hard.

But you can take things further and either develop your own abstractions, using an architecture like MVC or MVVM, so that the only code you are not reusing is the actual views themselves, or use a framework that does this for you, like MvvmCross.  This approach is, of course, a little more difficult to get started with, but it can provide a much higher percentage of code reuse, perhaps around 80-90%.
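To make the idea concrete, here is a minimal sketch of that MVVM-style split, written in JavaScript for brevity (a real Xamarin app would use C#, and all the names here are invented for illustration):

```javascript
// The shared "core": a view model that holds the behavior and state
// and contains no platform-specific code at all.
function CounterViewModel() {
  this.count = 0;
}
CounterViewModel.prototype.increment = function () {
  this.count++;
};
CounterViewModel.prototype.label = function () {
  return "Clicked " + this.count + " times";
};

// The only non-shared code: thin platform views that bind the same
// view model to each platform's native UI.
function renderAndroidView(vm) {
  return "[Android Button] " + vm.label();
}
function renderIosView(vm) {
  return "[iOS UIButton] " + vm.label();
}

var vm = new CounterViewModel();
vm.increment();
```

Everything above the view functions is reusable on every platform, which is where the 80-90% figure comes from.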

As for the tooling, the Xamarin tools are awesome!

Xamarin has its own IDE called Xamarin Studio.  This IDE is cross platform and is very well designed and easy to use.

The Xamarin tools also have a plugin for Visual Studio which allows you to develop your application in Visual Studio.  You can even develop an iOS application from Visual Studio, but you still need a Mac to perform the build.  (The tool uses a remote call to the Mac to perform the build.)

Xamarin also recently introduced a component store which makes it easy to find reusable components directly from Xamarin Studio and plug them into your application.



PhoneGap

PhoneGap is probably the next most well known cross platform mobile development solution, but it is also somewhat confusing.


PhoneGap is basically a set of JavaScript APIs that allow you to access the native capabilities of your device.  It also is a wrapper that lets you build a web application that is locally installed to the device.

When you build an application using PhoneGap, you are essentially building a mobile web site using HTML5 and JavaScript, just like you would build any other web site today, but you are putting the HTML and JavaScript on the phone.

PhoneGap applications run on the local browser on the phone and have some hooks into the native libraries which are exposed to you through their JavaScript APIs.

What does all this mean?

Well, it means that if you are developing a PhoneGap application, you can develop it just like a cross platform mobile web site.  You can use any mobile framework you like, for example Sencha Touch or jQuery Mobile.

You will for the most part be able to share just about all of your code since your application will be HTML and JavaScript, but you will not be writing a native application.

Because your PhoneGap application will be running in a browser it will be more like a web application than a native application.  The user interface you design will not use the native controls and will be subject to the limits and speed of a web browser.

This also means that you might have to write some platform specific code to make up for differences between the browsers, but you can basically assume that you will be able to share most of the code.
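As a sketch of what that platform specific code might look like, here is a tiny feature-detection shim (the storage fallback is a hypothetical example, not part of PhoneGap itself):

```javascript
// If the platform's browser provides Web Storage, use it; otherwise
// fall back to an in-memory object that exposes the same interface.
function getStorage(global) {
  if (global.localStorage) {
    return global.localStorage;
  }
  var memory = {};
  return {
    getItem: function (key) {
      return key in memory ? memory[key] : null;
    },
    setItem: function (key, value) {
      memory[key] = String(value);
    }
  };
}

// The rest of the app codes against one interface on every platform.
var storage = getStorage(typeof window !== "undefined" ? window : {});
storage.setItem("lastScreen", "home");
```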

The tooling for PhoneGap depends entirely on the environment you want to use to build the app.  You can develop in whatever environment you like and, in most cases, use a plugin for that IDE.  There are quite a few manual steps, though, so getting set up is not that easy.

One big benefit of PhoneGap, though, is PhoneGap Build, which allows you to upload your project from whatever environment you created it in and have it built automatically for the other platforms.


Appcelerator Titanium

I want to mention this platform next because many developers confuse this with PhoneGap.


Appcelerator Titanium is completely different from PhoneGap.  The only similarity is the language used: JavaScript.  Everything else is completely different.

With Appcelerator Titanium, you build your application against a custom cross platform API.  This is different from PhoneGap and Xamarin, because with Xamarin you use a wrapper around the real native SDKs, and with PhoneGap you use whatever you want to build an HTML5 web application.

With Titanium you actually write all your code against their SDK which includes UI components as well.  So this means that when you write a Titanium application you actually can write a cross platform user interface.

Appcelerator Titanium apps are actually compiled down to completely native applications that use the real native controls for the platform.

For example, in Titanium you can programmatically declare a button and specify its layout and some attributes about that button.  When you compile your application, the button will appear as a real native Android button on Android and a real native iOS button on iOS.
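In code, declaring such a button looks roughly like this (from memory of the Ti.UI API, so treat the exact property names as assumptions; the stub at the top only stands in for the real Ti global so the sketch is self-contained):

```javascript
// Stub for Titanium's Ti global, for illustration only -- in a real
// Titanium app, Ti.UI.createButton returns a proxy to a native control.
var Ti = {
  UI: {
    createButton: function (props) {
      return props;
    }
  }
};

// Declare the button and its layout once; at build time Titanium maps
// it to a real native Android button or a real native iOS button.
var button = Ti.UI.createButton({
  title: "Press Me!",
  top: 20,
  width: 200,
  height: 44
});
```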

Does this mean that you can build a completely cross platform application including the UI with 100% code reuse and do it in JavaScript?

Maybe, but highly unlikely.  Many of the UI elements and interaction paradigms are cross platform, but parts are not.  For example, in iOS you have the idea of a Navigation Controller which keeps track of the history of what screens you navigated through and lets you go back; Android doesn’t have such a control.  But, Titanium does have support for platform specific controls, it just means that you have to make some of your code conditional based on the platform.

All this is to say that you can program to the lowest common denominator and get a fully cross platform application with close to 100% code reuse, but even though you’ll have native controls, the result might not look that great.

The reality is, if you are using Titanium, you’ll probably want to tailor some parts of the application to the specific platforms.

Titanium actually has some really good tooling.  Titanium Studio, its IDE, is actually pretty nice and does a decent job of auto-completing JavaScript code, especially the Titanium APIs.  The build process from the IDE is pretty simple and even lets you build a web application out of the same codebase.  There is also a marketplace with components you can purchase and use in your applications.

Titanium recently introduced an MVC framework called Alloy, which greatly simplifies creating Titanium applications and takes the tedium out of programmatically creating all the user interfaces.  With this framework, you declare your user interface using XML markup, which is pretty straightforward.  You then use controller classes to populate and interact with the UI.  It also has the concept of style sheets, which are very similar to CSS.
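An Alloy view might look something like this (sketched from memory of Alloy’s conventions, so treat the tag and attribute names as assumptions):

```xml
<!-- views/index.xml: the UI is declared in markup instead of code -->
<Alloy>
    <Window class="container">
        <Button id="pressMe" onClick="doClick" title="Press Me!" />
        <Label id="result" />
    </Window>
</Alloy>
```

A matching controller would then populate the label in its doClick handler, and a .tss style sheet would control the look and feel.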

One of the most impressive things about Titanium, though, is its cloud offering.  Titanium gives you access to a complete backend of cloud services, which let you easily create what can best be described as Facebook-like functionality without having to code your own backend.  You can use the cloud services to manage users, authenticate them, and store data about them, such as social graphs, or even just key-value pairs.  I was really impressed by this functionality.


If you want to go deeper with Titanium, there are a couple of books on it:

  • Appcelerator Titanium Application Development by Example Beginner’s Guide (Darren Cope)
  • Appcelerator Titanium: Patterns and Best Practices (Boydlee Pollentine, Trevor Ward)

More cross platform mobile development options?

There are obviously many more options out there, but I picked these three for standard application development because, from my experience, they are the most serious, widely used offerings.

These 3 offerings also encompass just about all the ways to do cross platform mobile development:

  • Shared code, but separate and native UI (Xamarin)
  • HTML5 App running locally (PhoneGap)
  • Fully shared code native app (Titanium)

There are obviously trade-offs to each of these approaches and nothing is quite perfect, but I do consider all of these good solutions at this point.

In general, I prefer the Xamarin approach because I like having control over the native user interface completely and I like being able to develop in C#.

If I were to develop with PhoneGap today, though, I’d most likely use Icenium, which is basically an IDE and a set of build and testing tools built around Cordova (the open source part of PhoneGap) that make it much easier to develop and deploy.

Don’t forget to check out my Pluralsight courses if you want to learn how to get up and running quickly with some of these mobile development frameworks.

Getting Started With Google’s Dart Language

I was a little skeptical of the Dart language when Google first announced it.

When I looked at the syntax of the language I thought that it didn’t really seem to offer anything new.

Why create another language that is not very different from what we already have?

How is this actually much better than JavaScript?

But after having worked with Dart for quite a while now and producing a Pluralsight course on it, I’ve completely changed my mind.

The Dart language is awesome!

What makes the Dart language so awesome is all the little subtleties the language designers added to the language, not any major new concepts or ideas.

When I started writing Dart code it felt exactly right.  It felt like all the little annoyances that I had with languages like C#, Java and JavaScript were removed by the Dart language.

In fact, the real beauty of Dart is that if you already know C# or Java and JavaScript, you’ll probably be able to learn enough about the Dart language to be productive in Dart in less than an hour.

Before I show you just how easy it is to get started, let me briefly tell you what the Dart language is:

  • Object oriented.  Everything is an object, including primitives and null.
  • Optionally typed.  You can add type annotations which static checking tools can use to help you find errors in your code, but you don’t have to use them.
  • Interpreted.  Dart is run in a VM, but it is not compiled first.  Round-trip time for making changes is very short.
  • Compatible with JavaScript.  You can compile your Dart code to JavaScript and run Dart applications in any modern browser.
  • FAST!  Dart is very fast, much faster than JavaScript in just about every test.

Some cool language features that I like about the Dart language:

  • Mixins.  Instead of using inheritance, you can use a mixin to add functionality to a class without directly inheriting from another class.
  • Isolates.  Instead of threads, the Dart language uses isolates for concurrency.  Isolates can’t actually share any memory; they pass information through messages.  It is very hard to shoot yourself in the foot.
  • Simplified built-in types.  Numbers can be either int or double, and you can just use num, if you don’t care.  Lists and maps can be declared as literals.  An array is just a special case of a list.
  • Functions are first-class objects.  You can pass them around just like any other object.  There is even a lambda-like short-hand for creating a one-liner function.
  • Top level functions and variables.  Don’t want to put a function or variable in a class?  Good, you don’t have to.  In the Dart language, you can declare them anywhere you want.
  • Simplified classes.  There is short-hand for declaring constructors that assign parameters to class members.  Class members don’t have protected, private, public.  Member variables are automatically properties.
  • String interpolation.  No more string format methods; just use $variableName in a string to have its value expanded.

Getting setup with the Dart language

Ready to get running in 5 minutes?

Ok, read on.

Step 1: Go to the Dart language website and click “Get started.”


Step 2: Download Dart (64-bit or 32-bit).  Unzip the file and copy the “dart” folder to wherever you want Dart installed.

This folder will contain the Dart Editor, the Dart SDK and the Chromium web browser which has a built-in Dart VM.

Step 3: Run DartEditor.exe


That is it; now you are ready to rock some Dart code!

Creating your first Dart language App

The Dart language can actually be used outside of a browser, just like you can use JavaScript with Node.js.  But, most developers will probably want to use Dart the same way we use JavaScript in a web application today.

I’m going to walk you through a real simple example that will show you how to create a basic Dart application that is able to respond to a button click and manipulate some DOM data.  For more advanced examples, you can check out my recently released Pluralsight course on Creating Web Applications with Dart. (I will plug this one more time before this post is over… wait for it…)

Step 1:

Go to File –> New Application.

Fill in your application name.  I’ll call mine HelloWorldDartWeb.

Leave “Generate sample content” checked.

Select “Web application.”


Step 2:

Open the helloworlddartweb.html file and clear out everything in the body element except for the two script tags at the bottom.

The first script tag imports our actual Dart file, just like the script tag you would use to add JavaScript to a page.

The second script adds Dart support to the browser.

Step 3:

Add the following HTML to the body tag in the helloworlddartweb.html file:

 <button id="theButton">Press Me!</button>
 <div id="resultDiv"></div>

This will just create a button and a div.  We are going to add some Dart code to respond to a button click and populate the div with some text.

Step 4:

Open the helloworlddartweb.dart file and clear out everything in main() and delete the reverseText function.

Notice that there are only two things we really need in our .dart file: an import of ‘dart:html’, which brings in the HTML library for Dart, and the main function, which executes as soon as the DOM content is loaded on the page.

Step 5:

Edit the helloworlddartweb.dart file to make it look like this:

import 'dart:html';

void main() {
  var button = query("#theButton");
  button.onClick.listen(addResult);
}

void addResult(Event e) {
  var resultDiv = query("#resultDiv");
  resultDiv.text = "You clicked the button!";
}

This code simply gets the button using a CSS selector.  It uses the query function to do this.

Then we register the addResult function as an event handler for the onClick event for the button.

In the addResult function, we simply query for the resultDiv and change its text.

After you run this example, you should see a result like this:


Step 6:

Now change the Dart code to look like this:

import 'dart:html';

void main() {
  query("#theButton").onClick.listen(
      (e) => query("#resultDiv").text = "You clicked the button!");
}

Try running the code again and you should see it works exactly as before.  Here we just shortened the code to a single line by using the short-hand function syntax.

Going further with the Dart language

So, that is just the basics of Dart.  I wanted to show you how to get started really quickly, but I am sure there is more you will want to learn about Dart.

We can of course do much more with Dart, especially when building web applications.  There is a Dart Web UI library which can be used to do templating and data binding so we can simplify our Dart code even further.

The language itself is pretty simple.  Most C# and Java developers, as well as JavaScript developers, should be able to read and understand Dart code without any assistance.  But here is a link to an overview of the language.

If you are looking for a more in-depth coverage of the Dart language and want to see how to build a real application with Dart, check out my Introduction To Building Web Applications With Dart course on Pluralsight, where I go over the entire language and guide you through building a real application, as well as cover some of the more advanced features like mixins and isolates.

Also, I could only find two books on the Dart language.

I don’t know if Dart will end up replacing JavaScript, but I do think Dart has the potential.  It really is an awesome language that is fun to develop in.

That is strong praise coming from me, since I really tend to dislike dynamic languages.  The creators of Dart have really done a good job of creating a language that is succinct, fast, easy to work with and has the best advantages of strongly typed languages with all the flexibility of dynamic languages like JavaScript.

Get Up and CODE and YouTube Videos

For those of you who frequent my blog and are looking for my latest Get Up and CODE podcast episode and YouTube video for the week, I have a bit of an announcement.

I am going to start posting these blog posts every Monday.

The YouTube videos will be going up every Wednesday.

The Get Up and CODE podcast will be coming on every Friday.

When my new website design is done, you’ll be able to find the latest episodes of each on the side bar, so I’ll stop including them in each weekly post.

But here is last week’s Get Up and CODE, where Iris and I talk about basic weight training.

Get Up And Code 006: Basic Weight Training


Getting Up to BAT: Picking a Browser Automation Tool

Now that you’ve gotten an “Automation Lead” for your BAT (Black-box Automated Testing) effort, it’s time to make a very important decision.

It is time to pick the browser automation tool you are going to use for driving your automation efforts.

Before we can build an automation framework, we need a basic set of tools to build it on.  We want our automation framework to be tailored to our application under test specifically.  Our automation framework will act like a Domain Specific Language (DSL) for testing our application, but we need to build that automation framework on top of something.

What is a browser automation tool?

It is essentially what we will use to drive the web browser.


We need to pick a tool or framework that will easily allow us to interact with the web browser so we don’t have to write that code ourselves.

We will need to be able to click buttons, enter text, select drop downs, and check values on web pages.

Some of the things we will want to consider when choosing a browser automation tool include:

  • Language we can use with the driver
  • Browsers it supports
  • How we interact with the browser (do we use XPath, jQuery, or some other mechanism)
  • If we can access and manipulate everything we need to be able to in the browser
  • Speed of execution
  • Ability to execute in parallel
  • Support and future development

Let’s take a quick look at each of these considerations.

What language can we use?

This is an important consideration because you will want to be able to utilize your other programming resources to help with building an automation framework and to eventually create automation tests.

If you pick a browser driver that supports a language your regular programmers don’t like, or don’t know, you may not end up with their support, and you are definitely going to want their support.

What browsers does it support?

This is a tricky topic to discuss.  It might seem that a browser automation tool would need to support all of the browsers your application supports, but that is not really true.

You really have to think about what your goal is with your BATs.  I could certainly devote a whole blog post to this topic, but I will try and summarize my main point here.

BATs should be designed to test functionality of the application, not to test compatibility.  (At least most of your BATs should be, although you might have some that are specifically for compatibility.)  With that goal in mind, a browser driver supporting each and every browser your application supports is not really necessary.

You do need to consider that the browser automation tool should support at least one of your supported browsers, preferably the main one.

How do we interact with the browser?

There are several different APIs which are used to communicate with the web browser that are employed by various different browser drivers.

Some tools have APIs that are very low level and let you easily manipulate objects in the web browser, while others are at a much higher level and rely more on your understanding of the browser’s DOM.

For example, one API might expose elements on a web page by their actual element name and let you interact with them directly.
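To make the contrast concrete, here is roughly what the two styles look like, mocked up in JavaScript (both APIs are invented for illustration, and the fake page object exists only to make the sketch runnable):

```javascript
// A fake page that records clicks, standing in for a real browser.
var page = { clicks: [] };

// Style 1: a "lower level" API in this article's sense -- the driver is
// aware of element types like buttons and exposes them directly.
var elementAwareDriver = {
  button: function (id) {
    return { click: function () { page.clicks.push(id); } };
  }
};

// Style 2: a generic API -- you address raw DOM nodes yourself and the
// driver just fires events at whatever the selector matches.
var genericDriver = {
  fireEvent: function (eventName, selector) {
    page.clicks.push(selector.replace("#", ""));
  }
};

elementAwareDriver.button("submit").click();   // driver knows buttons
genericDriver.fireEvent("click", "#submit");   // you work at the DOM level
```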



In the first example, the API is aware of element types like buttons.  In the second example, the API is more generic and will send a “click” to an element with the ID value you specify.

You’ll have to decide how important the ease of use is to your project at this level.  There are some tradeoffs to consider here.

The lower level the API is, the easier it will be to use, but the more dependent you will be on that API and the more language specific that API will be.

The higher level the API is, the harder it will be to use, but it will free you from depending so much on the API, because you will be interacting more directly with the browser’s DOM.

I prefer lower level APIs because I find that it is much easier to write the framework code on top of these.

You will also want to consider the skill levels of the programmers who will be creating the framework.  If your API is lower level, native language skills are more important.  If it is higher level, DOM and HTML, and perhaps JavaScript skills, are more important.

Can we access what we need?

If your application makes use of JavaScript pop-up dialogs and your browser automation tool doesn’t give you a way to interact with them, you don’t want to find this out after you have already invested significantly in your automation framework.

Different automation frameworks support different browser features; not all of them will allow you to interact with every browser feature that exists.  As new features are introduced, you could also get stuck with a tool that is no longer being maintained and doesn’t add support for them.

You will want to consider things like how much your application uses AJAX or jQuery.  If your application relies heavily on AJAX calls, select a browser automation tool that makes it easy to interact with them.

Speed of execution

Not all browser automation tools are the same in execution speed either.

This may not be important to you, depending on how you address the issue of concurrency, but you should at least consider that if you have a large number of automated tests, a small difference in execution speed can result in a large difference in total time to run the tests.

The faster the automated tests can be run, the more valuable they are.  I won’t get into the details here, but there are many reasons why this is true.

Your execution speed will also be influenced by whether your automation tool requires you to put pauses into the tests to wait for the browser, instead of responding to events that occur in the browser.

Ability to execute in parallel

This consideration will depend greatly on the volume of tests you have and the speed at which they execute.

At some time (sooner than you think) you will likely get to the point where you can no longer run all of your automated tests in 24 hours.  Perhaps even before this point, you will want to consider running your BATs in parallel to reduce the total time to execute the tests.

Some automation tools have built-in support and others will require you to build your own way to do this.  It is good to at least have an idea of how you are going to achieve parallel execution when considering what browser automation tool to use.
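One simple way to achieve parallel execution, sketched here as an idea rather than any tool’s built-in feature, is to deal your tests round-robin into one shard per agent:

```javascript
// Split a list of tests into shardCount buckets, one per parallel agent.
function shardTests(tests, shardCount) {
  var shards = [];
  for (var i = 0; i < shardCount; i++) {
    shards.push([]);
  }
  tests.forEach(function (test, index) {
    shards[index % shardCount].push(test);
  });
  return shards;
}

var shards = shardTests(["login", "search", "checkout", "profile", "logout"], 2);
```

Each agent runs one shard, so total wall-clock time drops roughly in proportion to the number of agents, as long as the shards stay balanced.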

Support and future development

Browser development is moving at a pretty rapid pace.  Many things in the web browser world are changing much more quickly than ever before.  Tools that automate the browser must also change, or you could end up in a very bad place.

If your application is going to take advantage of the latest browsers and browser features, you should make sure the automation tool you choose is in active development.

There is nothing worse than investing in an open-source project and then having that project die, forcing you to rip it out and replace it with another library.

You can always design your automation framework to be abstracted away as much as possible from the underlying browser driver, but that will be extra work, so consider this point carefully.

Name names sir!

Nope, I’m not going to do it.

I’m not going to tell you to use WatiN or Watij or Selenium or WebAii or any other browser driver.  I don’t want to put the focus on the tool, since the real focus will be building the automation framework on top of whatever tool you use to drive the browser.

I would suggest that you try writing a few simple tests in each of the major browser drivers so that you can get a good feel of what the API is like and how it will work with your application.

I will say that I have used most of the major choices out there and there really is no clear winner in my mind.  It really is going to depend on what your environment is like and what kind of application you are testing.

I would also recommend picking a browser driver and sticking to it.  I’ve tried in the past to abstract away the browser driver from the automation framework and while it is possible, it can become quite messy and add quite a bit of overhead to your project.

My only other hints would be to not put too much emphasis on supporting multiple browsers or on using recording tools.  Neither of these will benefit you much in the long run: you will find that you don’t want to run all of your BATs on every browser, and recording tools will not be nearly as effective as writing your own custom framework (which I will talk about in my next post).

As always, you can subscribe to this RSS feed to follow my posts on Making the Complex Simple.  Feel free to check out my blog, where I post about writing elegant code about once a week.  Also, you can follow me on Twitter.

Living Dangerously: Refactoring without a Safety Net

It’s usually a good idea to have unit tests in place before refactoring some code.

I’m going to go against the grain here today though and tell you that it is not always required.

Many times code that should be refactored doesn’t get refactored due to the myth that you must always have unit tests in place before refactoring.

In many cases the same code stays unimproved over many revisions because the effort of creating the unit tests needed to refactor it is too high.

I think this is a shame because it is not always necessary to have unit tests in place before refactoring.


Forgoing the safety net

If you go to the circus, you will notice that some acts always have a safety net below because the stunt is so dangerous that there is always a chance of failure.

You’ll also notice that some acts don’t have a safety net, because even though there is still some danger, the risk is extremely small thanks to the training of the performers.

Today I’m going to talk about some of the instances where you don’t necessarily need to have a safety net in place before doing the refactor.

Automatic refactoring

This is an easy one that should be fairly obvious.  If you use a modern IDE like Visual Studio, Eclipse, or IntelliJ, you will no doubt have seen what I call “right-click refactor” options.

Any of these automatic refactors are pretty much safe to do anytime without any worry of changing functionality.  These kinds of automated refactors simply apply an algorithm to the code to produce the desired result and in almost all cases do not change functionality.

These refactoring tools you can trust because there is not a chance for human error.

Any time you have the option of using an automatic refactoring, do it!  It just makes sense, even if you have unit tests.  I am always surprised when I pair up with someone and they manually perform refactorings like “extract method” or “rename.”

Most of the time everything you want to do to some code can be found in one of the automatic refactoring menus.
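For instance, here is what an automated “extract method” does mechanically (illustrative JavaScript, not tied to any particular IDE):

```javascript
// Before: the summing loop is buried inside the reporting function.
function reportBefore(items) {
  var total = 0;
  for (var i = 0; i < items.length; i++) {
    total += items[i];
  }
  return "Total: " + total;
}

// After: the selected statements become their own function, and the
// original call site is rewritten to use it.  Behavior is identical.
function sumOf(items) {
  var total = 0;
  for (var i = 0; i < items.length; i++) {
    total += items[i];
  }
  return total;
}

function reportAfter(items) {
  return "Total: " + sumOf(items);
}
```

Because the transformation is purely mechanical, the tool can apply it the same way every time, which is exactly why these refactors need no safety net.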

Small step refactors

While not as safe as automatic refactors, if you have a refactor that is a very small step, there is a much higher chance your brain can understand it and prevent any side effects.

A good example of this would be my post on refactoring the removal of conditions.

The general idea is that if you can make very simple small steps that are so trivial that there is almost no chance of mistake, then you can end up making a big refactor as the net effect of those little changes.

This one is a judgment call.  It is up to you to decide if what you are doing is a small step or not.

I do find that if I want to do a refactor that isn’t a small step refactor, I can usually break it down into a series of small steps that I can feel pretty confident in.  (Most of the time these will be automated refactors anyway.)

Turning methods into classes

I hate huge classes.  Many times everyone is afraid to take stuff out of a huge class because it is likely to break and it would take years to write unit tests for that class.

One simple step, which greatly improves the architecture and lets you eventually create unit tests, is to take a big ol’ chunk of that class, move it to a new class, and keep all the logic in there exactly how it is.

It’s not always totally clean; you might have to pass in some dependencies to the new method or the new class’s constructor.  But if you can do it, it is an easy and safe refactor that will allow you to write unit tests for the new class.

Obviously this one is slightly more dangerous than the other two I have mentioned before, but it also is one that has a huge “bang for your buck.”
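Here is a minimal sketch of that move in Python; the class and method names are invented for illustration.  The key point is that the logic moves verbatim, and anything it needs is passed in through the constructor:

```python
# Before: a huge class doing too much (trimmed to two responsibilities).
class OrderProcessorBefore:
    def process(self, items):
        return sum(price * qty for price, qty in items)

    def format_invoice(self, total):
        return f"INVOICE\nAmount due: {total:.2f}"

# After: the invoice logic moves, unchanged, into its own class,
# which can now be unit tested in isolation.
class InvoiceFormatter:
    def __init__(self, header="INVOICE"):
        self.header = header  # dependency passed in; logic untouched

    def format_invoice(self, total):
        return f"{self.header}\nAmount due: {total:.2f}"

class OrderProcessorAfter:
    def __init__(self):
        self.formatter = InvoiceFormatter()

    def process(self, items):
        return sum(price * qty for price, qty in items)

    def format_invoice(self, total):
        return self.formatter.format_invoice(total)
```

Because the logic is copied exactly, the old and new classes behave identically, and the new class is now small enough to test.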

Unit tests, or test code itself

Another obvious one.  Unless you are going to write meta-unit tests, you are going to have to live a little dangerously on this one.  You really have no choice.

I think everyone will agree that refactoring unit tests is important though.   So, how come no one is afraid to refactor unit tests?

I only include this example to make the point that you shouldn’t be so scared to refactor code without unit tests.  You probably do it pretty frequently with your unit tests.

I’m not advocating recklessness here

I know some of you are freaking out right now.

Be assured, my message is not to haphazardly refactor code without unit tests.  My message is simply to use temperance when considering a refactor.

Don’t forgo a refactor just because you are following a hard and fast rule that you need unit tests first.

Instead, I am suggesting that some refactorings are so trivial and safe that if it comes between the choice of leaving the code as it is because unit testing will take too long, or to refactor code without a safety net, don’t be a… umm… pu… wimp.  Use your brain!

Things that will bite you hard

There are a few things to watch out for; even automatic refactors can fail and cause all kinds of problems for you.

Most of these issues won’t exist in your code base unless you are doing some crazy funky stuff.

  • If you’re using dynamic in C#, P/Invoke, unsafe code (pointer manipulation), or COM interop, all bets are off on things like rename.
  • Reflection.  Watch out for this one.  This can really kick you in the gonads.  If you are using reflection, changing a method name or a type could cause a failure that is only going to be seen at runtime.
  • Code generation.  Watch out for this one also.  If generated code is depending on a particular implementation of some functionality in your system, refactoring tools won’t have any idea.
  • External published interfaces.  This goes without saying, but it is so important that I will mention it here.  Watch out for other people using your published APIs.  Whether you have unit tests or not, refactoring published APIs can cause you a whole bunch of nightmares.
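The reflection bullet above is easy to demonstrate.  In this Python sketch (the hazard is exactly the same with C# reflection), the call site refers to the method by a string that a rename tool cannot see:

```python
class Exporter:
    def export_csv(self, rows):
        return "\n".join(",".join(map(str, r)) for r in rows)

def run_export(obj, format_name, rows):
    # The method is looked up by name at runtime, so a "rename"
    # refactoring never touches this string.
    method = getattr(obj, "export_" + format_name)
    return method(rows)
```

Rename export_csv to export_as_csv with a refactoring tool and everything still loads cleanly; the failure only shows up as an AttributeError at runtime.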

This list isn’t to scare you off from refactoring, but if you know any of the things in this list are in your code base, check before you do the refactor.  Make sure that the code you are refactoring won’t be affected by these kinds of things.

When to Build the Sawhorse

I love talking about tools and automation.  I’ve written about having a dedicated developer tools team and about what you should automate.  This time I want to talk about choosing between what I call vertical difficulty and horizontal difficulty when solving a problem.

Horizontal difficulty

Horizontal difficulty is the difficulty of just doing the work with the structure and tooling as they exist at that moment.

Consider the problem of moving a washer and dryer.  If you have no tools and you just have to lift it, there is some horizontal difficulty involved.

In programming terms horizontal difficulty might look like writing a complicated SQL statement with multiple conditional joins because the data is all over the place.  Or writing a web page without using a framework because your application doesn’t have one.

Vertical difficulty

This is the difficulty mainly associated with building tools or frameworks.  It is the kind of difficulty involved in simplifying a problem by going a layer up to “meta” solve it.

If you are familiar with calculus, it is a good example of what I would call vertical difficulty.  Many mathematical problems are solved by moving up a level of abstraction and working the problem there.

To keep with the same example of moving a washer and dryer, the vertical difficulty would be building a cart or dolly to move the washer and dryer.  An important point here, which I will make again, is that in many cases the amount of raw effort required to build a dolly or cart, or even to figure out a way to procure one, will be equivalent to the effort required to move the washer and dryer.

In terms of code, vertical difficulty might be creating an error handling framework, creating a custom control for your web page, using views to simplify SQL data access, or even repartitioning and moving data to make a better model.

Where horizontal difficulty represents brute force, vertical difficulty represents mental fatigue.

What about that sawhorse?


If you are familiar with woodworking or construction, you will no doubt have seen a sawhorse.  A sawhorse is a platform used to hold a piece of wood so you can cut it.

Sawhorses are usually constructed on the jobsite before any other work begins.

Well, have you ever tried to hold a piece of wood and cut it straight?  How about searching for different objects in your garage to prop the wood on, just to get it high enough off the ground to put a saw through it?

Experienced craftsmen build the sawhorse first.  They don’t start cutting pieces of wood and then build the sawhorse.  An experienced craftsman knows that building the sawhorse first saves time on every cut.  His cuts will be more accurate, and he might just be able to bring that sawhorse to his next job.

Every time you sit down to solve a programming problem, you should think about whether or not you should be building a sawhorse first.

Are you saying always build the sawhorse?

No, not at all.  If you are going to cut one piece of wood, do not build a sawhorse.  If you are going to cut two pieces of wood, don’t do it either.  I won’t tell you how many pieces of wood it will take to pay off, but I will tell you 3 things:

  1. It doesn’t take many cuts for a sawhorse to pay back the time it takes to build it.
  2. The more sawhorses you build, the faster you get at building them.
  3. You are always wrong about how many cuts you are going to make.  When you estimate 3 it might end up being 20.

Vertical vs horizontal difficulty

It is very important to weigh the pros and cons of each before deciding which way to go.  I am, of course, going to try to lean you towards choosing vertical difficulty over horizontal most of the time, but ultimately it is up to you.

Let’s look quickly at some pros and cons for each (very generalized).

Horizontal difficulty pros

  • Can follow a well-trodden path.  Usually there is an example of how to solve the problem already.  (Someone has done it before.)
  • Less thinking; you just follow the approach and go, and after some number of hours you will be done.  (Consider copying and pasting each cell of an HTML table to a spreadsheet vs. writing a program to parse it.)
  • Less risk, you are very likely to get to your destination with minimal problems.

Horizontal difficulty cons

  • Boring.  This is not really going to challenge that programmer blog reading brain of yours.
  • You or someone else will probably be doing the same thing again.  Solving the problem once only helps beat down the weeds on the trail; it doesn’t make the trail shorter.
  • You might be building on top of a bad foundation.  By adding one-offs as individual solutions to the problem, the general case can become more hidden.  (If you want to solve the problem better later on, you make it harder each time you solve it the horizontal way.)

Vertical difficulty pros

  • Simplified working space.  Once you solve a problem a vertical way, you end up building an abstraction that makes the problem seem easier at the lower level.  (Think about connectors on your motherboard vs individually connecting each wire.)
  • Reuse.  Many times when you solve a problem the vertical way, you can reuse that solution to solve future problems in almost no time at all.  (Build connector couplings for wires and next time you can just snap them together.)
  • Bigger picture understanding of the system.  When you take the time to go up a level and solve a problem, you can see the bigger picture better and can understand the system as a whole better.  This will lead to better solutions and fewer mistakes later.
  • You are developing a skill that is multi-purpose and can be applied more widely than a very specific skill which might be developed in a horizontal solution.  (Thinking about working at McDonald’s vs running several McDonald’s.)
  • Clean.  Usually you will end up with less code.  Less code means fewer bugs.  Changes happen in one place instead of 50.

Vertical difficulty cons

  • It can be hard mentally.  It can require a higher level of skill.  Not everyone who can solve the problem horizontally can solve it vertically.
  • Higher risk.  If you mess up along the path of the horizontal solution, you can probably go back a few steps and fix it.  If you mess up along the vertical solution, you might have to scrap it and start over.  (Building a house vs building a microchip.)

Okay, that’s it.

Wait, what?  Did you say I forgot the biggest con of Vertical difficulty?

No, I didn’t.  I left it out on purpose.

Vertical difficulty does not always mean it takes more time.  Sometimes it is actually faster to do the vertical difficulty path even when “cutting one piece of wood.”

I have seen Perl programmers and gurus parse through text or whip up a meta-solution faster than I could have solved the problem manually even once.  And they have a script around to do it again.

I have seen VI wizards edit the heck out of a text file much faster than I could point and click to do the same thing.

Scripting languages and editors like VI are designed for solving vertical problems.  When you are using VI and issuing commands to edit text, you are solving a vertical problem.  You are operating at a high level to edit a text file.

Many times you will find that the vertical solution is not only faster the first time you implement it, but it also makes the solution almost instant the next time around.

Why The IDE Has Failed Us

There is so much talk lately about using VI instead of Visual Studio, or VI in general instead of whatever IDE you normally use.

If you’ve never had the fortune of being introduced to VI, it is basically a bare bones text editor designed to be used without a mouse, focused more on manipulating text than creating it.  VI is on just about every platform you can think of and will be for the foreseeable future.

It is not my point today to bash VI.  VI is a great text editing tool that can make you a wiz at slinging lines and words around your files.  VI is the evolution of text editing because when you are using VI you are actually programming your text.


The problem with IDEs

Basically there are two problems with IDEs.  One I think is a valid complaint; the other appeals to engineers’ wanton desire to be simple and pure and to take things apart.

The Bloat

The first problem with modern IDEs is the bloat.  The IDEs are big beasts that take up lots of memory, are fairly slow to load, and tend to have so many bells and whistles that they feel messy and clunky.

As I write this, part of me thinks “what’s the big deal?”  I’ve got a pretty fast machine; it can run VS2010 pretty well.  But there is something that doesn’t sit right with me, and I am sure with other developers.  The IDE should feel clean, but it doesn’t.

I’m not sure I can completely identify why IDEs have suddenly gone sour in our mouths.  Perhaps part of the bloat problem is that the IDE has become a Swiss Army knife instead of a focused tool.

Strangely enough, I think part of the problem might be tools like ReSharper that are helping us a little too much.  The tool is so good that sometimes you wonder what life would be like without all those little hints and without the IDE doing so many things for you.  Perhaps sometimes you feel like you are “cheating.”

The Imagined

Then there are the imagined problems with IDEs.  The ones that don’t really have any justification, but some of the “cool kids” like to talk about on their blogs.

If I can summarize what I gleaned from the argument here, I would say it basically is… IDEs that give me auto-complete, intellisense, and visual designers rot my brain.  To really program I should be able to write code without the help of an IDE.

I couldn’t agree more with that statement.

As a matter of fact, for that reason I don’t use electric toothbrushes, because it is not really brushing my teeth.

I also abhor graphing calculators; it’s not really calculus unless you are cranking it out by hand.

Email? Psha, everyone knows the pure way to communicate is by registered mail typed from a typewriter.

Oh, and don’t get me started on those GPS things.  You are not really navigating if you aren’t using a map and a compass, seriously.

Sorry for all the sarcasm, but I hope you get my point.

What is the solution then?

Is it to abandon the IDE and jump over to VI and notepad to edit our files so we don’t “rot our brain?”

I know that is the popular stance among the best and brightest right now, but sometimes the best and brightest are wrong.  Sometimes they are so bestest and so brightest that they can navigate with a map and a compass better than you or I can with a GPS.

I think the solution is to bring more VI-ness to IDEs.  The good thing about jumping to VI is that you can sling text around like nobody’s business.  The bad thing about jumping to VI is that you are forgoing some of the most useful productivity tools in dealing with APIs and libraries.

Why can’t we take the good VI-ness and put it into Visual Studio?  It looks like someone already has (ViEmu).

The other part of the problem is the bloat.  Honestly, I think Eclipse deals with this fairly well by making everything modular.  Unfortunately, some of the modules look plain ugly and don’t integrate well into Eclipse, but with Visual Studio you have to pull out a Swiss Army knife with 50 gadgets on it when you are just trying to eat some beans with a fork.

The answer is modularization and perhaps some competition for Visual Studio and some of the other IDEs that are a bit bloaty.  Perhaps we need an IDE that is built up from a VI or Emacs heritage?

I know for sure the solution is not to throw the baby out with the bathwater.  IDEs have made some truly amazing advancements that help bring the level of abstraction of software development to a much higher level.

Features like IntelliSense have made it easier than ever to hit the ground running with a new API.

Automatic refactoring tools built into IDEs, and add-ons like ReSharper, have made refactoring code much easier and much more accessible.

Ctrl+Click go-to-definition and backward/forward navigation to jump between parts of the code greatly increase productivity.

I don’t need to go into all of the features of modern IDEs to make the point that there is value there and a large amount of it.

So before you abandon the IDE, consider strongly why exactly the IDE has failed us, and consider whether jumping to VI is really the best solution to the problem.  If you don’t know what problem you are trying to solve by jumping to VI, you might just be following the “cool kids” and drinking their “cool aid.”

Simple Branching Strategy Part 2: Implementation

In my previous post, I talked about the idea of having a simple branching strategy and why I prefer one where everyone works off the same branch.

In this post I will show you how to create what I believe is the most simple and effective branching strategy.

Take a look at this diagram of a sample project’s code lines:


Walking through it

The idea here is very simple.  Let’s walk through a development cycle together:

  1. Development starts.  Everyone works off of trunk.  Code is frequently checked into trunk, many developers checking in code 3-4 times a day, as they complete small quality sections of development.
  2. The continuous build server is continuously building and checking the quality of the code every single time code is checked in.  Any integration problems are immediately fixed.
  3. Enough features are done to create a release.  Trunk is tagged for release and a Release 1 branch is created, representing the currently released production code.
  4. Developers continue to work on trunk not being interrupted by the release.
  5. A customer finds a high priority issue in Release 1.
  6. A Rel 1 Hot Fix branch is created, branched off of Release 1 to fix the high priority issue.  It turns out that a good fix will take some time.  Team decides the best course of action is to apply a temporary fix for now.
  7. Rel 1 Hot Fix is done and merged back into Release 1 branch.  Release 1 is re-deployed to production.
  8. In the meantime another emergency problem shows up that must be fixed before the next release.  Rel 1 Hot Fix 2 branch is created.
  9. The bug fix for Rel 1 Hot Fix 2 is a good fix which we want in all future releases.  Rel 1 Hot Fix 2 branch is merged back to Release 1 branch, and merged back to trunk.  Release 1 is redeployed.
  10. In the meantime work has continued on trunk, and the team is ready for Release 2.
  11. Release 2 branch is created…
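Assuming Git as the version control system (the diagram itself is tool-agnostic), the walkthrough above can be sketched end to end in a throwaway repository.  Branch names, file names, and commit messages here are all made up; the script is driven from Python only so the whole cycle runs as one piece:

```python
import subprocess
import tempfile
from pathlib import Path

def git(repo, *args):
    # Run a git command inside the throwaway repository.
    subprocess.run(["git", *args], cwd=repo, check=True,
                   capture_output=True, text=True)

repo = tempfile.mkdtemp()
app = Path(repo, "app.txt")

git(repo, "init")
git(repo, "config", "user.email", "dev@example.com")
git(repo, "config", "user.name", "Dev")
git(repo, "checkout", "-b", "trunk")          # step 1: everyone works on trunk
app.write_text("feature work\n")
git(repo, "add", ".")
git(repo, "commit", "-m", "small quality commit")

git(repo, "branch", "release-1")              # step 3: release branch off trunk
git(repo, "checkout", "-b", "rel-1-hotfix", "release-1")   # step 6
app.write_text(app.read_text() + "emergency fix\n")
git(repo, "commit", "-am", "hot fix")

git(repo, "checkout", "release-1")            # step 7: fix goes into Release 1
git(repo, "merge", "rel-1-hotfix")
git(repo, "checkout", "trunk")                # step 9: permanent fix
git(repo, "merge", "rel-1-hotfix")            # the only kind of merge in the process

branches = subprocess.run(
    ["git", "branch", "--format=%(refname:short)"],
    cwd=repo, check=True, capture_output=True, text=True).stdout.split()
```

Notice that the only merges are the small hot-fix ones; the release branch itself never merges back to trunk.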

Breaking it down

I gave a pretty detailed walk-through for a very simple set of actual steps.  But, I hope you can see how simple this process really is.

The basic idea here is that we are trying to decouple releases from development as much as possible.  The team is always going to keep chugging along, building new features and enhancing the code base.  When we decide we have enough features for a release, we simply branch off of trunk and create the release branch.

We can even do some testing on the release branch before we go to production if we need to without impacting future development.

The release branch code-lines never come back to trunk.  They don’t need to; they only exist so that we can have the exact production code and make modifications to it as hot-fixes if we need to.

We branch hot-fixes off of the release branch so that we can work on them independently, because not all hot-fixes go back to the main code-line.  We can make a hot-fix just for the current release, or we can merge it back to trunk to make it a permanent fix.

That is all there is to it.  This kind of branching strategy almost completely eliminates merges.  The only merges you ever do are small ones for hot-fixes.

Your branching strategy does not have to be complicated.  A simple strategy like this can fit almost any software development shop.

Frequently disputed points

Almost immediately when I introduce this simple system someone says:

What about half-completed features?  I don’t want to release half-completed features.  Using this strategy with everyone working off trunk, you will always have half-completed features.

So what?  How many times does a half-completed feature cause an actual problem in the system?  If the code is high quality and incrementally developed, it should not impact the rest of the system.  If you are adding a new feature, usually the last thing you do is actually hook up the UI to it.  It won’t hurt anything to have its back-end code released without any way to get to it.

Continuous integration (especially running automated functional tests) trains you to always keep the system releasable with every commit of new code.  It really isn’t hard to do this; you just have to think about it a little bit.

If worse comes to worst and you have a half-finished feature that makes the code unreleasable, you can always pull that code out on the release branch.  (Although I would highly recommend that you try to find a way to build the feature incrementally instead.)

If you know you’re going to do something that will disrupt everything, like redesigning the UI, or drastically changing the architecture, then go ahead and create a separate branch for that work.  That should be a rare event though.

I need to be able to develop the features in isolation.  If everyone is working off of trunk, I can’t tell if what I did broke something or if it is someone else’s code.  I am impacted by someone else breaking things.

Good, that is some pain you should feel.  It hurts a lot less when you’re continuously integrating vs. working on something for a week, merging your feature and finding that everything is broken.

It is like eating a meal.  All the food is going to end up in the same place anyway.  Don’t worry about mixing your mashed potatoes with your applesauce.

If something someone else is doing is going to break your stuff, better to fail fast than to fail later.  Let’s integrate as soon as possible and fix the issue rather than waiting until we both think we are done.

Besides that, it is good to learn to always check in clean code.  When you break other people and they shoot you with Nerf guns and make you wear a chicken head, you are taught to test your code locally before you check it in.

How to be successful

How can you be successful at this simple strategy?

  • Make sure you have a continuous integration server up and running and doing everything it should be doing.
  • When you work on code, find ways to break it up into small incremental steps of development which never break the system.  Hook up the UI last.
  • Always think that every time you check in code, it should be code you are comfortable to release.
  • Check in code at least once a day, preferably as soon as you make any incremental progress.
  • Test, test, test.  Test locally, unit test, test driven development, automated functional tests.  Have ways to be confident the system never moves backward in functionality.
  • So important I’ll say it twice.  Automated functional tests.  If you don’t know how to do this, read this.
  • Release frequently instead of hot-fixing.  If you never hot-fix you will never have to merge.  If you never have to merge, you will live a longer, less-stressed life.
  • Don’t go back and clean up code later.  Write it right the first time.  Check it in right the first time.

Hopefully that helps you to simplify your branching process.  Feel free to email me or post here if you have any questions, or are skeptical that this could work.

PowerShell is Pretty Cool

I’m pretty behind on the PowerShell thing.  I have to admit, I never really was that interested in using it.  But now that it is included in Windows 7, it feels like a much more worthwhile investment, since those skills are likely to be usable on any machine you’re on.

Dev machines woes


I’ve been having lots of fun trying to build and setup my new dev machine for my new job.  I ended up working over the weekend on it, but it has been a pretty good learning experience.

I have learned many things from the experience, including:

  • Power supplies can make beeping noises.
  • Just because your computer beeps doesn’t mean it’s not working, check to see if there is video.
  • You must pull the processor securing lever all the way up before panicking, calling everyone you know and screaming, “MY PROCESSOR IS STUCK IN THE MOTHERBOARD, OH GOD HELP ME NOW!”
  • Installing Windows in SATA mode, then switching to AHCI mode, will probably require a reinstall.
  • Installing Windows on a hard drive connected to one motherboard and processor, then switching to another motherboard and processor, will probably require a reinstall.
  • IIS is not installed by default.
    • When IIS is installed, ASP.NET support is not installed by default in the IIS install.  (I always install IIS to serve up static content in 2010… yeah)
  • Drives RAIDed together need to have their partitions recreated before a Windows 7 install will recognize the drive at all.
  • Four-monitor stand clamps don’t work too well on glass desktops for supporting four 24” monitors.
  • You can never have too many monitors.  It’s just not possible.

Making lemonade

Out of all the bad things that seemed to go wrong, I did learn a large amount of stuff, so overall I think it was worth it.

One of the really cool things I started learning is PowerShell.  After setting up my dev environment for about the 4th time, I decided to build something to benefit the rest of the team and new developers.

I started writing a PowerShell script that sets up all the little tricky things that need to be done to get our development environment ready.  I am eating my own dog food from my previous post.

One value I hold pretty highly is that if I have to do something manually more than 3 times, I need to figure out a way to automate it.  PowerShell makes many automation tasks very possible.

I’ll include some pictures of my dev cave on my next post!

Zero Configuration Development Environments

I have been working on getting set up this week to develop for my new, awesome employer, TrackAbout.

In doing so, I have once again felt the pain of getting a development environment configured.  I forgot how painful it can be.  This is in no way a reflection on TrackAbout; the truth is most development environments are a pain to get set up.  Unless you’re actively trying to build a painless development environment, it is probably going to be the opposite.

I’ve seen a large number of development environments and I’ve built my share of them.  From all this, I have a pretty good idea of what I consider ideal, what we should strive for.


The basic outline

  1. Install non-scriptable tools or start with a fresh image.  (Basically, getting the IDE and SQL Server installed locally.)
  2. Get branch from version control.
  3. Build the database
  4. Build the code
  5. Local deploy

The idea here is that I should be able to either get an image that has my base tools installed, or install them myself, then pull down one source control location and everything else that happens from there is the result of build scripts or some other automated process.
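A sketch of what that automated process might look like as a single bootstrap script.  Every command, path, and file name below is hypothetical (and Python is used only for illustration); the point is that one entry point drives steps 2 through 5 in order:

```python
import subprocess

def get_source():      # step 2: one checkout gets everything
    return ["git", "clone", "https://example.com/repo.git"]

def build_database():  # step 3: database built entirely from scripts
    return ["sqlcmd", "-i", "db/create_all.sql"]

def build_code():      # step 4: the same build the CI server runs
    return ["msbuild", "Build.proj"]

def local_deploy():    # step 5: one command to deploy locally
    return ["powershell", "deploy/local.ps1"]

STEPS = [get_source, build_database, build_code, local_deploy]

def bootstrap(dry_run=True):
    # Collect the commands; only execute them when not in dry-run mode.
    commands = [step() for step in STEPS]
    if not dry_run:
        for cmd in commands:
            subprocess.run(cmd, check=True)
    return [cmd[0] for cmd in commands]
```

The dry-run mode makes the script itself easy to test, which matters once the whole team starts relying on it.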

I know, it is easier said than done.  Let’s break it down step by step and look at some of the possible solutions.

Install tools

If you are in an organization where everyone will have the same hardware, it is much easier to create an image of a developer machine with, say, Visual Studio and SQL Server installed.

Another possible solution is to create a dev VM that is maintained and updated regularly, so that it has all the required tools and you have a uniform structure.  I have tried this approach, and I find that the biggest problem is that many times you want to run natively to get the performance improvements.  As hardware capabilities increase, though, I am seeing this as a more viable route.

Finally, if you can’t get either of those situations, it is ideal to put all the tools that must be installed on a network share or some other easily accessible place.

Ideally, you want to keep the number of required tools down to an absolute minimum.  In most .NET environments this should be Visual Studio and SQL Server.  Most other tools can usually be handled via DLLs.

Get branch from version control

Ideally, you should be able to point a person to one source control location, and that should get everything necessary for them to build and deploy the entire system locally.

If different applications your organization is developing have different branches, then you might need to check out one location per project, but even that can be automated to some degree with a “get latest” script or symbolic links.

Build the database

This one is kind of hard.  It requires quite a bit of forethought on how to get this working.  The idea here is that I should be able to build the entire database from a set of scripts.

The challenge is putting together a process that can construct the database from scratch, populate the tables the application requires, and also apply patches to existing databases.  I won’t go into how to do that here.

Build the code

There is quite a bit lumped in here.  From a developer perspective, I should just be able to run one build command, the same build that runs on the continuous integration server, and everything I need should get built for me.

From behind the scenes, this is a difficult step.

  • You have to make sure everything works from relative paths or environment variables.
  • You have to have your scripts check to see if things are installed and install them if not (registry keys, etc).
  • You have to have all the libraries in a place that the build can find on the client machine.

The key to success here is to eliminate as many configuration differences as possible, and to locate the ones that remain in a single place.
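For the first bullet, a small sketch of the idea: resolve every location relative to the repository, with environment variables as the single escape hatch.  The variable and directory names here are invented for illustration:

```python
import os
from pathlib import Path

# The repo root would normally be derived from the build script's own
# location; default to the current directory here, overridable by env.
REPO_ROOT = Path(os.environ.get("REPO_ROOT", ".")).resolve()

# Every other path hangs off the root, with an environment-variable
# override so one machine's difference lives in one place.
TOOLS_DIR = Path(os.environ.get("BUILD_TOOLS_DIR", REPO_ROOT / "tools"))
LIBS_DIR = Path(os.environ.get("BUILD_LIBS_DIR", REPO_ROOT / "lib"))
```

Nothing in the build refers to an absolute machine-specific path, so the same script runs on any developer machine and on the build server.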

Local deploy

It should be very easy to do a local deployment of the application.  For .NET developers this usually isn’t a challenge, but in the Java world it can take some thinking on how to do this properly.

At any time, someone should be able to deploy locally to their machine.  Ideally, anyone should be able to take a build from the build server and deploy it with a single command.

It is all about the mindset

Basically, you have to think about a zero configuration development environment from the beginning if you really want to be successful at it.  It is much harder to bolt on later.

You do have to weigh the effort involved carefully though.  Most developers only set up their configuration once or twice. If you are going to have a growing team where you are constantly adding new developers, you should probably put considerable effort into getting as close to zero configuration as possible.  On the other hand, if you have a small team and don’t have new developers very often, it might not be worth the extra effort.  You have to find the balance.

In all honesty, my experience at my new job has been pretty good in contrast to some of the development environment setups I have seen.  There is a huge amount of consistency in configuration locations, which is good.

I’m looking forward to figuring out how to make it easier for the next guy though, once I understand everything better myself.

Developer Machine Considerations

I’m back from vacation.  And I’m actually starting a new job today.  I will be working remotely from home for a company called TrackAbout.

One of the first things I have been doing to get ready to start this new job is setting up my development workstation at home.  There are really quite a few considerations to think about when setting up your home office.



I was debating between going all out with the Intel 6-core chip and getting the nicely priced AMD 6-core.  I ended up choosing the AMD chip because the Intel chip was 3 times the cost, and the chipsets on the motherboards for the AMD chip are a little more stable since they have been around longer.

The processor doesn’t really matter that much anymore since processors are so fast nowadays.  What really matters is the hard disk.  In this case I opted for a super fast SSD.  256GB should be plenty of space, with an additional hard drive just for backups.  The speed improvement from a really good SSD is amazing.  It is the single best upgrade you can make to a developer machine.

I also went with a large amount of RAM because I know that if you need to run a virtual machine, RAM is going to be a big deal there.  16GB should meet any need just fine.

Finally, I am trying out a quad-monitor setup for the first time.  I have been using dual 24” monitors for a long time now, but I have always thought two more would be even better.  It is really important to be able to quickly see multiple things going on at once without having to switch between applications.

I’m planning on setting up the monitors like so:

  1. IDE
  2. Reference: web pages, API docs, etc.
  3. Communications: Twitter, Pidgin, email
  4. Secondary IDE for debugging, or SQL Server

Desktop or laptop

Most developers are getting laptops with docking stations these days, and while I see the appeal, I prefer a good old tower plus a cheap laptop instead.

  • With a desktop you can get beefier hardware for less $$$.
  • You can pick the hardware yourself (a big deal for me, since I do a large amount of research on each component).
  • You can drive more monitors natively (just buy a 4 port video card).
  • You have more upgrade options.

I still have a laptop, but it is a cheap light one.  The advantage here is that I can just remote desktop into my powerful machine and get all the benefits of both worlds.  If I am really ambitious I can even remote in with my iPad or phone.
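As a concrete example of that remote-in workflow, the Windows Remote Desktop client (mstsc) can be wrapped in a tiny script so every machine connects with the same flags.  The “devbox” hostname and the wrapper itself are placeholders I made up; the mstsc switches (/v, /f, /multimon) are the client’s standard ones.

```shell
# Placeholder wrapper for connecting to the powerful desktop from the cheap laptop.
# "devbox" is a made-up hostname; substitute your own machine's name.
rdp_cmd() {
  # /v: target host, /f: full screen, /multimon: span all attached monitors
  echo "mstsc /v:$1 /f /multimon"
}

# Print (rather than launch) the command so the sketch works anywhere:
rdp_cmd devbox
```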

So, while I can definitely understand the appeal for many to having a laptop that they can just disconnect and carry with them, I still prefer the desktop.


Virtual machine or not

I almost did it this time.  I keep going back and forth on this one.  I really want my development machine to be a VM so that I can just load it up and go, but after thinking about it more, I am not sure it is worth the cost.

I kept thinking about why I want to have a virtual machine for my development machine.  Really the answer comes down to me liking to have things in a nice separate little box.  Sure, I can drop my dev virtual machine on an external drive and take it with me, but I can achieve the same by remoting into it instead.

The thing that made me finally decide against the VM is the idea of optimizing for the rule rather than the exception.  The truth of the matter is that when I am on my PC, I am going to be spending 90% of my time doing development work, and doing it on that one machine.

When I look at it that way, I can’t see a good reason to take the performance hit of virtualization for the 10% case.  I can achieve most of the “neatness” of virtualization by cloning my disk at a good configuration.

I’ll probably still have some sandbox VMs for testing out “crazy stuff”, but I think I am going to go native so I can really get the most out of my new hardware.

I’ll just have to treat my Windows install like I treat my source code.  “Leave it better than when I started.”

Anything I’m missing here or not seeing?