I often get asked by beginner programmers what programming language they should learn.
This, of course, is a tough question to answer. There are so many different programming languages today that a new developer, or even a seasoned developer looking to retool their career, could learn.
I’ve actually tried to answer this question before in a YouTube video, but I want to revise and refine my answer a bit here, because some of my views have changed and I’d like to give a bit more detail as well.
The wrong question to begin with
It turns out that what programming language you choose to learn is not actually all that important.
Things have changed quite a bit since I first started my career in software development. Back then, there were far fewer programming languages to choose from and far fewer resources available for reference. As a result, the choice was much more important.
For example, I started out learning C and then C++. At that time, it took quite a bit of work to master the language itself and to understand all of the standard libraries that were available. A good C or C++ programmer back then had a very in-depth understanding of every nook and cranny of the language, and they needed this knowledge for two main reasons.
- References were not as widely available, so figuring out a piece of syntax or what libraries were available involved flipping through a huge book, rather than just typing some keywords into Google.
- Programming, in general, was done at a much lower level. There were far fewer libraries available to be able to work at higher levels, so we spent more time working with the language itself and less time working with APIs.
Contrast that with the programming environment of today, where not only is information widely available and can be accessed with ease, but also there are a large number of programming languages that we effectively use to program at a much higher level due to the vast amount of libraries and reusable components available to us today.
In today’s programming environment, you tend to not need to dive as deeply into a language to be effective with it. Sure, you can still become an expert in a particular programming language, and it is good to have some amount of depth in at least one language, but you can literally learn a new language in less than a week and be effective with it almost immediately.
Now, before your alarm bells go off and you write me off as crazy, let me explain that last sentence in a bit more detail.
What do you mean you can learn a programming language in a week?
What I mean by this is that once you understand the basic programming constructs available in just about all programming languages, things like conditionals, loops and how to use variables and methods, you can take that same knowledge to a different programming language and just learn how to do those same things in that language’s syntax. In fact, most IDEs today will even help you with the syntax part, making your job even easier.
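To make that concrete, here is a minimal sketch of those constructs in Java (the class and method names are my own, just for illustration). Almost any mainstream language has a direct equivalent of every line:

```java
public class Basics {
    // A method: a named, reusable block of code.
    static int sumOfEvens(int limit) {
        int total = 0; // a variable
        for (int i = 0; i <= limit; i++) { // a loop
            if (i % 2 == 0) { // a conditional
                total += i;
            }
        }
        return total;
    }

    public static void main(String[] args) {
        System.out.println(sumOfEvens(10)); // prints 30
    }
}
```

Swap the keywords and braces for another language's syntax and the logic stays identical, which is why a second language comes so much faster than the first.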
If you are already fluent in multiple programming languages, you probably agree with what I am saying, but if you have only ever learned one programming language, or none at all and are looking to learn your first, you might be a little skeptical. But take it from someone who has learned, and taught, programming languages in a week: the basics are pretty much the same.
Check out this book, which deals with this exact subject: Seven Languages in Seven Weeks: A Pragmatic Guide to Learning Programming Languages.
Now, if you are just starting out, it is pretty unlikely you’ll be able to learn a whole programming language in a week. This brings us to the question you may be asking yourself…
So, what programming language should I learn then?
Hold up. I’m still not quite going to answer that question, because it still isn’t quite the right question.
Instead of getting hung up on what programming language you want to learn, you should instead ponder what you want to do.
Learning by doing is the most effective way to learn, especially if you are doing something you have an interest in or find fun.
So, I always start new would-be developers out by asking them what they want to build.
Do you want to build an Android application? How about an iOS application? A web page? A game?
First, figure out the answer to this question and then let that answer guide you to choose the technology and programming language you will use to achieve that goal.
Don’t worry so much about which programming language or technology is most valuable. You can’t really make a wrong decision that you’ll regret later, because it won’t take you much time to retool if you need to. Once you have the basics down and have actually used a programming language to build something, you’ll find that doing it again will be much easier.
I like to encourage new developers to write a mobile application—especially an Android application.
Here are some reasons why:
- A complete Android application can be built by a single person. Creating a complete application will really help you to feel confident about software development and is one of the best ways to really learn to code. I spent a good deal of my early career only being able to create bits and pieces of things, and it was frustrating, because I never knew if I could really “code.”
- By learning Android, you learn Java and how to use libraries and APIs. This will give you a good programming language to start with and you’ll get some valuable experience with APIs.
- Google gives you some free help and makes things pretty easy to learn. Since Google really wants you to create Android applications, they have put quite a bit of work into creating easy to use tools and tutorials to help you be successful quickly. (I’ve also created some tutorials, which you can watch at Pluralsight here as well.)
- You can actually make money while learning and teach yourself a very valuable, up-and-coming skill set. Not only can you sell your Android application or monetize it in some other way, but you will be learning a complete set of software development skills for a platform that is in very high demand.
Aha! So Java it is then?
No, not exactly.
Summing it up
I’m actually working on some products to help developers manage their careers and lives better, which will cover topics like this one a bit more in-depth. If you are interested in receiving some updates when I publish an interesting article or video, or when I launch some of those products, feel free to sign up here. Don’t worry, I won’t spam you.
My latest course was just released on Pluralsight:
If you haven’t seen Firefox OS yet, it is worth taking a look. I think this mobile operating system has a large amount of potential, because of its focus on HTML5 from the start. I was surprised how nice Firefox OS looks and feels and how easy it is to develop for using skills you already have.
Here is the official course description:
Firefox OS is an exciting new mobile operating system that has the goal of taking HTML5 and making it a first class mobile citizen. In this course, I’ll teach you the basics of Firefox OS and show you by example how to create two real Firefox OS applications, as well as how to get those applications into the Firefox Marketplace.
In this course, we’ll start off by learning a little bit about Firefox OS and what makes it different. We’ll also talk about some of the benefits of a mobile operating system that uses web technologies you are already familiar with and take a quick look at the OS itself.
After that, we’ll create a very basic Hello World application as we go through the pretty simple process of installing the Firefox OS simulator and creating a very basic application to run inside it.
Once we have those basics down, we’ll go straight into creating our first application. First, I’ll show you how to create a hosted application which is an application that you can actually host on your own webserver, but is installed like any other mobile app on Firefox OS.
Then, we’ll create another full application as we learn how to create a packaged application, which is able to access more of the APIs that Firefox OS only exposes to apps that are actually installed on the device. We’ll still use the same HTML5 technologies you are used to for this application, but we won’t need to host the app ourselves, because the user will install it directly on their device.
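For reference, both hosted and packaged Firefox OS apps are described by a small manifest.webapp file. Here is a minimal sketch of one; the name, paths, and URL below are placeholders of my own, not from the course:

```json
{
  "name": "My App",
  "description": "A simple Firefox OS application",
  "launch_path": "/index.html",
  "icons": {
    "128": "/img/icon-128.png"
  },
  "developer": {
    "name": "Your Name",
    "url": "http://example.com"
  }
}
```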
Finally, I’ll take you through the process of deploying your Firefox OS applications. We’ll talk about the different options you have for getting your application into users’ hands, including distributing it yourself and using the Firefox Marketplace to host your apps.
It has been a while since my first iOS course for Pluralsight. Several things have changed in iOS development with Objective-C. Perhaps the biggest change is the use of storyboards instead of manually transitioning between views in iOS.
So, I created this new course, Beginning iOS 7 Development, to provide a fresh tutorial on getting started with iOS 7 development using Objective-C.
So, check it out if you are interested in iOS 7 development.
And here is the official course description:
Starting to learn iOS application development can be intimidating if you don’t have much experience with a Mac and haven’t used Objective-C. But, it doesn’t have to be a painful experience.
In this course, I’ll show you the basics of creating an iOS application as we build a complete iOS application learning what we need to know about Objective-C along the way.
We’ll start out by learning a little bit about iOS in general and the iOS development environment.
Then, we’ll jump right in and create our first Hello World iOS application as we set up our development environment and learn the basics of the IDE we use for iOS development, Xcode.
After that, I’ll show you the core things you need to know to build any application: how to build a user interface and interact with it. We’ll learn how to use Xcode’s Interface Builder tool to create a very basic UI and interact with it.
Once we’ve got the basics covered, we’ll start building our first real application. We’ll learn a few new concepts as we build our application, like how to use the iOS storyboarding feature to create a multi-screen application and how to set up navigation in our application.
Finally, we’ll finish up our iOS application by learning how to add user settings to the app and how to show the user a simple notification through the use of alerts.
So, if you are looking to get started with iOS development and are looking for an easy and gentle way to get introduced to the environment and tools while building a real application, this course is exactly what you need. You won’t be an expert at iOS development after taking this course, but this course will definitely give you a head start in learning the platform and show you the basics you need to know before taking a more advanced course.
Quick side note: if you are stopping here reading this post, you are probably the kind of developer that cares about your career. I’m putting together a complete package full of information on how to really boost your software development career and increase your earning potential. It is only available for preorder right now, and I’m only going to be selling it at this heavily discounted price for a short time. It will also include some exclusive interviews with famous software developers like Bob Martin, Jon Skeet, and Jeff Atwood sharing the secrets of their success.
We are in the middle of a period of rapid change in the software development world and I think the pace of change will only continue to speed up in the next few years. So, I thought it would be a good idea to post some software development predictions of what I think will happen in our software development world in the near future.
Mobile Software Development Predictions
Mobile is perhaps the biggest area that everyone is curious about, and it is really starting to seem that this space is ripe for an overhaul. None of the major players are innovating in this space at all.
Apple keeps releasing new versions of its mobile OS and new iPhones, but they aren’t really doing anything new or different. I also think moving to the flat design is going to turn out to be a bad move, because in doing so they took something that looked flashy and sophisticated and made it look extremely simple and plain.
On the Android side, there also isn’t much innovation. The OS itself seems pretty stagnant and the devices aren’t really adding any new exciting capabilities.
I think consumers and developers alike are getting a bit tired of the App Store / install apps model. It seems really burdensome and the more apps you get, the more updates you need to do and the more difficult it is to keep track of them all.
I think consumers are also finding that they don’t really like having so many devices to have to plug in, sync data with and carry around. I’m starting to feel the weight myself of having a fitbit, a phone, a tablet, a notebook, and a desktop. (Those are all links to the hardware I have, if you are curious.) I’d really like to have a single device that can adapt to a couple of different form factors and do everything I need.
My prediction is that we are going to start seeing the devices and OSes merge both on the Apple and Android side. I fully expect to start seeing devices from both Google and Apple that are tablet / notebooks which are similar to the Windows 8 notebooks we are seeing today. There is just no reason to have a tablet and a laptop when they can be easily combined into a single device with a detachable keyboard.
I am not sure this will happen, but I’d actually like to see it taken one step further and see the idea Ubuntu Mobile is pioneering take over. With Ubuntu Mobile, the idea is that your phone has enough processing power to be your laptop or desktop computer, so you can just carry around your phone and plug it into a monitor and keyboard. The actual OS adapts to the form factor depending on whether it is plugged in or not.
I’d actually like to see things go one step further than that (and I imagine they eventually will), and see a “computer” reduced to a very small block that you carry around with you which is able to display itself on any display technology that is around and utilize any external devices. I imagine, the phone form factor would still be the best way to use this device for now, since you probably want to have a portable display anyway, but I’d like to just be able to pull out this device and tell it that I want the output to go to my TV or my monitor and for it to remember that it should utilize my keyboard and mouse for that profile. At that point, I might have a tablet device as well, but it would only be a dumb screen that receives a display signal wirelessly from my phone.
I am getting a little far into the future, though. Realistically, I think next year we should see a merging of OSes from both Google and Apple, and I would really expect Google to take the lead here, because I can’t think of a single competitive advantage that Apple has retained at this point. There is no “magic” anymore with Apple products; the fairytale is over.
Web Software Development Predictions
One web platform worth watching is Meteor. I don’t know if Meteor itself will become very popular, but the ideas within the platform are very likely to spill over to the rest of the web and become the de facto way of doing things.
Meteor also gets rid of REST and replaces it with data synchronization that just seems to magically work. This is extremely powerful, because I am finding there is a huge amount of time wasted creating and consuming REST services just to sync data between two systems.
Right now the browser is an OS, but I think within the next year or so, we’ll see the PAGE or tab become the OS. And that is essentially what technologies like Meteor are doing. They are deploying their infrastructure code to the first page load and then running the app in their own environment on a tab in your browser.
I think we’ll see a much bigger adoption of Google’s Go language, because it seems to be a very good general purpose language that has many of the performance characteristics and power of C and C++, but has the ease of use and leverage of Java and C#.
We’ve sort of reached the place in the C# and Java space where just about everyone is doing “cargo cult programming.” What I mean by this is that a majority of developers are writing unit tests and using IoC containers without understanding what value those practices bring or even if those practices are actually bringing any value.
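To be clear about what those practices are actually for: dependency injection just means passing a class its collaborators instead of letting it construct them, and that is what makes the class testable; an IoC container only automates the wiring. Here is a minimal sketch in Java (the types and prices are invented for illustration):

```java
// The dependency is expressed as an interface...
interface PriceSource {
    double priceOf(String item);
}

// ...and handed in through the constructor, not created internally.
class Checkout {
    private final PriceSource prices;

    Checkout(PriceSource prices) {
        this.prices = prices;
    }

    double total(String... items) {
        double sum = 0;
        for (String item : items) {
            sum += prices.priceOf(item);
        }
        return sum;
    }
}

public class Demo {
    public static void main(String[] args) {
        // In a unit test you can substitute a fake, no container required.
        PriceSource fake = item -> 2.50;
        Checkout checkout = new Checkout(fake);
        System.out.println(checkout.total("tea", "scone")); // prints 5.0
    }
}
```

If you understand why that substitution matters, a container is a convenience; if you don’t, it is ritual.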
This complexity and confusion is preventing many new developers from learning C# and Java, and it is starting to seem that Python is now the standard beginner’s language.
I expect that we’ll see a rise in both Python and Go. Google has the potential to greatly increase the popularity of either of these languages if they decide to move Android / Chrome’s primary language from Java to either Go or Python. (My bet would be on Go.)
We are overdue for a new statically typed language that incorporates all the things we’ve learned in the past decade from Java and C#, but I don’t see that language on the horizon, so I am wondering if statically typed languages will eventually fade away. The increase in tooling and support for dynamic languages has, for the first time in history, made this a possibility, and I am actually starting to wonder if I might be ready to turn over to the dynamic side after all these years of being a staunch defender of statically typed languages. (To be honest, the lines between what is statically or strongly typed and what is not are blurring anyway.)
Only time will tell
Well, those are my software development predictions, and to some degree my hopes, but I’m not very confident in many of them at this point. The only one I’d bet on is that both Google and Apple merge their mobile and desktop OSes and develop ways to build web / native hybrid apps. That is the only direction that makes sense to me.
What do you think? How far off am I? What have I left out?
A bit late getting this out, but I published a new course for Pluralsight.
This course was really fun to create. I got to use several of my favorite technologies.
Here is the course description:
It can be very difficult to build a cross platform application that will work on the web as well as popular mobile platforms like Android and iOS.
In this course, I’ll take you through the complete process of creating an application that works on each of the platforms and uses a REST based backend API to share data and business logic—all using C#.
We’ll start off this course by learning how to build a REST based API using the popular open source framework ServiceStack. I’ll show you how easy it is to get ServiceStack set up and even how to store data for the API using a Redis database.
Next, I’ll show you how to create an ASP.NET MVC 4 application that uses the REST service we built to display its data and implement its logic. We’ll learn how to use jQuery to make AJAX calls to a REST based API from within our MVC 4 application.
Then, we’ll learn how we can use C# and the .NET framework to build an Android application using the Xamarin tools. We’ll use the same REST API we created earlier and build a real native Android application that is able to consume that API for implementing its logic and displaying data.
Finally, I’ll show you how to do the same thing for an iOS application. We’ll again use C# to build a real native iOS application with the Xamarin tools and learn how to consume REST based web services from iOS.
So, if you are a C# developer and don’t want to have to learn several other programming languages to build cross platform applications, you’ll definitely want to check out this course. By the end of this course, you’ll have the skills you need to implement an end-to-end cross platform solution, complete with a REST based API backend, all in C#.
I’m in the unique position of having developed with almost all of the major cross platform mobile development solutions.
I’ve published courses for Pluralsight on:
- MonoTouch (Xamarin.iOS)
- Mono for Android (Xamarin.Android)
- Appcelerator Titanium
- Native Android Development
- Native iOS Development
- MonoGame (Cross platform game development)
After working with all these different solutions and investigating others, I thought I would publish my thoughts on each of these choices and the differences between them.
I’m mostly going to focus on Android and iOS because even though there are other competitors, those are the only major players that exist at present. Everyone else has a relatively tiny market share.
The most obvious way to build mobile applications is to use the native tools that come with the platform.
For Android, it is Java and either Eclipse or the new Android Studio, along with the Android SDK.
For iOS, it is Objective-C and Xcode.
For Windows Phone it would be C# and Visual Studio.
I built my first mobile applications for iOS and Android natively. I started out with an Android version of my application and then ported over most of the code and design to iOS.
This was a fairly difficult process, and I did not have the ability to share any code. I had to learn both platforms along with their SDKs, and I had to learn Objective-C from scratch, since I didn’t know anything about Mac development before I started writing my first iOS application.
In general, I wouldn’t recommend this approach because you are going to waste a large amount of time maintaining two completely separate code bases and you really don’t gain much by using the native tools.
However, I would recommend that anyone seriously thinking about cross platform mobile development at least develop a simple app natively on both Android and iOS. Doing this will make it easier for you to understand what is going on under the abstraction layer that a cross platform mobile development solution provides, and it will help you see the value, or lack of value, in a cross platform solution.
The Xamarin tools basically allow you to develop an Android or iOS application with C# and share a good amount of the code.
When you write an application using the Xamarin tools you are basically using an abstraction on top of the real SDKs for iOS and Android.
What this means is that you will end up with a fully native application with a fully native user interface on each platform.
This also means that you will be limited to some degree in the amount of code you can share between the platforms.
Typically when I develop an application using the Xamarin tools, I will build a core of the application that will be shared code and have the iOS, Android, and even Windows Phone versions of the application depend on this core library.
With this approach you may be able to reuse somewhere around 60-70% of your code without even trying very hard.
But you can take things further: either develop your own abstractions using an architecture like MVC or MVVM, so that the only code you are not reusing is the actual views themselves, or use a framework that does this for you, like MvvmCross. This approach is, of course, a little more difficult to get started with, but it can provide a much higher percentage of code reuse, perhaps around 80-90%.
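The shared-core idea itself is language-agnostic; with Xamarin it would be C#, but the shape of it can be sketched in Java-like code. In this sketch (all names are invented for illustration), the view model holds the logic every platform shares, and each platform's view layer only wires its own widgets to it:

```java
// Shared core: pure logic, no platform types, reusable on every platform.
class LoginViewModel {
    private String username = "";
    private String password = "";

    void setUsername(String u) { username = u; }
    void setPassword(String p) { password = p; }

    // The rule each platform's UI binds to; identical everywhere.
    boolean canSubmit() {
        return !username.isEmpty() && password.length() >= 8;
    }
}

public class CoreDemo {
    public static void main(String[] args) {
        // A platform-specific view would call these from its widget events.
        LoginViewModel vm = new LoginViewModel();
        vm.setUsername("alice");
        vm.setPassword("hunter2pass");
        System.out.println(vm.canSubmit()); // prints true
    }
}
```

Only the thin layer that maps this model onto Android, iOS, or Windows Phone widgets is rewritten per platform, which is where the higher reuse numbers come from.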
As for the tooling, the Xamarin tools are awesome!
Xamarin has its own IDE called Xamarin Studio. This IDE is cross platform and is very well designed and easy to use.
The Xamarin tools also have a plugin for Visual Studio which allows you to develop your application in Visual Studio. You can even develop an iOS application from Visual Studio, but you still need a Mac to perform the build. (The tool uses a remote call to the Mac to perform the build.)
Xamarin also recently introduced a component store which makes it easy to find reusable components directly from Xamarin Studio and plug them into your application.
- Mobile Development with C#: Building Native iOS, Android, and Windows Phone Applications (by a guy I highly respect, Greg Shackles)
- Developing C# Apps for iPhone and iPad using MonoTouch: iOS Apps Development for .NET Developers (by another Xamarin genius, Bryan Costanich)
PhoneGap is probably the next most well known cross platform mobile development solution, but it is also somewhat confusing.
What does all this mean?
Well, it means that if you are developing a PhoneGap application, you can develop it just like a cross platform mobile web site. You can use any mobile framework you like, for example Sencha Touch, jQuery Mobile, etc.
Because your PhoneGap application will be running in a browser it will be more like a web application than a native application. The user interface you design will not use the native controls and will be subject to the limits and speed of a web browser.
This also means that you might have to write some platform specific code to make up for differences between the browsers, but you can basically assume that you will be able to share most of the code.
The tooling for PhoneGap depends entirely on the environment you want to build the app with. You can develop in whatever environment you would like, and in most cases you use a plugin for the IDE. There are quite a few manual steps, though, so getting set up is not that easy.
One big benefit to PhoneGap, though, is PhoneGap Build, which allows you to upload your project from whatever environment you created it in and have it built automatically for the other platforms.
I want to mention this platform next because many developers confuse this with PhoneGap.
With Appcelerator Titanium, you build your application against a custom cross platform API. This is different from PhoneGap or Xamarin, because with Xamarin you use a wrapper around the real native SDKs, and with PhoneGap you use whatever you want to build an HTML5 web application.
With Titanium you actually write all your code against their SDK which includes UI components as well. So this means that when you write a Titanium application you actually can write a cross platform user interface.
Appcelerator Titanium apps are actually compiled down to completely native applications that use the real native controls for the platform.
For example, in Titanium you can programmatically declare a button and specify its layout and some attributes about that button. When you compile your application, the button will appear as a real native Android button on Android and a real native iOS button on iOS.
Does that mean you can write one user interface and have it work everywhere? Maybe, but it is highly unlikely. Many of the UI elements and interaction paradigms are cross platform, but parts are not. For example, in iOS you have the idea of a Navigation Controller, which keeps track of the history of the screens you navigated through and lets you go back; Android doesn’t have such a control. But Titanium does have support for platform specific controls; it just means that you have to make some of your code conditional based on the platform.
All this is to say that you can program to the lowest common denominator and get a fully cross platform application with close to 100% code reuse, but even though you’ll have native controls, the result might not look that great.
The reality is, if you are using Titanium, you’ll probably want to tailor some parts of the application to the specific platforms.
Titanium recently introduced an MVC framework called Alloy, which greatly simplifies creating Titanium applications and takes out the tedium of programmatically creating all the user interfaces. With this framework, you declare your user interface using XML markup, which is pretty straightforward. You then use controller classes to populate and interact with the UI. It also has the concept of style sheets, which are very similar to CSS.
One of the most impressive things about Titanium though, is its cloud offering. Titanium basically lets you have access to their complete backend of cloud services which allow you to easily create what can be best described as Facebook-like functionality without having to code your own backend. You can use the cloud services to manage users, authenticate them, store data about the users, like social graphs and even just store key value pairs. I was really impressed by this functionality.
- Appcelerator Titanium: Patterns and Best Practice (by Boydlee Pollentine and Trevor Ward)
More cross platform mobile development options?
There are obviously many more options out there, but I picked these three for standard application development because from my experience these are the most serious widely used offerings.
These three offerings also encompass just about all the ways to do cross platform mobile development:
- Shared code, but separate and native UI (Xamarin)
- HTML5 App running locally (PhoneGap)
- Fully shared code native app (Titanium)
There are obviously trade-offs to each of these approaches and nothing is quite perfect, but I do consider all of these good solutions at this point.
In general, I prefer the Xamarin approach because I like having control over the native user interface completely and I like being able to develop in C#.
If I were to develop with PhoneGap today, though, I’d most likely use Icenium, which is basically an IDE and a set of build and testing tools built around Cordova (the open source part of PhoneGap) that makes it much easier to develop and deploy.
Don’t forget to check out my Pluralsight courses if you want to learn how to get up and running quickly with some of these mobile development frameworks.
Personal privacy is over.
The world knows more about you than you do and soon it will know even more.
We can keep fighting the battle to secure our privacy or we can learn how we need to live in the coming age where all of our actions are potentially in public view.
Before you get worried that I am going all political on you, don’t worry, I’m not condoning the invasion of personal privacy or the eradication of it, but at the same time I’m not supporting it either.
I’m simply looking at the patterns that are emerging as the technology of our society increases, making this transformation inevitable.
I’m actually staunchly opposed to expressing my political thoughts publicly in any forum, even my own blog, and I often recommend others do the same, mostly because of this transformation that is taking place. I’ll talk more about that a little later on, but first let me tell you why privacy is indeed dead, or at the very least dying.
It’s more than Google Glass
Google Glass itself may not change the concept of personal privacy, but wearable computing is certainly the future.
What Google Glass does do is set the stage for wearable computing to go mainstream.
I hear many people complaining about Google Glass saying that they would never wear the technology in public, because they would look like a geek or a dork.
But, this is exactly what makes innovations like Glass so important. The first thing that changes in society is perceptions.
Google Glass may not have the momentum to bash through the barrier of the “dorkiness” of wearing a computer on your head, but it will make a dent in that wall and over time that wall will come crashing down.
When everyone is wearing cameras it will be impossible to control them
We’ve already been through this before.
We’ve been through it at least two times that I can think of.
First, there was the USB drive.
All of a sudden we could plug a small device into just about any computer and pull data off of it or put data on it.
Computers without disk drives or network access were no longer inaccessible.
The data on them was no longer securable.
Sure, many companies tried to secure the PCs in their buildings, but most realized they would just have to deal with this new reality: if a person had access to data on a computer, they could take that data home if they wanted to.
The same type of thing happened when all cell phones started getting quality cameras on them.
Secure facilities tried to prevent people from taking pictures by disallowing employees to bring in cell phones that had cameras, but that didn’t last long.
It became apparent pretty quickly that it would be just about impossible to prevent human beings who now were part human, part iOS or Android, from bringing their cell phone into a building. And, even if you could, someone who really wanted to take a picture would find another way. The technology that brought tiny cameras to the backs of every cell phone also pioneered the mass creation of tiny cameras that could easily be hidden anywhere.
Government and family matters have already been thrust into public view
We currently live in a society where it is just about impossible for public figures or governments to act in public and keep those actions secret.
Think about all the times in the last few years when a police officer stepped over the line or a government tried to quell a public protest by using violence or excessive force.
Each and every time, not only were there photographs and videos of the incidents, but the information spread to the entire world almost instantly through mediums like Twitter and Facebook.
The fact of the matter is, it is almost impossible to hide or cover up anything that occurs in public view.
Consider the amazing and tragic events that recently occurred at the Boston Marathon. In a huge crowd of people, it took only about a day to identify two suspects out of the thousands present, using images and video provided by the public.
This event was significant, because it showed that even when people aren’t trying to capture data about you in public, they are capturing data.
You don’t have to worry about the government tracking you; the government doesn’t have the power or capability to do it. Instead, you are tracking yourself, and you are tracking the government.
Oh, and if you still think family matters are private, you probably haven’t heard of this thing called Facebook.
Don’t think you can get a divorce, or even get into a heated argument with your teenager and not have it become public knowledge on Facebook.
The problem is that there are now too many leaks into private family matters, and it is too easy for those leaks to spread the information.
Worse yet, those naked pictures you took of yourself, or unfortunately your less worldly-wise teenager took of themselves, that you thought no one would see… all I can say is “good luck with that.”
The reality is the trend against privacy will not reverse
Not only are you willingly providing your own private information, but companies are actively mining it.
From tracking cookies, to search histories, to friends graphs, advertising link clicks, foursquare check-ins, what you post on Twitter and more, everything is being recorded and tracked.
Each piece of information by itself is not very valuable, but when you combine all this information together, a very real and precise silhouette begins to emerge from that frosted glass window you are hiding behind.
And, like I said before, even if Google Glass is not successful, it will not be long before every human walking around, in public and in private, is wearing a camera that you not only can’t control, but likely won’t even be able to see.
Not only will we all eventually be wearing cameras, but the data from those cameras will be instantly uploaded to the cloud, where it will become a permanent record.
There are several unstoppable factors that will continue to improve over time, which will fuel this trend:
- Miniaturization of technology
- Increase in efficiency of batteries
- Increase in processing power
- Prevalence of social networks
As technology gets smaller and is able to be more mobile, and the ability to crunch big data and distribute data through social networks increases, privacy will greatly diminish.
The reality is we are transitioning from a society where private is default and public is a choice, to one where public is default and making things private requires considerable effort and in some cases is impossible.
How to live in this new “public by default” world
Rather than uselessly spending our energies trying to fight the trend of the world, it is better to spend those energies adapting our behavior to fit the world we are living in now and the one that is coming.
It starts with getting in the habit of assuming everything we put on the internet will not only be made public, but will likely be a permanent record that is easily searchable.
Many times throughout the day I watch my Twitter or Facebook feed fly by and I cringe at all the people who are posting their own private opinions that should not be made public.
I’m not saying you can’t have an opinion, but I am saying that you have to be aware that the opinions you express will be visible to more than just the people you share them with, and will become a permanent record from which you cannot be separated.
Are you a republican? Are you a democrat? Perhaps you are a libertarian or a communist? Do you support gun control? Are you pro-choice?
Good! Stand for something, but there isn’t much of a need to say it out loud. Let your actions stand for you, they are more powerful than words anyway.
The problem with tweeting about your political views is that they may change. You might be leaning one political direction today, but 5 years from now you may have changed your mind completely. But guess what won’t ever change? That’s right, the permanent record of what you said on the internet. If you ever decide to run for office, or even go on a date, those words may come back to haunt you.
And even if your mind doesn’t change, do you really want to decisively set half of the world’s population directly against you? I can’t imagine many scenarios where this is a good idea. Even in politics, bipartisan political figures are usually much more successful. (Ok, I made that up, but I think it’s true.)
I’m getting a little bit away from the point, but what I am trying to say is that you need to strongly consider things before you put them on the internet, because what you say online is public and permanent.
It goes beyond just the internet of course, even though it may start there.
We have to prepare ourselves for a world in which the moment we step out our door our actions are recorded.
In this kind of world, we have to be careful not to try to hide things that are likely to be destructive to us if revealed, because we know that what is hidden is likely to be revealed.
It means being more upfront and honest in our dealings, and carefully managing our image so that the focus falls on the presentation we desire while diminishing the focus on the image we do not.
When you don’t have control over the information itself, you have to instead focus on controlling how that information is presented.
It all comes down to living in such a way that clearly reflects the values you are trying to represent and knowing that perception is 100 times more important than reality.
There is no time like the present
If you are still living like you can hide your tracks and like your track record doesn’t matter, you’d better stop now, before it’s too late!
The answer isn’t to try to find better hiding places for your secrets and to wall yourself away from the world, but instead to accept the reality of the situation and find the best way to adapt to it.
“Man is not what he thinks he is, he is what he hides.”
― André Malraux
I really dislike using a keyboard and a mouse to interact with a computer.
Using a mouse is a more universal skill—once you learn to use a mouse, you can use any mouse. But, keyboards are often very different and it can be frustrating to try and use a different keyboard.
When I switch between my laptop and my desktop keyboard, it is a jarring experience. I feel like I am learning to type all over again. (Of course, I never really learned to type, but that is beside the point. My three-finger typing style seems to work for me.)
When I switch to a laptop, I also have to contend with using a touchpad instead of a mouse, most of the time. Sure, you can plug in a mouse, but it isn’t very convenient and you can’t do that everywhere.
I also find that no matter how awesome I get at keyboard shortcuts, I still have to pick up that mouse or use the touchpad. Switching between the two interfaces makes it seem like computers were designed for three-armed beings, not humans.
Even when I look at a laptop, it is clear that half of the entire design is dedicated to the keyboard and touchpad—that is a large amount of wasted space.
I’m not going to say touch is the answer
You may think I am going in the direction of suggesting that tablets solve all our problems by giving us a touch interface, but that is not correct.
Touch is pretty awesome. I use my iPad much more than I ever thought I would. Not having the burden of the keyboard and mouse or touchpad is great.
But, when I go to do some text entry on my tablet or my phone, things break down quite a bit.
On-screen keyboards are pretty decent, but they end up taking up half of the screen and the lack of tactile feedback makes it difficult to type without looking directly at the keyboard itself. Some people are able to rely on autocorrect and just let their fingers fly, but somehow that seems dirty and wrong to me, as if I am training bad habits into my fingers.
Touch itself is not a great interface for interacting with computers. Computer visual surfaces are flat and lack texture, so there is no advantage to using our touch sensation on them. We also have big fingers compared to screen resolution technology, so precision is also thrown out the window when we relegate ourselves to touch interfaces.
It is completely silly that touch technology actually blocks us from viewing the part of the screen we want to touch. If we had greaseless pointy transparent digits, perhaps touch would make the most sense.
Why did everything move to touch then? What is the big thing that touch does for us?
It is pretty simple, the only real value of touch is to eliminate the use of a mouse or touch pad and a keyboard.
Not convinced? I wasn’t either, till I thought about it a bit more.
But, consider this… If you were given the option of either having a touch interface for your tablet, or keeping the mouse-like interface, but you could control the mouse cursor with your mind, which would you prefer?
And that is exactly why touch is not the future, it is a solution to a specific problem, the mouse.
The real future
The good news is there are many entrepreneurs and inventors that agree with me and they are currently building new and better ways for us to interact with computers.
Eye tracking
This technology has some great potential. As the camera technology in hardware devices improves, along with their processing power, the possibility of tracking eye movement to essentially replace a mouse is becoming more and more real.
There are two companies that I know are pioneering this technology and they have some pretty impressive demos.
TheEyeTribe has an “EyeDock” that allows for controlling a tablet with just your eyes.
They have a pretty impressive Windows 8 tablet demo which shows some precise cursor control using just your eyes.
Tobii is another company that is developing some pretty cool eye tracking technology. They seem to be more focused on the disability market right now, but you can actually buy one of their devices on Amazon.
The video demo for PCEye freaks me the hell out though. I don’t recommend watching it before bed.
But Tobii also has a consumer device that appears to be coming out pretty soon, the Tobii REX.
Subvocal recognition (SVR)
This technology is based on detecting the internal speech that you are generating in your mind right now as you are reading these words.
The basic idea is that when you subvocalize, you actually send electrical signals that can be picked up and interpreted. Using speech recognition, this would allow a person to control a computer just by thinking the words. This would be a great way to do text entry to replace a keyboard, on screen or off, when this technology improves.
NASA has been working on technology related to this idea.
Gesture control
You’ve probably already heard of the Kinect, unless you are living under a rock. And while that technology is pretty amazing, it isn’t exactly the best tool for controlling a PC.
But, there are several other new technologies based off gesture control that seem promising.
There are two basic ways of doing gesture control. One is using cameras to figure out exactly where a person is and track their movements. The other is to use accelerometers to detect when a user is moving a device (the Wii remote for Nintendo’s Wii, for example).
A company called Leap is very close to releasing a consumer-targeted product called the Leap Motion, priced at only $79. They already have plans to sell it in Best Buy stores, and it looks very promising.
Another awesome technology that I already pre-ordered, because I always wanted an excuse to wear bracers, is the MYO, a gesture controlled armband that works by a combination of accelerometers and sensing electrical impulses in your arm.
What is cool about the MYO is that you don’t have to be right in front of the PC, and it can detect gestures like a finger snap. Plus, like I said, it is a pretty sweet looking armband—Conan meets Blade Runner!
Obviously video-based gesture controls won’t work well for mobile devices, but wearable devices like the MYO that use accelerometers and electrical impulses could be used anywhere. You could control your phone while it is in your pocket.
Augmented reality and heads up displays
One burden of modern computing that I haven’t mentioned so far is the need to carry around a physical display.
A user interface is a two-way street, the computer communicates to the user and the user communicates to the computer.
Steve Mann developed a technology called EyeTap all the way back in 1981. The EyeTap was basically a wearable computer that projected a computer generated image on top of what you were viewing onto your eye.
Lately, Google Glass has been getting all the attention in this area, as Google is pretty close to releasing their augmented reality eyewear that will let a user record video, see augmented reality, and access the internet, using voice commands.
Another company you may not have heard of is Vuzix, and they have a product that is pretty close to release as well, the Smart Glasses M100.
Brain-computer Interface (BCI)
Controlling a computer directly with your mind might sound like science fiction, but there are a few companies that are putting together technology to do just that.
I actually bought a device called the MindWave from NeuroSky, and while it is pretty impressive, it is still more of a toy than a serious way to control a computer. It basically is able to detect different brain wave patterns. It can detect concentration or relaxation. You can imagine, this doesn’t give you a huge amount of control, but it is still pretty fascinating.
I haven’t tried the EPOC neuroheadset yet, but it has even more promise. It has 14 sensors, which is a bit more intrusive, but it supposedly can detect your thoughts regarding 12 different movement directions, emotions, facial expressions, and head rotation.
So where are we headed?
It is hard to say exactly what technology will win out in the end.
I think we are likely to see aspects of all these technologies eventually combined, to the point where they are so seamlessly woven into computer interaction that we forget they even exist.
I can easily imagine a future where we don’t need screens, because we have glasses or implants that directly project images on our retinas or directly interface with the imaging system in our brains.
I easily see us controlling computers by speech, thought, eye movement and gesture seamlessly as we transition from different devices and environments.
There is no reason why eye tracking technology couldn’t detect where our focus is and we could interact with the object of our focus by thinking, saying a command or making a gesture.
What I am sure of though is that the tablet and phone technology of today and the use of touch interfaces is not the future. It is a great transition step to get us away from the millstone around our necks that is the keyboard and mouse, but it is far from the optimal solution. Exciting times are ahead indeed.
Sophia got her first introduction to the iPad at about 3 months old.
As soon as she could sit in a rocker chair my wife and I let her start playing on the iPad.
We started off with just one game, Interactive Alphabet by Piikea. It is basically a game that goes through the Alphabet and lets the baby interact with some of the pictures.
We added a few more ABC type of games as she got a bit older, but we mainly just let her play with that one game, because we figured it would be great to let her start seeing letters and learning the alphabet as early as possible.
Right from the get-go she would swat at the screen. She didn’t immediately understand the cause and effect, but she quickly grasped the idea that when she hit the screen, something would happen.
After a while she became pretty good at being able to do the simple things in the ABC game. She would still swat the screen, but purposefully swat certain areas in order to do something like build a sandcastle.
Around 12 months, we started adding a bunch more apps. We added some interactive books and a couple of simple games.
Sophia was learning how to do many more things in the apps. She could point with a couple of fingers and very purposefully touch certain areas of the screen.
She really didn’t have any concept of touching and dragging though, and would often run into problems of having one hand leaning on the iPad which was causing the other hand’s touches not to register.
She’s now 18 months and she is an iPad master.
Sophia can now:
- Turn on the iPad
- Unlock the iPad
- Pick which app she wants to play out of her folders
- Use the home button to exit an app
- Double press the home button to switch to a recent app
- Navigate through menus in apps and get back to the app
- Use the table of contents in books to pick the page she wants
She also asks for the iPad by name. She has about 40 apps on the iPad, which she commandeered from my wife. It seems like she is learning something new every day now.
The world is changing
Our children, especially the youngest ones, are growing up in an entirely different world than has existed ever before.
I know this has been said many times before and it could be argued that my generation also grew up in an entirely different world than my parents, but I think the change we are seeing now is much more substantial.
I predict that this generation will be known as the tablet generation. With Windows 8 now released, we are going to see a rapid decline of non-touch devices. In a few years, all laptops will have touch screen retina displays.
There are some fundamental changes going on in how we interact with computers and even what defines a computer.
Yes, I know you’ve heard all this before, but why is this important?
It is important because the real shift I see is the shift between a primarily analog focused world view to a primarily digital focused world view.
For me the iPad or the computer is an attempt to replicate some process or experience in the real world. No matter how long I work with computers or use these devices, I cannot escape my world view. Analog always comes first.
For our children things are different.
I can’t say for sure that picking up a pencil and being able to write is a skill that will even be necessary.
It is very likely that this coming generation will view things through the digital lens first and the analog world will be secondary.
I don’t mean they’ll be jacked into a computer all day and live in a virtual world, but I do think that while we try to relate software to tangible things, the coming generation is likely to view software as primary and tangible objects as secondary.
Think about music. Ever had an 8Track? How about a cassette tape? CD anyone?
How do we think of music today? One word comes to mind—MP3.
What started out as a physical record eventually lost its purpose and is now so heavily digital that we tend to think in terms of the digital and don’t even consider the tangible anymore.
The same thing is currently happening with books, movies and to some degree money.
Why we let Sophia be an iKid
With the changing world, computer literacy is more important than ever before.
Even in the world we live in now, it is just about impossible to get any kind of non-labor intensive job without being able to use a computer.
If computer literacy is arguably going to be the most important skill for anyone to have in the future, why not start as young as they start to show an interest?
I think it is a huge asset to develop in our children the ability to use a computer as easily and mindlessly as the ability to eat with a fork and a spoon.
I wish I had that ability. I could be so much more efficient if I would stop writing down lists on pieces of paper and instead pull up my iPad or other tablet to jot down ideas and completely replace paper in my life.
And sure I could learn to wean myself off of the analog world, but I want my daughter to be able to think first in the digital world. She’ll be way more efficient and see things from a better perspective than I ever will.
Aside from that, my wife and I find that the iPad is an excellent learning tool to help Sophia learn to learn.
There are so many things she is able to teach herself using that iPad. At 18 months, she already:
- Has a vocabulary of over 100 words
- Can count to 4 in order and count actual objects
- Can say most of her ABCs
- Can recognize most letters
- Can name many animals and objects
Much of what she knows she learned at her own pace based on what she was interested in playing on the iPad.
For example, one week she played mostly the numbers apps; then for a whole month she just wanted to do alphabets.
The iPad gives her the freedom to be able to choose what she wants to learn and to do it effortlessly. She is developing the skills to be able to self-educate. Sure, we still read books to her and try to teach her, but she seems to get a large amount of her knowledge from what she learns playing on the iPad. (At least the reinforcement of what she has learned.)
Overall I don’t think there is any reason to stop her from playing on the iPad. I know some people equate it to TV, but I think it is fundamentally different. The apps she plays on the iPad are interactive. You can’t mindlessly sit and watch the iPad. Instead, there is a constant feedback loop that is not present with TV.
Also, we can carefully monitor the apps she uses. The TV is an open system that brings unknown content into your house, whereas the iPad can be used as more of a closed system.
To summarize, I think we are preparing her for the future and giving her a huge head start in life.
How to get started
So you may be wondering how to best go about getting your baby or toddler started with the iPad.
While I’m not a child development expert, I can give you some advice from what my wife and I have learned in this process.
You can of course get a newer iPad, another tablet, or the iPad mini, but just be aware of two things.
- Babies don’t have very precise hand coordination, so small screens are going to be hard for them to use.
- Babies tend to throw things, especially when they get frustrated.
The next thing you need is apps. My wife, Heather, wrote up this section for me. So, if you notice the grammar is perfect and is written with a much higher skill level than my usual writing, that is why.
(Please let me know if you have some other ones appropriate for the ages. I’d like to make a nice resource for other iKid believers.)
3 Months – 12 Months
- Interactive Alphabet by Piikea. This is by far the best app I’ve seen for the youngest of kids. It has a baby mode which prevents babies from exiting by accidentally batting a menu button and most of the items respond to simple taps or swipes.
- Juno’s Musical ABCs by Juno Baby. This app also goes through the alphabet but with a musical theme. The interactions aren’t as neat as the Piikea app and the button to return to the menu is prominent and easily pressed.
- Peekaboo Baby. This is my app. Warning, it is very simple. I was learning MonoTouch and wrote it in a day as an experiment.
12 Months to 18 Months
- Dr. Seuss’s ABC and Green Eggs and Ham. These stories have autoplay, read-to-me, or self-reading features and will say the word of anything the child touches on the screen. There is actually an entire line of the Dr. Seuss books, but I prefer these two. The ABC app is great because each letter is said multiple times. The Green Eggs app is my daughter’s favorite, and I suspect this is because so many of the words in this story (eggs, boat, house, mouse, car, train, etc.) are ones most 18 month olds know. These books are a little long, so if you’re more interested in the stories, go with the Bright and Early Board Books instead of these apps. The Mercer Mayer Little Critter books are also available and tend to be shorter in length.
- I Hear Ewe This neat little app has three screens of picture tiles: two of animals, one of vehicles. When touched it says: "this is the sound a [insert animal or vehicle here] makes:" I like this because it doesn’t require page navigation. A child can sit and do this for a short period and when they get bored, you can switch the screen for them. Sophia plays this occasionally at 18 months but it doesn’t hold her interest as much, so I suggest trying it at a little younger age.
- Pat the Bunny by Random House. There is both a paint and interactive option with this app. The paint seems to always crash, most likely due to the mad tapping of a toddler, so I avoid it. The read option has a bunch of items on the screen that kids can interact with (turn off a light, put shave gel on daddy’s face, wave bye bye, play peek a boo, etc.) I’ve never seen the real book, but I wouldn’t be surprised if this app is better than the book. Changing screens is manual and may require adult help. There is an obnoxious Easter egg on every page that brings up the bunny.
- Princess Baby by Random House. I was actually disappointed there wasn’t more to this app, but Sophia has played it enough that it makes the list. It begins by having you “Choose your favorite princess.” Each princess has 3 toys that can be interacted with in a very limited way: wand, drum, ball, flower, blocks, cat. The princess can be put to bed, which Sophia likes doing over and over and over again.
18 Months +
- The Monster at the End of This Book. Starring your lovable, furry pal Grover from Sesame Street, this app has a very cute storyline. In order to advance through the book, certain tasks, such as touching knots to untie the page or knocking down bricks, must be performed. This is another one where the app may be better than the book itself. One bonus: the pages are locked when Grover is talking, which keeps an eager toddler from advancing too quickly. My daughter loved this book early on, but I had to help her with some of the action pages, and it was only recently that she started doing it all on her own.
- Another Monster at the End of This Book. Starring Grover and Elmo. Some of the tasks are a little trickier than in the first book (matching colors, wiping away glue), but did I mention it has Elmo?
- Little Fox by GoodBeans. This is one of my favorite apps. It has 3 different songs to choose from, and each has its own scene: London Bridge is Falling Down, Old MacDonald, and The Evening Song. Each scene is cleverly interactive and entertaining. Old MacDonald has 4 seasons to select from, and the interactions change based on the season. There is also a little "fox studio" with a ton of interactive objects used to make music.
- Nighty Night by GoodBeans. Adorable. The animals at the farm house need to go to sleep. This is done by clicking on the area each animal resides in and turning off the light. The animals respond to touch. Additional animals can be purchased (2 sets of 3 animals each).
- Itsy Bitsy Spider by Duck Duck Moose. Another fantastic app, this may be the one Sophia has clocked the most time with. In order to progress through this app, you must tap the spider. Each time the spider is touched, one line of the song is sung and the spider moves. There is a lot to interact with at each spot, and on the second time through the song there are decorated eggs the child can collect on the spider’s back. There is a cute little narrator fly that teaches the child about items the child taps (e.g., clouds, the sun, rainbows).
- Ewe Can Count. This is a cute counting game where you count a random number of sheep, horses, apples, etc. There is a learning and a quiz mode.
- Logic Lite. This app is great because it teaches the complicated click and drag gesture. The full version has three additional tile sets: Numbers – match dots to the written number, Pictures – match a picture that contains a shape to the shape it contains, and Letters. The letters are great at 18 months, but the other two are too complex.
Your mileage may vary
Having your little one use an iPad might not work out as well as it has for us, so I think it is only fair to disclose some of the circumstances which govern our life that may help to make our experience successful.
- My wife is a stay at home mom. She used to be a techie, but left the digital world to raise our daughter. I only bring this up, because she interacts with Sophia all day. If we were putting Sophia in day care, I would be more hesitant to give her the iPad during our interactive time with her. (But I would probably try to get the day care to let her use it.)
- We have almost 0 TV in our house. I don’t watch any TV at all or movies. My wife very rarely watches TV and Sophia never does. I think this is important, because if she were watching TV, I would also be a bit more hesitant to let her play with the iPad as much.
- We do LOTS of other activities. Just about every day of the week she has either swimming, gym class, play date, or something else going on. My point here is that she gets plenty of outside time, social interaction and physical activity.
- Sophia took to the iPad right away. We didn’t have to force it on her or even encourage her to use it. I don’t know if other kids are like this or not, although I suspect most would be.
So doing the same thing my wife and I are doing might not be the best for your family—you’ll have to decide for yourself—but as far as our daughter is concerned, the experience has been overall positive and beneficial.
I’ve been playing around quite a bit with MonoGame lately and thought I would take some time to write a bit about it and talk about how to get started.
I’m also currently working on a Pluralsight course on cross platform development with MonoGame.
What is MonoGame?
Well, if you are familiar with XNA, then you already know what MonoGame is.
If you are not familiar with XNA, it is basically a game development framework that allows for creating games quickly without having to write all the repetitious code that all games need.
Basically it makes creating games more about the game and less about the technical details.
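To give you an idea of how little boilerplate is involved, here is a rough sketch of what a minimal XNA/MonoGame game class looks like (`Game1` is just the conventional template name; the framework drives the update/draw loop for you):

```csharp
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

// A minimal XNA/MonoGame game: the framework runs the game loop
// and calls Update and Draw for you, so you only fill in your logic.
public class Game1 : Game
{
    GraphicsDeviceManager graphics;
    SpriteBatch spriteBatch;

    public Game1()
    {
        graphics = new GraphicsDeviceManager(this);
        Content.RootDirectory = "Content";
    }

    protected override void LoadContent()
    {
        // Load textures, fonts and sounds through the content pipeline here.
        spriteBatch = new SpriteBatch(GraphicsDevice);
    }

    protected override void Update(GameTime gameTime)
    {
        // Game logic goes here: input, movement, collisions.
        base.Update(gameTime);
    }

    protected override void Draw(GameTime gameTime)
    {
        GraphicsDevice.Clear(Color.CornflowerBlue);
        // spriteBatch.Begin(); ... draw your sprites ... spriteBatch.End();
        base.Draw(gameTime);
    }
}
```

That skeleton is essentially all the plumbing a game needs; everything else is your game.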
The only problem with XNA is that it only really works for Windows, Xbox 360 and Windows Phone 7. If you want to create a game for Android or iOS, you can’t use XNA.
This is where MonoGame comes in. MonoGame is an open source port of the XNA framework that runs on many more platforms than Microsoft’s XNA.
Great, so what does this actually mean?
Well, if you are interested in game development, especially for the most popular platforms today, MonoGame can help you write very nearly the same code and have it run on Android, iOS, Windows 7, Windows 8, Windows Phone 7, Mac OS X, Xbox 360, Linux and the new PlayStation console.
That is pretty awesome! Especially if you are trying to monetize your effort.
In my mind MonoGame helps overcome two huge barriers to getting into game development.
- Difficulty of monetizing the effort. By allowing the same code to be shared on most platforms, a game developer can get paid for their effort in multiple marketplaces.
- Not knowing where to get started. The XNA API is simple enough that you can get a basic game, like a Pong clone, up and running in a couple of hours.
Also, because MonoGame is basically just XNA, you can find a whole host of resources on how to develop a game using the framework.
In my upcoming Pluralsight course, I show how to create a Pong clone on Windows and then we get that game up and running on Android, iOS and Windows Phone 7, with minimal changes.
It can be a bit challenging to find good information on getting started on each platform with MonoGame, but the basics are covered on the project’s GitHub page.
For the Windows tutorial there, you can instead use Visual Studio along with the MonoGame installer.
For Android development, you can use Visual Studio as long as you have Mono for Android installed. All you really need to do is link in the files from your Windows project and write a small bit of startup code in an Android Activity to launch the game.
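That Android startup code amounts to little more than an Activity that constructs your shared game class and hands its window to Android. This is only a sketch following the MonoGame Android template of the time (the exact details vary between MonoGame versions, and `Game1` is the conventional name for your shared game class):

```csharp
using Android.App;
using Android.OS;
using Microsoft.Xna.Framework;

// Android entry point: an Activity that creates the shared game and
// displays its window. All of the actual game code lives in Game1,
// which is linked in unchanged from the Windows project.
[Activity(Label = "MyGame", MainLauncher = true)]
public class GameActivity : AndroidGameActivity
{
    protected override void OnCreate(Bundle bundle)
    {
        base.OnCreate(bundle);
        var game = new Game1();
        SetContentView(game.Window); // hand the game's window to Android
        game.Run();                  // start the XNA-style game loop
    }
}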
For iOS development, you will need to use MonoDevelop, which is packaged with the MonoTouch install. MonoTouch itself uses Xcode and the iPhone SDK, so you have a bit more installing to do there, but the idea is pretty much the same. Once you have MonoTouch running on your Mac, you can link in the files from your Windows project, add a small bit of startup code, and you are up and running. (You’ll also need to download the actual MonoGame source code and add it to your project, since there isn’t an installer for Mac currently.)
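On iOS, the small bit of startup code lives in the app delegate rather than an Activity. Here is a rough sketch based on the MonoGame iOS samples of this era (MonoTouch-style namespaces; again, the exact startup code varies by MonoGame and MonoTouch version, and `Game1` is your shared game class):

```csharp
using MonoTouch.Foundation;
using MonoTouch.UIKit;

// iOS entry point: the AppDelegate starts the shared game once the
// app has finished launching. Game1 is linked in unchanged from the
// Windows project.
[Register("AppDelegate")]
public class AppDelegate : UIApplicationDelegate
{
    Game1 game;

    public override void FinishedLaunching(UIApplication app)
    {
        game = new Game1();
        game.Run(); // start the XNA-style game loop
    }

    static void Main(string[] args)
    {
        UIApplication.Main(args, null, "AppDelegate");
    }
}
```

Notice that in both cases the platform-specific code is just a thin launcher; the game itself stays shared.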
Xamarin also has a seminar they did on MonoGame to help you get started.
True cross platform development, finally
At least for game developers. For other applications in the mobile space, there are some solutions that help you share your code, but nothing that really gives you near-100% portability without a big sacrifice.
I was pretty amazed the first time my game just ran on my Android and iOS devices with virtually no changes.
I’d definitely encourage you to check out MonoGame and stay tuned for my Pluralsight video on the topic, where I will go through all the details of creating a game and getting it running on most of the major platforms.