When I first started my career as a software developer, I didn’t have a degree.
I took my first real job when I was on summer break from my first year of college. By the time the summer was up and it was time to enroll back in school, I found that the salary I was making from that summer job was about what I had expected to make when I graduated college—only I didn’t have any debt at this point—so, I dropped out and kept the job.
But, did I make the right choice?
Do you really need a university degree to be a computer programmer?
The difference between education and school
Just because you have a college degree doesn’t mean you have learned anything. That is the main problem I have with most traditional education programs today. School has become much more about getting a degree—a piece of paper—than it has about actually learning something of value.
To some extent, I am preaching to the choir. If you have a degree that you worked hard for and paid a large amount of money for, you are more inclined to believe that piece of paper has more value than it really does.
If you don’t have a degree, you are probably more inclined to believe that degrees are worthless and completely unnecessary—even though you may secretly wish you had one.
So, whatever side you fall on, I am going to ask you to momentarily suspend your beliefs—well, biases really—and consider that both views are not exactly correct, that there is a middle-ground somewhere in between the two viewpoints where a degree isn’t necessarily worthless and it isn’t necessarily valuable either.
You see, the issue is not really whether or not a particular degree has any value. The degree itself represents nothing but a cost paid and time committed. A degree can be acquired by many different methods, none of which guarantee any real learning has taken place. If you’ve ever taken a college course, you know that it is more than possible to pass that course without actually learning much at all.
Now, don’t get me wrong, I’m not saying that you can’t learn anything in college. I’m not saying that every degree that is handed out is a fraud. I’m simply saying that the degree itself does not prove much; there is a difference between going to school and completing a degree program and actually learning something.
Learning is not just memorizing facts. True learning is about understanding. You can memorize your multiplication tables and not understand what they mean. With that knowledge, you can multiply any two numbers that you have memorized the answer for, but you would lack the ability to multiply any numbers that you don’t already have a memorized answer for. If you understand multiplication, even without knowing any multiplication tables, you can figure out how to work out the answer to any multiplication problem—even if it takes you a while.
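To make that distinction concrete, here is a minimal Python sketch (my own illustration, not something from any curriculum) of what “understanding” multiplication looks like: a function that works out any product from repeated addition, no memorized table required.

```python
def multiply(a, b):
    """Multiply two non-negative integers using nothing but addition,
    the way someone who understands what multiplication *means* can
    work out an answer they never memorized."""
    total = 0
    for _ in range(b):  # add a to the running total, b times
        total += a
    return total

# A memorized table only covers the products you stored;
# understanding covers any product, even if it takes a while.
print(multiply(13, 17))  # 221
```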
You can be highly educated without a degree
Traditional education systems are not the only way to learn things. You don’t have to go to school and get a degree in order to become educated. Fifty years ago, this probably wasn’t the case—although I can’t say for sure, since I wasn’t alive back then. Fifty years ago we didn’t have information at our fingertips. We didn’t have all the resources we have today that make education, on just about any topic, so accessible.
A computer science degree is merely a formalized curriculum. It is not magic. There is no reason a person couldn’t save the money, and much of the time, required to get a computer science degree from an educational institution by learning the exact same information on their own.
Professors are not gifted beings who impart knowledge and wisdom on students simply by being in the same room with them. Sure, it may be easier to obtain an education by having someone spoon-feed it to you, but you do not need a teacher to learn. You can become your own teacher.
In fact, today there are a large number of online resources where you can get the equivalent of a degree, for free—or at least very cheap.
Even if you have a degree, self-education is something you shouldn’t ignore—especially when it’s practically free.
You can also find many great computer science textbooks online. For example, one of the best is Structure and Interpretation of Computer Programs – 2nd Edition (MIT Electrical Engineering and Computer Science).
So, is there any real benefit to having a degree?
My answer may surprise you, but, yes, right now I think there is.
I told you that I had forgone continuing my education in order to keep my job, but what I didn’t tell you is that I went back and got my degree later. Now, I didn’t go back to college and quit my job, but I did think there was enough value in having an actual computer science degree that I decided to enroll in an online degree program and get my degree while keeping my job.
Why did I go back and get my degree?
Well, it had nothing to do with education. By that point, I knew that anything I wanted or needed to learn, I could learn myself. I didn’t really need a degree. I already had a good paying job and plenty of work experience. But, I realized that there would be a significant number of opportunities that I might be missing out on if I didn’t go through the formal process of getting that piece of paper.
The reality of the situation is that even though you and I may both know that degrees don’t necessarily mean anything, not everyone holds the same opinion. You may be able to do your job, and you may know your craft better than someone who has a degree, but sometimes that piece of paper is going to make the difference between getting a job or not and is going to influence how high you can rise in a corporate environment.
We can’t simply go by our own values and expect the world to go along with them. We have to realize that some people are going to place a high value on having a degree—whether you actually learned anything while getting one or not.
But, at the same time, I believe you can get by perfectly well without one—you’ll just have fewer opportunities—a few more doors that are closed to you. For a software developer, the most important thing is the ability to write code. If you can demonstrate that ability, most employers will hire you—at least it has been my experience that this is the case.
I have the unique situation of being on both sides of the fence. I’ve tried to get jobs when I didn’t have a degree and I’ve tried to get jobs when I did have a degree. I’ve found that in both cases, the degree was not nearly as important as being able to prove that I could actually write good code and solve problems.
So, I know it isn’t necessary to have a degree, but it doesn’t hurt either.
What should you do if you are starting out?
If I were starting out today, here is what I would do: I would plan to get my degree as cheaply as possible and to either work the whole time or, better yet, create my own product or company during that time.
I’d try and get my first two years of school at a community college where the tuition is extremely cheap. During that time, I’d try to gain actual work experience either at a real job or developing my own software.
Once the two-year degree was complete, then I’d enroll in a university, hopefully getting scholarships that would pay for most of my tuition. I would also avoid taking on any student debt. I would make sure that I was making enough money outside of school to be able to afford the tuition. I realize this isn’t always possible, but I’d try to minimize that debt as much as possible.
What you absolutely don’t want to do is start working four years later than you could be and have a huge debt to go with it. Chances are, the small amount of extra salary your degree might afford you will not make up for sacrificing four years of work experience and pay while going deeply into debt. Don’t make that mistake.
The other route I’d consider is to completely get your education online—ignoring traditional school completely. Tuition prices are constantly rising and the value of a traditional degree is constantly decreasing—especially in the field of software development.
If you go this route, you need to have quite a bit of self-motivation and self-discipline. You need to be willing to create your own education plan and to start building your own software that will prove that you know what you are doing.
The biggest problem you’ll face without a degree is getting that first job. It is difficult to get a job with no experience, but it is even more difficult when you don’t have a degree. What you need is a portfolio of work that shows that you can actually write code and develop software.
I’d even recommend creating your own company and creating at least one software product that you sell through that company. You can put that experience down on your resume and essentially create your own first job. (A mobile app is a great product for a beginning developer to create.)
What if you are already an experienced developer?
Should you go back and get your degree now?
It really depends on your goals. If you are planning on climbing the corporate ladder, then yes. In a corporate environment, you are very likely to hit a premature glass ceiling if you don’t have a degree. That is just how the corporate world works. Plus, many corporations will help pay for your degree, so why not take advantage of that?
If you just want to be a software developer and write code, then perhaps not. It might not be worth the investment, unless you can do it very cheaply—and even then the time investment might not be worth it. You really have to weigh how much extra you think you’ll be able to earn against how much the degree will cost you. You might be better off educating yourself to improve your skills than going back to school to get a traditional degree.
It’s a great idea to educate yourself.
I fully subscribe to the idea of lifetime learning–and you should too.
But, in the software development field, sometimes there are so many new technologies, so many things to learn, that we can start to feel overwhelmed and like all we ever do is learn.
You can start to feel like you are always playing catch-up, but never actually getting ahead–not even keeping up. The treadmill is going just a few paces faster than you can run, and you are slowly losing ground, threatening to drop off the end at any time.
Trying to learn too much
The problem is trying to learn too much. There are 100 different technologies you have to work with or want to work with at your job. You might feel that in order to be competent, in order to be the best you can be, you need to learn and master all of them. The problem though, is that you feel like you haven’t even mastered one of them.
It can be a pretty discouraging feeling. To counter that feeling–which sometimes manifests itself as impostor syndrome–you grab books, video courses, and all kinds of resources on all the technologies you feel you need to master.
You spend your nights and weekends reading books, going through online training, and reading blog posts.
But, is it really effective, or does it just stress you out more?
Do you even remember half of what you read?
Will you actually ever use it, or are you storing it away in a someday-I-might-need-this bucket?
My point isn’t that you shouldn’t be learning, it’s just that perhaps you are placing too much pressure on yourself and trying to learn too much.
I’m only saying this, because I’ve been there. I’ve done that. I know how it feels.
I also know that this forced pace of learning isn’t very effective. I don’t remember much from the majority of books I read about technologies I didn’t end up using or barely ended up working with.
I know that the technologies I learned best were the ones I actually put into practice. In fact, some of my most useful and best-retained learning came from learning I did on the spot, right when I was working on a problem I couldn’t solve and had to go find an answer.
It may seem strange that someone like me, who makes a decent portion of his living creating learning materials for software developers, would tell you not to try to learn too much.
It probably would make much more sense for me to preach that you should absorb all the information that you can; that you should be continuously watching my Pluralsight courses while you eat, sleep and commute to work.
But, the truth is, I don’t think that is the most effective way to learn. I don’t think you’ll get much out of one of my courses, or anyone else’s, if you just repeatedly watch them.
Instead, I think the best way to improve your skills and to learn what you need to do is to do the learning as close to the time you need the information as possible–just-in-time learning.
Now, this doesn’t mean that you should just start working with a technology before you know anything about it. You’ll waste a lot of time flopping around trying to get started if you just dive right in without any prior knowledge. But, I’ve found you only need to learn three things before you can dive in and start working with a technology:
- How to get started
- What you can do with the technology–how big it is
- The basics–the most common things you’ll do 90% of the time
It is no coincidence that I structure most of my online courses in that way. I try to tell you how to get started, show you what is possible and then give you the very basics. I try to avoid going into details about every little aspect of a technology, because you are better off learning that information when you need it. As long as you know what you can do, you can always find out how later.
Often the hardest part of learning a new technology is learning what is possible.
I’ve found that the faster you start actually using a technology and trying to solve real problems with it the better. Once you’ve covered the three bases I’ve mentioned above, your time is much better spent actually working with the technology rather than learning about it further.
It’s difficult to break away and jump in though. Our instincts tell us that we need to keep reading, keep watching videos and continue to learn, before we get started.
You might feel compelled to master a technology before you start using it, but you have to learn to resist the urge. You have to be willing to fail and learn your way by making mistakes and hitting roadblocks. Real learning takes place when you use information for a purpose, not by trying to acquire it ahead of time.
If you know what can be done in a technology and you know enough of the basics, it won’t be difficult to figure out what search term you’ll need to come up with in order to answer any questions you have along the way. This just-in-time learning will be more effective in the long run and save you many wasted hours consuming information that you won’t fully digest.
You can’t know everything
Even if you had all the time in the world to learn, and even if you apply just-in-time learning techniques, you still won’t ever learn a fraction of what there is to learn in the software development field. New technologies are popping up every day and the depth of existing ones increases at an alarming rate.
It is important to face the reality that you just can’t know it all. Not only can you not know it all, but what you can know is just a tiny fraction of what there is to know.
This is one of the main reasons why I talk about specializing so much. You are much better off picking a single technology that you can focus on learning in-depth than spreading yourself too thin trying to be a master at everything.
That doesn’t mean you shouldn’t expand your skills in many different directions; you should definitely try to have a broad base. Just don’t expect to be an expert in more than one or two main areas of focus. Try to focus your learning on two main areas:
- A single specialty that you will master
- General software development skills that will apply to more than one technology. (For example, a book like Code Complete falls in this category.)
Don’t try and spread yourself too thin. Rely on your ability to learn things as you need them. If you have a solid base, with time and experience, you’ll find that you can learn whatever you need to know when you need to know it. Trust yourself.
Sometimes it can seem like there are super-programmers out there who seem to know everything and can do everything, but it is only an illusion. Those super-programmers are usually programmers that have one or two areas of expertise and a large amount of general knowledge that they apply to a variety of different domains.
Software developers usually make pretty decent salaries, but did you know that companies that hire software developers usually make much more money off of a single software developer than they pay that software developer?
I guess, if you think about it, it is common sense. Why hire programmers if those programmers don’t make more money for your company than the salary you are paying them?
But sometimes this disparity between what a software developer actually makes and the value that software developer brings to the table is large—sometimes it’s really large.
In fact, if you are being paid an hourly rate as a contractor, you are probably making about half of what the client is being billed for, if even that.
Being a commodity
One of the big problems many software developers face is that they can be easily treated as a commodity.
This problem is becoming more and more prevalent as basic programming skills become easier to come by and more and more people are becoming programmers all over the world.
If you go onto oDesk or ELance today, you can find software developers willing to write code for less than $10 an hour; you can find really good software developers writing code for $25 an hour.
If you are letting yourself be treated like a commodity and the price of that commodity is dropping, you are in big trouble.
Forget about job security at a single job. You’ve got to worry about your entire career and all the investment you put into your skills.
If you want a long and prosperous future doing what you love to do, you’ve got to be able to justify why someone should hire you and keep paying you at your current rate instead of hiring someone at $10 an hour to do the same work.
What makes something a commodity?
In order to solve this problem, you’ve got to examine what exactly it is that makes something a commodity.
But, before we go any further, let’s take a moment to make sure we are on the same page about what a commodity is.
I like this definition from the Wikipedia entry on Commodity:
“The exact definition of the term commodity is specifically applied to goods. It is used to describe a class of goods for which there is demand, but which is supplied without qualitative differentiation across a market.”
The key thing here is “without qualitative differentiation across a market.”
This means that if the service or product you provide isn’t much different than what everyone else is selling, it can be considered a commodity. And, as such, the price will be determined by the market, not by the actual value you provide.
So, even though you may be providing your employer with $500,000 worth of value per year by writing code, your employer can turn around and pay you whatever the market says a software developer with your years of experience and skill level is worth.
That is unless…
You find a way to be something more than a commodity
That is the key to being paid what you are actually worth instead of what the commodity market for software developers says you are worth.
But, it isn’t easy to stand out. It isn’t easy to be perceived as something more than a commodity if you don’t know how to do it.
I want to show you an example of how some people break out of commodity markets and differentiate themselves to make more money.
Have you ever heard of a voice-over?
A voice over is when someone with good oratory skills, a particular accent, or a distinctive sound creates a recording for something like an advertisement or a cartoon character’s voice.
There is quite a big market for people who do voice overs. Just about every radio ad, podcast advertisement, and animated film or show needs voice over talent.
But, did you know it is a commodity market?
That’s right; I can actually go onto Fiverr.com and pick from a multitude of skilled voice over actors to do a voice over for me for $5. Not only can I do it—I have done it. I’ve hired two different voice over actors to do voice overs for my podcast for just $5.
But, believe it or not, some voice over actors get paid millions of dollars each year to do basically the same work.
So, what separates the voice over actors who get paid millions from the ones who get paid five bucks?
I’ll give you a hint—and it’s not talent—it’s marketing.
Those voice over actors that are making the big bucks have figured out how to market themselves to land the right gigs, which increases the value of their name and gets them more and higher paying gigs.
If you don’t believe me, go on Fiverr.com yourself and check out the talent level of some of the top people on there that are doing voice overs for just five dollars—you will be impressed.
No one tells software developers how to market themselves
In the entertainment industry, self-promotion and marketing are the name of the game.
There are whole companies that do nothing but market talent. I mean, actors have agents; so do musicians; and yes, even people who do voice overs have agents… at least the successful ones do.
But, when it comes to software development, you are not very likely to find the same kind of resources and knowledge about self-promotion and advertising that envelop the entertainment world.
Have you ever heard of a software developer having an agent?
Well, even though it sounds silly, you’ve got to be your own agent if you want to rise above the crowd and stand out. If you want a chance at making the big bucks and setting your own price, you’ve got to figure out how to market yourself.
There are plenty of software developers that are already doing it. You’ve heard them on popular podcasts and read articles written by them in trade magazines or heard them speak at conferences.
But, no one ever talks about how they achieve their success… at least not until now.
Over the past few years, I’ve been talking to developers who have broken away from the herd. I’ve studied their careers and asked them about how they’ve achieved their success. I’ve been able to duplicate their results to a large degree myself, and since no one else is doing it, I want to share that information with you now.
Check out this package I am putting together called “How To Market Yourself As A Software Developer.” I’m going to be launching this package on March 27th.
Well, I hope this article has been helpful to you and has helped you realize that you’ve got to make a fundamental shift in your thinking if you want to really advance your career and not be treated like a commodity.
I often get asked by beginner programmers what programming language they should learn.
This, of course, is a tough question to answer. There are so many different programming languages today that a new developer, or even a seasoned developer, wishing to retool his or her career, could learn.
I’ve actually tried to answer this question before in a YouTube video, but I want to revise and refine my answer a bit here, because some of my views have changed and I’d like to give a bit more detail as well.
The wrong question to begin with
It turns out that what programming language you choose to learn is not actually all that important.
Things have changed quite a bit from when I first started my career in software development. Back then, there were far fewer programming languages to choose from and far fewer resources available for reference. As a result, the choice was much more important.
For example, I started out learning C and then C++. At that time, it took quite a bit of work to master the language itself and to understand all of the standard libraries that were available. A good C or C++ programmer back then had a very in-depth understanding of every nook and cranny of the language, and they needed this knowledge for two main reasons:
- References were not as widely available, so figuring out syntax or what library calls were available involved flipping through a huge book, rather than just typing some keywords into Google.
- Programming, in general, was done at a much lower level. There were far fewer libraries available to be able to work at higher levels, so we spent more time working with the language itself and less time working with APIs.
Contrast that with the programming environment of today, where not only is information widely available and can be accessed with ease, but also there are a large number of programming languages that we effectively use to program at a much higher level due to the vast amount of libraries and reusable components available to us today.
In today’s programming environment, you tend to not need to dive as deeply into a language to be effective with it. Sure, you can still become an expert in a particular programming language, and it is good to have some amount of depth in at least one language, but you can literally learn a new language in less than a week and be effective with it almost immediately.
Now, before your alarm bells go off and you write me off as crazy, let me explain that last sentence in a bit more detail.
What do you mean you can learn a programming language in a week?
What I mean by this is that once you understand the basic programming constructs available in just about all programming languages, things like conditionals, loops and how to use variables and methods, you can take that same knowledge to a different programming language and just learn how to do those same things in that language’s syntax. In fact, most IDEs today will even help you with the syntax part, making your job even easier.
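As a rough illustration (mine, not part of the original article), here is what those shared constructs look like in Python. The same variables, loop, conditional, and function would map almost line-for-line onto Java, C#, or JavaScript; only the syntax changes.

```python
# Variables, a loop, a conditional, and a function -- the handful of
# constructs that carry over from one programming language to the next.
def count_evens(numbers):
    evens = 0                  # variable
    for n in numbers:          # loop
        if n % 2 == 0:         # conditional
            evens += 1
    return evens               # function/method returning a value

print(count_evens([1, 2, 3, 4, 5, 6]))  # 3
```

Once you can read this in one language, picking it up in another is mostly a matter of looking up the local syntax.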
If you are already fluent in multiple programming languages, you probably agree with what I am saying, but if you have only ever learned one programming language, or none at all and are looking to learn your first, you might be a little skeptical. But, take it from someone who has learned, and then taught, programming languages within a week of picking them up: the basics are pretty much the same.
Check out this book, which deals with this exact subject: Seven Languages in Seven Weeks: A Pragmatic Guide to Learning Programming Languages.
Now, if you are just starting out, it is pretty unlikely you’ll be able to learn a whole programming language in a week. This brings us to the question you may be asking yourself…
So, what programming language should I learn then?
Hold up. I’m still not quite going to answer that question, because it still isn’t quite the right question.
Instead of getting hung up on what programming language you want to learn, you should instead ponder what you want to do.
Learning by doing is the most effective way to learn, especially if you are doing something you have an interest in or find fun.
So, I always start new would-be developers out by asking them what they want to build.
Do you want to build an Android application? How about an iOS application? A web page? A game?
First, figure out the answer to this question and then let that answer guide you to choose the technology and programming language you will use to achieve that goal.
Don’t worry so much about which programming language or technology is most valuable. You aren’t likely to make a wrong decision you’ll regret later, because it won’t take you much time to retool if you need to. Once you have the basics down and have actually used a programming language to build something, you’ll find that doing it again will be much easier.
I like to encourage new developers to write a mobile application—especially an Android application.
Here are some reasons why:
- A complete Android application can be built by a single person. Creating a complete application will really help you to feel confident about software development and is one of the best ways to really learn to code. I spent a good deal of my early career only being able to create bits and pieces of things, and it was frustrating, because I never knew if I could really “code.”
- By learning Android, you learn Java and how to use libraries and APIs. This will give you a good programming language to start with and you’ll get some valuable experience with APIs.
- Google gives you some free help and makes things pretty easy to learn. Since Google really wants you to create Android applications, they have put quite a bit of work into creating easy to use tools and tutorials to help you be successful quickly. (I’ve also created some tutorials, which you can watch at Pluralsight here as well.)
- You can actually make money while learning and teach yourself a very valuable, up-and-coming skill set. Not only can you sell your Android application or monetize it in some other way, but you will be learning a complete set of software development skills for a platform that is in very high demand.
Aha! So Java it is then?
No, not exactly.
Summing it up
I’m actually working on some products to help developers manage their careers and lives better, which will cover topics like this one in a bit more depth. If you are interested in receiving updates when I publish an interesting article or video, or when I launch some of those products, feel free to sign up here. Don’t worry, I won’t spam you.
Computer science itself is a surprisingly difficult thing to define.
If you do a search on the web you’ll turn up quite a few definitions for computer science.
Some people take a very academic approach and say that computer science is about studying computation and systems of computation. (What the heck does this mean?)
Other people will define it in terms of what they believe it is about, saying things like computer science is using computers to solve problems.
None of these definitions or attempts at defining computer science sit well with me.
Why is it important to know what computer science is?
So, you might be thinking, “Who cares? What does it matter what computer science is?”
Well, the reason why it matters is because many of us programmers and software developers either have a degree in computer science, are studying to get a degree in computer science, or simply associate our knowledge with the field of computer science.
It seems kind of silly to have a degree in something that neither you nor anyone else can clearly define.
Have you ever thought about how strange it is that when someone gets a degree in biology they become a biologist, but when someone gets a degree in computer science they become a senior software engineer?
Why is computer science so hard to define?
Before we can even really attempt to define what computer science is, we have to understand a bit about why it is so hard to define.
It all stems from the problem that unlike many other sciences, computer science isn’t based on any naturally occurring phenomenon. Even social sciences and formal sciences deal with things that exist in the world without our creating them.
But, computers are a creation of humans. At least the kind of computers that we commonly recognize as computers and use in the field of computer science.
Everything within the idea of computers and computation is completely fluid and lacks a perfect definition, including the idea of a computer itself.
We can’t even define computers
I spent several weeks researching what a computer is and I found there is no commonly accepted definition of a computer. It wasn’t even until around 1940 that the word computer shifted from meaning a person who performed computations to meaning the machine itself.
No, that is not a typo and I’m not off my rocker. It is a true statement. Consider whatever computer you are using to read these very words; are you actually running your software directly on the hardware?
Most likely you have some sort of operating system running on your hardware that virtualizes the actual physical hardware to a large degree. Most applications are written to run on an operating system, not on the actual hardware. This, in effect, makes the operating system the computer.
We can take it one more level and say that the very web browser you are using to view this post is itself a computer, since a browser can be programmed to run just about any application (that is to say, a browser is Turing complete).
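To make the idea concrete, here is a minimal sketch of my own (not tied to any particular application): a few lines of JavaScript that any browser can execute, carrying out a general-purpose computation without the page ever touching the “real” hardware.

```javascript
// A stand-in for "just about any application": an iterative
// Fibonacci function running entirely inside the browser's
// JavaScript engine, itself a computer layered on the OS,
// which is layered on the physical machine.
function fibonacci(n) {
  let [a, b] = [0, 1];
  for (let i = 0; i < n; i++) {
    [a, b] = [b, a + b]; // advance one step in the sequence
  }
  return a;
}

console.log(fibonacci(10)); // 55
```

Paste it into any browser’s developer console and it runs; the page, the browser, the operating system, and the hardware each act as a “computer” for the layer above.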
Data and code
Even the text content of this blog post, which most people would say is data, is actually code, because it is programming the web crawlers of search engines like Google, telling them what to do based on the content of the words. You may have even found this post because of my attempts to program Google’s web crawler through SEO techniques.
Everything about a computer is fluid and abstract
All of this leads up to the realization that just about everything in the computer world is defined in terms of abstractions, which are always “leaky.” Other branches of science, like chemistry and biology, are much less abstract and are based on concrete things that exist in reality.
What then is computer science?
Ok, so now is the part where you probably expect that I will tell you what exactly computer science is, right? Wrong.
Computer science is nothing. It isn’t a thing that can be defined, because it is not a thing that exists beyond our imaginations.
Computer science itself is an abstraction that we have created for dealing with all the things involved with computers, which is itself an abstraction for all kinds of programmable machines.
It is a false science. Everything you might study in a computer science program actually belongs in some other field of science, but is conveniently grouped into the computer science abstraction in order to catalogue the important things that form the basis of how we build computers and how we program them.
This isn’t to say that the things you would learn in a computer science program don’t have value. It is important to know about algorithms, data structures, computer architecture, and the mathematics that many of these things are based on. But it is also important to realize that the study of computer science is very different from the study of other fields, because most of the concepts you learn about in a computer science program are abstractions. They are not rules that are set in stone, like the forces of nature that form the basis of other fields of study.
I say this because it is important to understand that everything we learn about computers is based on abstractions we have created to simplify the complex and these abstractions are based on very few actual rules that can’t be broken.
We will eventually hit a point where many of these existing abstractions about computers that we have put into place and built our knowledge upon, will have to be broken and replaced with new abstractions.
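To ground what those abstractions look like in practice, here is a small, self-contained illustration of my own: binary search, a classic algorithm from any computer science program, operating on a sorted array, a classic data structure. Both are pure ideas; neither depends on any particular machine.

```javascript
// Binary search: repeatedly halve the search range of a sorted
// array until the target is found or the range is empty.
// Returns the index of the target, or -1 if it is absent.
function binarySearch(sorted, target) {
  let low = 0;
  let high = sorted.length - 1;
  while (low <= high) {
    const mid = Math.floor((low + high) / 2);
    if (sorted[mid] === target) return mid;
    if (sorted[mid] < target) low = mid + 1; // discard lower half
    else high = mid - 1;                     // discard upper half
  }
  return -1; // not found
}

console.log(binarySearch([2, 3, 5, 7, 11, 13], 7)); // 3
```

The algorithm is the same whether it runs on silicon, in a browser, or on paper, which is exactly the point: what a computer science program teaches is the abstraction, not the machine.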
The recent free courses from Pluralsight on teaching kids to program really got me thinking about this subject.
There seems to be a big backlash in the development community against the idea that everyone should learn to program.
I’m not sure exactly where it is coming from, but I suspect it has something to do with egos and fear.
Even within the development community, there seems to be a distinction between “real programmers,” and “not real programmers,” based on language or technology choice.
I have to admit, I have been guilty of this type of thinking myself, because a very easy way to increase our own value is to decrease the value of others.
But what I have come to find is that not only is the distinction between “real programmers” and “not real programmers” a false dichotomy, but the distinction between a programmer and a layperson is also not quite as clear as it seems, or at least it shouldn’t be.
Not everyone should be a programmer
It’s true, just like not everyone should be an accountant or a writer. But I think we can all agree that everyone should understand basic math and be able to write.
Learning how to program and doing it professionally are two distinct things and they should not be lumped together.
It is pretty hard to imagine a working world where no one except writers could write.
Imagine wanting to send an email to your boss, but you don’t know how to write, so you have to ask the company writer to do it for you.
That is what the world would be like if we insisted that only writers needed to learn how to write.
But perhaps you think I am just being silly; after all, the need to write is prevalent in everyday situations, and the need to program isn’t.
I challenge you to consider whether it is actually true that the need to write is much more prevalent than the need to program, or whether, because everyone knows how to write, the need for writing is simply recognized more.
Imagine if everyone you interacted with on a daily basis knew how to write code. Imagine that, just like everyone has a word processor on their computer that they know how to use, there was an IDE that allowed them to write simple scripts.
Think about how that changes the world.
The first thought that comes to my mind in that world is that there would be APIs everywhere.
Every single program would have an easily accessible, scriptable API, because every user of that program would want to be able to automate it.
In time, the way we viewed the world would completely change, because just like products today are designed with the thought that users of those products can write, products of that time period would be designed with the assumption that users of those programs can program.
Suddenly everything becomes accessible, everything interfaces with everything else.
Doctors build their own simple tools around their specific processes by scripting the general-purpose software that comes with their equipment.
There is a Pinterest full of code snippets instead of pictures.
Every device and piece of software you interact with has an API you can use to automate it.
The point is that we can’t conceive of what the world would look like if programming were as prevalent as writing, but such a world can and should exist.
Computers and technology are such a large part of everyone’s lives that it is becoming more and more valuable to be able to work with such a common element of daily life.
It starts with kids
We have to stop thinking programming is hard and realize that it is one of the easier things we can teach kids to do.
If a person can grasp and use a complex language, such as English, that person can learn how to program.
Programming is much simpler than any spoken or written language.
But we have to stop erecting artificial barriers that make programming computers seem more difficult than algebra.
Is there really much difference between an algebraic variable and a variable in a programming language?
Isn’t most mathematics solved by learning an algorithm already? Why not at the same time, teach how to program that algorithm? Not only would it make the subject much more interesting, but it would build a valuable skill as well.
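As a sketch of what that might look like (my own illustration, with hypothetical names), here is the step-by-step procedure for solving the linear equation ax + b = c, the same steps taught in algebra class, written as a short program:

```javascript
// Solve a*x + b = c exactly the way algebra class teaches it:
// subtract b from both sides, then divide both sides by a.
function solveLinear(a, b, c) {
  if (a === 0) {
    throw new Error("a must be non-zero, or x has no unique value");
  }
  const rightSide = c - b; // step 1: subtract b from both sides
  return rightSide / a;    // step 2: divide both sides by a
}

console.log(solveLinear(2, 3, 11)); // x = 4
```

The variable in the code and the variable in the equation are the same idea; a student who has learned one has already learned most of the other.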
We spend a great deal of time educating kids with knowledge they will never use—basically filling their minds with trivia. But how much more likely would they be to use the skills that learning to program would give them?
What was hard yesterday is easy today
Calculus, geometry, probability, the structure of a living cell, electricity… What do they all have in common?
These concepts used to be advanced topics that only the most educated in society knew about or discussed, but now have become common knowledge that we teach children in school. Ok, well maybe not calculus, but it should be.
Over time, the concepts that only the brightest minds in a field could possibly understand are brought down to the masses and become common knowledge.
It is called “standing on the shoulders of giants,” and it is the only way our society advances as a whole.
Imagine if it was just as difficult for us to grasp the concepts we are taught in school as it was for the pioneers of that knowledge to obtain it… We wouldn’t ever advance as a whole.
But, fortunately, what was hard yesterday ends up being easy today.
The same will eventually happen with computer programming; the question is just how long we will need to wait.
It’s all about breaking down walls
I try to never say that something is hard, because the truth is that although there are some things in life that are hard, most things are easy if you have the right instruction.
It is natural for humans to want to think the knowledge or skills they have acquired are somehow special, so we have a tendency to overemphasize the difficulty of obtaining them. But we have to work through fears about job security and egos, remove the veil of complexity from programming, and make it simple.
The value we can bring by helping others to understand the knowledge we have is much greater than the value that using that knowledge alone provides.
Sophia got her first introduction to the iPad at about 3 months old.
As soon as she could sit in a rocker chair my wife and I let her start playing on the iPad.
We started off with just one game, Interactive Alphabet by Piikea. It is basically a game that goes through the Alphabet and lets the baby interact with some of the pictures.
We added a few more ABC type of games as she got a bit older, but we mainly just let her play with that one game, because we figured it would be great to let her start seeing letters and learning the alphabet as early as possible.
Right from the get-go she would swat at the screen. She didn’t immediately understand the cause and effect, but she quickly grasped the idea that when she hit the screen, something would happen.
After a while she became pretty good at being able to do the simple things in the ABC game. She would still swat the screen, but purposefully swat certain areas in order to do something like build a sandcastle.
Around 12 months, we started adding a bunch more apps. We added some interactive books and a couple of simple games.
Sophia was learning how to do many more things in the apps. She could point with a couple of fingers and very purposefully touch certain areas of the screen.
She really didn’t have any concept of touching and dragging, though, and would often lean one hand on the iPad, which caused the other hand’s touches not to register.
She’s now 18 months and she is an iPad master.
Sophia can now:
- Turn on the iPad
- Unlock the iPad
- Pick which app she wants to play out of her folders
- Use the home button to exit an app
- Double press the home button to switch to a recent app
- Navigate through menus in apps and get back to the app
- Use the table of contents in books to pick the page she wants
She also asks for the iPad by name. She has about 40 apps on the iPad that she commandeered from my wife. It seems like she is learning something new every day now.
The world is changing
Our children, especially the youngest ones, are growing up in an entirely different world than has existed ever before.
I know this has been said many times before and it could be argued that my generation also grew up in an entirely different world than my parents, but I think the change we are seeing now is much more substantial.
I predict that this generation will be known as the tablet generation. With Windows 8 now released we are going to see a rapid decline of non-touch devices. In a few years all laptops will be touch screen retina displays.
There are some fundamental changes going on in how we interact with computers and even what defines a computer.
Yes, I know you’ve heard all this before, but why is this important?
It is important because the real shift I see is the shift from a primarily analog-focused worldview to a primarily digital-focused worldview.
For me the iPad or the computer is an attempt to replicate some process or experience in the real world. No matter how long I work with computers or use these devices, I cannot escape my world view. Analog always comes first.
For our children things are different.
I can’t say for sure that picking up a pencil and being able to write is a skill that will even be necessary.
It is very likely that this coming generation will view things through the digital lens first and the analog world will be secondary.
I don’t mean they’ll be jacked into a computer all day and live in a virtual world, but I do think that while we try to relate software to tangible things, the coming generation is likely to view software as primary and tangible objects as secondary.
Think about music. Ever had an 8Track? How about a cassette tape? CD anyone?
How do we think of music today? One word comes to mind—MP3.
What started out as a physical record eventually lost its purpose and is now so heavily digital that we tend to think in terms of the digital and don’t even consider the tangible anymore.
The same thing is currently happening with books, movies and to some degree money.
Why we let Sophia be an iKid
With the changing world, computer literacy is more important than ever before.
Even in the world we live in now, it is just about impossible to get any kind of non-labor intensive job without being able to use a computer.
If computer literacy is arguably going to be the most important skill for anyone to have in the future, why not start as young as they start to show an interest?
I think it is a huge asset to develop in our children the ability to use a computer as easily and mindlessly as the ability to eat with a fork and a spoon.
I wish I had that ability. I could be so much more efficient if I would stop writing down lists on pieces of paper and instead pull up my iPad or other tablet to jot down ideas and completely replace paper in my life.
And sure, I could learn to wean myself off of the analog world, but I want my daughter to be able to think first in the digital world. She’ll be far more efficient and see things from a perspective I never will.
Aside from that, my wife and I find that the iPad is an excellent learning tool to help Sophia learn to learn.
There are so many things she is able to teach herself using that iPad. At 18 months, she:
- Has a vocabulary of over 100 words
- Can count to 4 in order and count actual objects
- Can say most of her ABCs
- Can recognize most letters
- Can name many animals and objects
Much of what she knows she learned at her own pace based on what she was interested in playing on the iPad.
For example, one week she’ll play many of the number apps; another month she’ll want to do nothing but alphabets.
The iPad gives her the freedom to be able to choose what she wants to learn and to do it effortlessly. She is developing the skills to be able to self-educate. Sure, we still read books to her and try to teach her, but she seems to get a large amount of her knowledge from what she learns playing on the iPad. (At least the reinforcement of what she has learned.)
Overall I don’t think there is any reason to stop her from playing on the iPad. I know some people equate it to TV, but I think it is fundamentally different. The apps she plays on the iPad are interactive. You can’t mindlessly sit and watch the iPad. Instead, there is a constant feedback loop that is not present with TV.
Also, we can carefully monitor the apps she uses. The TV is an open system that brings unknown content into your house, whereas the iPad can be used as more of a closed system.
To summarize, I think we are preparing her for the future and giving her a huge head start in life.
How to get started
So you may be wondering how to best go about getting your baby or toddler started with the iPad.
While I’m not a child development expert, I can give you some advice from what my wife and I have learned in this process.
You can of course get a newer iPad or even another tablet, or the iPad mini, but just be aware of two things.
- Babies don’t have very precise coordination with their hands, so small screens are going to be hard for them to use.
- Babies tend to throw things, especially when they get frustrated.
The next thing you need is apps. My wife, Heather, wrote up this section for me. So, if you notice the grammar is perfect and is written with a much higher skill level than my usual writing, that is why.
(Please let me know if you have some other ones appropriate for the ages. I’d like to make a nice resource for other iKid believers.)
3 Months – 12 Months
- Interactive Alphabet by Piikea. This is by far the best app I’ve seen for the youngest of kids. It has a baby mode which prevents babies from exiting by accidentally batting a menu button and most of the items respond to simple taps or swipes.
- Juno’s Musical ABCs by Juno Baby. This app also goes through the alphabet but with a musical theme. The interactions aren’t as neat as the Piikea app and the button to return to the menu is prominent and easily pressed.
- Peekaboo Baby. This is my app. Warning, it is very simple. I was learning MonoTouch and wrote it in a day as an experiment.
12 Months to 18 Months
- Seuss ABC and Green Eggs. These stories have autoplay, read-to-me, or self-reading features and will say the word of anything the child touches on the screen. There is actually an entire line of the Dr. Seuss books, but I prefer these two. The ABC app is great because each letter is said multiple times. The Green Eggs app is my daughter’s favorite, and I suspect this is because so many of the words in this story (eggs, boat, house, mouse, car, train, etc.) are ones most 18-month-olds know. These books are a little long, so if you’re more interested in the stories, go with the Bright and Early Board Books instead of these apps. The Mercer Mayer Little Critter books are also available and tend to be shorter in length.
- I Hear Ewe. This neat little app has three screens of picture tiles: two of animals, one of vehicles. When a tile is touched, it says: “this is the sound a [insert animal or vehicle here] makes.” I like this because it doesn’t require page navigation. A child can sit and do this for a short period, and when they get bored, you can switch the screen for them. Sophia plays this occasionally at 18 months but it doesn’t hold her interest as much, so I suggest trying it at a little younger age.
- Pat the Bunny by Random House. There are both paint and interactive options with this app. The paint option seems to always crash, most likely due to the mad tapping of a toddler, so I avoid it. The read option has a bunch of items on the screen that kids can interact with (turn off a light, put shave gel on daddy’s face, wave bye-bye, play peek-a-boo, etc.). I’ve never seen the real book, but I wouldn’t be surprised if this app is better than the book. Changing screens is manual and may require adult help. There is an obnoxious Easter egg on every page that brings up the bunny.
- Princess Baby by Random House. I was actually disappointed there wasn’t more to this app, but Sophia has played it enough that it makes the list. It begins by having you “Choose your favorite princess.” Each princess has 3 toys that can be interacted with in a very limited way: wand, drum, ball, flower, blocks, cat. The princess can be put to bed, which Sophia likes doing over and over and over again.
18 Months +
- A Monster at the End of This Book. Starring your lovable, furry pal Grover from Sesame Street, this app has a very cute storyline. In order to advance through the book, certain tasks, such as touching knots to untie the page or knocking down bricks, must be performed. This is another one where the app may be better than the book itself. One bonus: the pages are locked while Grover is talking, which keeps an eager toddler from advancing too quickly. My daughter loved this book early on, but I had to help her with some of the action pages, and it was only recently that she started doing it all on her own.
- Another Monster at the end of this book. Starring Grover and Elmo, some of the tasks are a little trickier than the first book (matching colors, wiping away glue), but did I mention it has Elmo?
- Little Fox by GoodBeans. This is one of my favorite apps. It has 3 different songs to choose from and each has its own scene: London Bridge Is Falling Down, Old MacDonald, and The Evening Song. Each scene is cleverly interactive and entertaining. Old MacDonald has 4 seasons to select from and the interactions change based on the season. There is also a little “fox studio” with a ton of interactive objects used to make music.
- Nighty Night by GoodBeans. Adorable. The animals at the farm house need to go to sleep. This is done by clicking on the area each animal resides in and turning off the light. The animals respond to touch. Additional animals can be purchased (2 sets of 3 animals each).
- Itsy Bitsy Spider by Duck Duck Moose. Another fantastic app; this may be the one Sophia has clocked the most time with. In order to progress through this app, you must tap on the spider. Each time the spider is touched, one line of the song is sung and the spider moves. There is a lot to interact with at each spot, and on the second time through the song there are decorated eggs the child can collect on the spider’s back. There is a cute little narrator fly that teaches the child about items the child taps on (e.g., clouds, the sun, rainbows).
- Ewe Can Count. This is a cute counting game where you count a random number of sheep, horses, apples, etc. There is a learning and a quiz mode.
- Logic Lite. This app is great because it teaches the complicated click and drag gesture. The full version has three additional tile sets: Numbers – match dots to the written number, Pictures – match a picture that contains a shape to the shape it contains, and Letters. The letters are great at 18 months, but the other two are too complex.
Your mileage may vary
Having your little one use an iPad might not work out as well as it has for us, so I think it is only fair to disclose some of the circumstances which govern our life that may help to make our experience successful.
- My wife is a stay at home mom. She used to be a techie, but left the digital world to raise our daughter. I only bring this up, because she interacts with Sophia all day. If we were putting Sophia in day care, I would be more hesitant to give her the iPad during our interactive time with her. (But I would probably try to get the day care to let her use it.)
- We have almost 0 TV in our house. I don’t watch any TV at all or movies. My wife very rarely watches TV and Sophia never does. I think this is important, because if she were watching TV, I would also be a bit more hesitant to let her play with the iPad as much.
- We do LOTS of other activities. Just about every day of the week she has either swimming, gym class, play date, or something else going on. My point here is that she gets plenty of outside time, social interaction and physical activity.
- Sophia took to the iPad right away. We didn’t have to force it on her or even encourage her to use it. I don’t know if other kids are like this or not, although I suspect most would be.
So doing the same thing my wife and I are doing might not be the best for your family—you’ll have to decide for yourself—but as far as our daughter is concerned, the experience has been overall positive and beneficial.
I’m not good at many things. Let me rephrase that. I’m not naturally good at many things.
There are many people who are smarter than me, process things quicker and overall just have a better aptitude for almost everything I do.
I’ll freely admit, I’ve been pretty successful in my field and in life in general. (At least according to my own measures of success.)
You might wonder how I can be so untalented, yet accomplish so much.
I must be doing something right.
I believe the key thing that has helped me become successful, and will continue to do so, is my ability to learn how to learn about a subject: self-education.
I’ve found that it is only when you take ownership of the learning process and its result that you are able to accomplish the true goal of learning, which is the ability to put knowledge into action.
So what is learning to learn?
Basically, it is figuring out the best way to learn about a particular subject. You can contrast this to the default mode of education, which is relying on someone else to teach you a subject.
As a society in general we have adopted the idea that attending institutions of education is the correct way to learn about a subject. And while schooling can be important and good, it is often not the best method of acquiring useful actionable knowledge.
A wise man by the name of Herbert Spencer, an English philosopher of the mid-to-late 1800s, once said:
“The great aim of education is not knowledge but action.”
Now obviously I’m not knocking the idea of learning a subject matter through someone else’s teaching. I make part of my living teaching, and perhaps the reason why you are reading this blog is because you expect to learn something.
My point is simply that the most efficient way to learn something that you will actually put into action is to decide what must be learned and how to learn it yourself, rather than taking a complete prescription from someone else. Someone else may be able to break down subject matter and assist in your learning, but you ultimately are responsible for your own education.
Take a moment and say that with me, because I think it is so important.
“I am responsible for my own education.”
It is quite an empowering phrase. When you really let it sink in, you begin to realize that no one can give you a grade, but yourself. (And I don’t mean this in the "all kids are special and everyone tries so it is not fair to give some kids As and Fs and lower their self esteem" kind of way.)
I mean this in the sense that it doesn’t matter if you got straight As and a perfect 4.0 GPA in college, you ultimately have to decide if you learned something, or if you just did the work.
Only when you take ownership of the learning process and its result are you able to accomplish the true goal of learning: the ability to put knowledge into action.
Why learning to learn is important
Have you ever considered how expensive education is? Is there some magic formula that a college or university has that gives them the ability to define and bestow an education better than you could do yourself?
When you consider the amount of money and time that is spent on traditional classroom education, you really have to ask the question of whether or not you are getting the maximum benefit for your precious resources.
I think you’ll find that most of the time, the answer is “no.”
The problem with systematic education is that it isn’t very efficient. The process of learning something is very tailored to an individual. It is not something that is easily distilled and applied like a balm or an ointment to the foreheads of eager young students.
Not only do different people have different learning styles, but what is important for them to actually learn varies as well.
Let’s be completely honest here, in most formal educational systems the majority of what you do is read and regurgitate things, but not really learn them. Perhaps you remember them for long enough to take a test, or to graduate to the next level of that subject area, but do you really learn most of the things that are taught in a textbook? Do you really need to?
Overall, with traditional spoon-fed education, you are typically not getting full value for your money or your time.
Still, I hope we can all agree that education is important.
And because education is so important and we don’t want to waste our money or our time acquiring it, it is essential to learn how to learn.
Equipped with the ability to teach yourself anything you need to know, you suddenly lose the constraints that are binding you to a particular area of knowledge or skillset.
When you can teach yourself more efficiently about a subject than any institution can, you have given yourself perhaps one of the most valuable gifts a human being can receive to be successful in life.
You have given yourself the ability to do just about anything you want. (Within the constraints of time-space and physical reality of course.)
And once you have this ability yourself, you will also find that you will be in a great position to teach others what you know.
Since most of the world is not very good at this skill, you have a genuine value that is in short supply. If you can take a subject matter, figure out a path to learn that subject matter, and be successful in doing so, you can help others along the way who may not yet have mastered that ability as well as you have.
How to do it
All this talk about the value of learning to learn is worthless if we don’t actually learn how to learn how to learn. (Say that three times fast.)
Rather than title this subject as accurately as I could put it, which would be to learn how to learn how to learn, I decided "how to do it" approximates closely enough my point.
Enough blabber, let’s get down to it.
Scoping the subject
The first step in learning about a subject and perhaps the most critical is to determine the scope of the subject you want to learn about.
So many people skip this step and wander aimlessly through the vast halls of knowledge, never really knowing what they are looking for.
In determining the scope of the subject you want to learn about, it is very important to first consider the granularity. The granularity at which you wish to learn a subject will greatly influence the size, or overall scope, of the subject matter to be digested.
Speaking plainly… you can’t learn a lot about a large subject in detail. (At least not in a practical amount of time.)
You basically have to balance the details of the subject to the overall size of what you want to learn.
For example, since this is a programming blog, let’s say you want to learn about a particular technology. Let’s say C#.
You could learn about the topic with a broad brush and learn the basics of the language and how to generally construct logical statements and write programs in that language.
You could also decide that you want to learn exactly how C# works and how each keyword behaves under certain circumstances. This level of detail can of course be found in the C# language specification. (If you didn’t actually click that link, it takes you to a 505-page book with almost all of the technical details of the C# language.)
And while you could of course learn the language at this level of detail, it would probably be more beneficial to pick a particular aspect of the C# language to learn at this depth, based on why you want to learn it, rather than attempting to understand every aspect of every situation of the C# language.
To take another, simpler example: if you wanted to learn about world history, you would either learn the entire history of the world at a very high, summarized level, or pick a particular era and location.
Having a goal
The next thing you need is a goal. There is no point in learning something just for the sake of learning it.
Your goal might be to build something with the new technology or to be able to write about it competently or even just to be able to speak fluently on the subject matter.
I’d encourage you though, in choosing a goal, to make sure that your goal is something that can be measured and qualified in no uncertain terms. If you are learning a new technology, make a goal of building something with it. Even if it is something that will be thrown away after it is built. It will both serve to reinforce what you have learned and to validate the subject matter and scope you have chosen.
Another important goal I always try to have is to teach whatever I am learning. I have found that the only way to truly learn something (and by this I mean to have that true in-depth knowledge of a subject, one that does not fade with time) is to teach it.
Present at a local user group, write a blog post, tell your spouse about it. (My wife loves hearing about programming languages and technology. Sometimes she’ll even drop what she is doing just to make sure she is paying full attention and not missing one intricate little detail about all the exciting things I am telling her.)
When you define a goal, it is also important to define a deadline. Doing this will help you refine the goal and recheck the scope of your subject.
It does no good to learn something without the ability to practically apply it. By setting an actual deadline, you ensure that what you are trying to accomplish fits into a timeline that makes it useful to you.
The important point is to have at least some goal for your learning endeavor.
Gathering resources
After you know what you are going to learn and you have a good idea of how you will measure your achievement of the learning, you will undoubtedly need to find some resources for proceeding with your plan.
At this step, you’ll also want to start creating an outline or mind map or some other way of organizing exactly what things you decided to learn about when you defined your scope. I’ll talk more on that in a moment.
Depending on the subject matter you are trying to learn there may be a large amount of resources available or very few.
Usually, the best way to get started on finding resources is a search on the internet.
Often we are trained to only turn to one type of medium as a resource for learning when there are so many more. Consider all the types of resources that may be available on a subject:
- Magazine articles
- Field experts
- Others who are also looking to learn the subject and may have already gathered resources together
As you are compiling the resources you may draw upon, you should also be looking to figure out how others have taught the subject you are attempting to learn.
I will often look through the tables of contents of three or four books on a subject I am trying to learn, and then draw up my own outline based on the overall picture of how others have broken down the topic before.
Another great approach is to look at actual college courses, or other courses on the subject, and see how the material is broken down there.
Sometimes you’ll find though that just asking someone knowledgeable about the subject will be your best avenue.
The end of this step should result in an actionable plan that outlines what you are going to cover and how you are going to cover it along with a general idea of the resources you will use to do so.
Putting it into practice
I’ve found that the most effective way to actually learn something, once I know what I am going to learn and where I am going to get the information from, is to study and do at almost the same time.
Now “do” can be a very broad term when it comes to learning, so you’ll have to decide for yourself what exactly this constitutes.
If I am learning a new programming language or framework, I’ll try to actually create demos of what I am learning by working through my own examples.
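For instance, if I were picking up a newer C# feature, a throwaway demo might look something like this (the specific feature here, switch expressions with pattern matching, is just an illustrative choice on my part, not something prescribed by any particular learning plan):

```csharp
// A throwaway learning demo: exploring C# switch expressions
// and pattern matching by working through my own examples.
using System;

class Demo
{
    // Classify a value using type patterns and a guard clause.
    public static string Describe(object value) => value switch
    {
        int n when n < 0 => "a negative int",
        int n            => $"the int {n}",
        string s         => $"a string of length {s.Length}",
        _                => "something else entirely"
    };

    static void Main()
    {
        // Poke at the feature with a few inputs to see how it behaves.
        Console.WriteLine(Describe(42));
        Console.WriteLine(Describe("hello"));
        Console.WriteLine(Describe(3.14));
    }
}
```

The program itself is disposable; the value comes from typing in and tweaking your own examples, which forces you to confront what you actually understand rather than what you have merely read.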
At the same time, I may be “doing” by reorganizing information either to prepare a talk or course on the subject that I will be teaching. By attempting to take the information I am getting and restructure it in a simpler way that I can explain to someone else, I am forcing myself to undergo the process of learning instead of just reading.
Albert Einstein is often quoted as saying:
If you can’t explain it simply, you don’t understand it well enough.
So if you want to understand something “well enough,” work from the goal of being able to explain it simply.
So much more
In a short blog post, I can’t cover everything there is to know or that I have found to be true about learning to learn.
A whole volume of books could easily be written on the subject, but what I have outlined are the basics of what I generally do to learn something quickly and effectively.
I do want to briefly touch on some other aspects of this subject that I have thought about but not covered as thoroughly in this post.
Immersion
One excellent technique for learning something is to immerse yourself in it. If you really want to learn a programming language, start doing everything in that language.
If you want to learn to use keyboard shortcuts instead of mouse clicks, try taking away or limiting your mouse use for some period of time.
Immersion is a somewhat painful, but effective and fast way to learn new material.
Pair programming with newbies is an excellent example of immersion. Let them jump right in and start coding with someone who knows about the system.
Many foreign language classes also use this technique by forcing students to only speak in the language they are learning when in class.
Try and fail
While I don’t think your aim should be to acquire knowledge by trying and failing, try and fail is a great source of wisdom.
Let me clearly define the difference between knowledge and wisdom before I move on.
Knowledge is what you know that can be put into words and consists primarily of facts.
Wisdom is akin to common sense. It often cannot be put into words and cannot be fact-checked for accuracy, because it is a set of principles that rule your behavior and thinking.
You really shouldn’t try to learn something the painful way if you can just find out the answer to something by asking someone or looking up the information.
(Don’t try to pass a multiple-choice test by trial and error.)
On the other hand, go ahead with the imperfect knowledge that you have and try to apply it to something; if you fail, figure out why. This process will produce valuable learning.
In short, learning through try and fail can be good, but only when it teaches us lessons that we couldn’t learn otherwise. There is a big difference between educated failing and fumbling your way through life unprepared.
It is not necessary to learn that a stove is hot by touching it, but the best way to learn to start a business is probably to fail at one first.
Knowledge vs. skill
Everything I’ve outlined so far has been about acquiring general knowledge on a subject, not about becoming better at an art or skill.
What I mean is that there is a difference between being an expert golfer and knowing a great deal about proper golf technique.
Often a prerequisite for skills mastery is the acquisition of a large amount of knowledge on a subject, but having a large amount of knowledge on a subject does not an expert make.
The same goes for being a better programmer.
You could learn 10 different programming languages and 20 different technologies and frameworks, but simply having all this knowledge doesn’t mean you are good at applying it.
The old adage that practice makes perfect is appropriate in this situation.
There simply is no substitute for experience, and experience is obtained through practice over time. (Although, at the same time, practicing without the proper knowledge in place can put you in a worse position than not practicing at all. Ever heard of someone having to unlearn their golf swing?)
For more on this topic, check out the Dreyfus model of skill acquisition.
Changing your thinking
The key to self-education is being able to change the way you think about learning. You should no longer see yourself as a student to be taught, but rather as a researcher gathering information on a subject.
This way of thinking about education tends to go against what many of us have been taught by formalized education systems.
It takes a bit of courage to step forward and proclaim yourself as your own best educator, but the rewards of doing so are immense.