The History of the Internet Part One: Before the Web
The Internet is arguably the most important invention in the last 100 years. It has changed our lives enormously, and every year we see new innovations that can have a positive impact on our lives.
The Internet is everywhere today, so much so that it is becoming hard to remember life being any different. This series will take you back in time, back to the 1960s, when it all began.
From there, we will move through the years and see how the Internet has grown from its humble beginnings, all the way up to today’s connected world of 2.5 billion users and $200 billion spent on advertising each year.
As you read this, picture yourself living in each year as it is described. You’ll be amazed by how much things have changed.
This is a five-part series, each part covering a separate period in our history. This first part looks at the development of the Internet before the World Wide Web.
Part two covers the early years of the World Wide Web, part three covers the first browser war, part four covers web 2.0, and the final part of the series will explore virtual reality and beyond.
Welcome to the year 1962. Telephones have been around for many years, and the number of homes in the United States that own one has risen all the way up to 80%.
Various early forms of computers have also been invented over the last few decades. They are very large and extremely expensive. It is certainly not feasible to have a computer in the home.
All voice and data communication uses circuit switching: every telephone call is allocated a dedicated, end-to-end electrical connection between the two stations.
Some special-purpose machines are linked together into networks, but machines of different types cannot communicate with one another.
This year, there is an important invention at AT&T Bell Laboratories: the Transmission System 1 (T1), which can carry 24 telephone calls simultaneously over a single copper transmission line.
Even more importantly, a man working at MIT has a very bold and, perhaps, even crazy idea. His name is Joseph Carl Robnett Licklider, and he writes a series of memos about building an “Intergalactic Network,” where everyone on the globe is interconnected and can access programs and data at any site from anywhere.
This October, the Cuban Missile Crisis occurs. The world fears that the Cold War could turn hot at any moment and that the outbreak of nuclear war would cause swift global annihilation. Also this month, Mr. Licklider joins the Advanced Research Projects Agency (ARPA).
Leonard Kleinrock of MIT publishes the book Communication Nets: Stochastic Message Flow and Delay, which is the first book to comprehensively analyse and present a radically different way to send data over a network, namely by chopping up the message and sending it in small pieces.
Also this year, the engineer Paul Baran, working for the RAND Corporation, writes “On Distributed Communications Networks,” which applies these ideas to secure voice communications for the military.
To defend American communications from a possible Soviet attack, Paul is working on a network design with built-in redundancy, mimicking how the human brain can recover from injuries by bypassing a damaged region.
The Cold War continues, and the Space Race leads to Neil Armstrong and Buzz Aldrin becoming the first men on the moon.
A far less publicized, but perhaps no less important, project under development in the United States is happening at the Department of Defense’s Advanced Research Projects Agency.
It is planning to build a computer network called ARPANET.
Much of this plan is based on the work of Paul Baran who has designed a “survivable” communications system that could maintain communication between end points in the face of damage from a nuclear attack.
The main innovation is called “distributed adaptive message block switching,” which works very differently from circuit switching.
Packets of data are transmitted across wires, and data may be shared by multiple simultaneous sessions. This increases network efficiency and robustness.
Data consists of a header, used by networking hardware to direct the packet to its destination, and a payload, which is extracted and used at the destination.
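The header-plus-payload idea can be sketched in a few lines of Python. This is a toy layout invented for illustration (no real protocol uses this exact format): a 4-byte destination id and a 2-byte payload length, followed by the payload itself.

```python
import struct

# Toy packet format (illustrative only): 4-byte destination id and
# 2-byte payload length in network byte order, then the payload bytes.
HEADER_FMT = "!IH"                      # network byte order: uint32, uint16
HEADER_SIZE = struct.calcsize(HEADER_FMT)

def make_packet(dest_id: int, payload: bytes) -> bytes:
    """Prepend the header that routing hardware would read."""
    return struct.pack(HEADER_FMT, dest_id, len(payload)) + payload

def parse_packet(packet: bytes) -> tuple[int, bytes]:
    """At the destination: read the header, then extract the payload."""
    dest_id, length = struct.unpack(HEADER_FMT, packet[:HEADER_SIZE])
    return dest_id, packet[HEADER_SIZE:HEADER_SIZE + length]
```

The key point is that intermediate nodes only ever need to look at the small fixed-size header; the payload is opaque to them until it reaches its destination.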
ARPANET initially consists of four computers and is built by the technology company Bolt, Beranek and Newman (BBN). The routing is done by four Interface Message Processors (IMPs).
On October 29th, the team is ready to send the first message: “login.”
The “l” is successfully transmitted!
The “o” is successfully transmitted!
And then the system crashes. Nevertheless, this is the beginning of a new computing era.
Welsh computer scientist and computer network expert Donald Davies coins the term packet switching for the new technology. This rolls off the tongue rather more easily than ‘distributed adaptive message block switching’ and the new term proves to be more popular.
Also this year, philosopher Marshall McLuhan predicts a global village “where everyone sticks their noses into other people’s business.”
The 30-year-old computer engineer Ray Tomlinson is asked by his superiors at Bolt, Beranek and Newman to change a program called SNDMSG, which sends messages to other users of a time-sharing computer.
He decides to add code so that SNDMSG can also send these messages to other computers, and he uses the @ symbol to represent the destination. This is the first ever electronic mail message.
He says to his colleague, “Don't tell anyone! This isn't what we're supposed to be working on.”
Over on the other side of the Atlantic, in France, the Institut de Recherche en Informatique et en Automatique (IRIA) creates the CYCLADES network.
This network makes the hosts responsible for the delivery of data rather than the network itself.
Also this year, a study of ARPANET finds that 75% of all its traffic is e-mail, so it looks like Ray was onto something.
The success of the CYCLADES research project is an inspiration for further work on reliable network designs.
Bob Kahn and Vint Cerf publish “A Protocol for Packet Network Intercommunication” in the May 1974 issue of IEEE Transactions on Communications. This paper describes a common Transmission Control Program (TCP), which hides the differences between other network protocols.
By the end of this year, we have a 70-page TCP Specification which you can find here: https://tools.ietf.org/html/rfc675.
At the beginning of this year, two young men from California receive funding and business expertise from entrepreneur Mike Markkula, resulting in the incorporation of a new computer company called Apple Computer.
Three months on, they introduce their new computer, the Apple II, at the first West Coast Computer Faire, and it shows some early signs of success.
Meanwhile, work continues on the development of better networking protocols, and Jon Postel, one of the original ARPANET developers, is helping out by working as a Request for Comments editor.
He sees a problem with the current approach that is being taken and writes, “We are screwing up in our design of internet protocols by violating the principle of layering.”
He argues that there should be two protocols:
- TCP – for the end-to-end control of the conversation
- IP – for the hop-by-hop relaying of each message
The Basics of TCP/IP
We have witnessed the birth of TCP/IP, a technology which has revolutionised communication across computer networks. How does it work?
First, every computer has a copy of the TCP/IP program.
TCP then splits the message into smaller packets.
IP handles the address part of each packet, so it gets to the right destination.
At the destination, TCP reassembles the packets back into the original message.
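The split-and-reassemble steps above can be sketched in Python. This is a simplified model, not real TCP: the chunks carry only a sequence number, and shuffling them simulates packets arriving out of order after taking different routes.

```python
import random

def split_message(message: bytes, size: int) -> list[tuple[int, bytes]]:
    # The "TCP" step at the sender: chop the message into small,
    # numbered packets of at most `size` bytes each.
    return [(seq, message[start:start + size])
            for seq, start in enumerate(range(0, len(message), size))]

def reassemble(packets: list[tuple[int, bytes]]) -> bytes:
    # The "TCP" step at the destination: order the packets by
    # sequence number and join them back into the original message.
    return b"".join(chunk for _, chunk in sorted(packets))

packets = split_message(b"a message split into small pieces", size=8)
random.shuffle(packets)     # packets may arrive in any order
original = reassemble(packets)
```

Because each packet is numbered, the destination can rebuild the message no matter what order the network delivers the pieces in.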
With the assistance of the University of North Carolina, Jim Ellis and Tom Truscott, both graduate students at Duke University, establish Usenet, which exchanges messages over dial-up links and is later connected to ARPANET.
Users read and post messages (called articles or posts) to one or more categories known as newsgroups.
Words such as FAQ, flame, and spam originate from Usenet.
Also this year, Jon Postel writes the specification for a new User Datagram Protocol. UDP takes a very different approach from TCP for transmitting data, and is designed for speed over reliability.
In this specification, he explains that it “…provides a procedure for application programs to send messages to other programs with a minimum of protocol mechanism. The protocol is transaction oriented, and delivery and duplicate protection are not guaranteed.”
Time-sensitive applications often use UDP because dropping packets is preferable to waiting for delayed packets, which may not be an option in a real-time system.
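UDP's fire-and-forget style is easy to see with two sockets on the loopback interface. This minimal Python sketch sends a single datagram: there is no connection setup and no acknowledgement, and if the datagram were lost, the sender would never know.

```python
import socket

# Receiver: a UDP socket bound to the loopback interface.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))        # port 0: let the OS pick a free port
addr = receiver.getsockname()

# Sender: no handshake, no acknowledgement; sendto() just fires a datagram.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"time-sensitive update", addr)

# Each recvfrom() returns one whole datagram (message boundaries are kept).
data, _ = receiver.recvfrom(1024)
sender.close()
receiver.close()
```

Contrast this with TCP, where the two ends would first perform a handshake and the sender would retransmit anything that was not acknowledged.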
Computer engineers have long been aware of the need for more standardization in computing. When all computer systems conform to agreed standards, they can communicate with each other without regard to their underlying internal structure and technology.
Two standards bodies, the International Organization for Standardization (ISO) and the International Telegraph and Telephone Consultative Committee (CCITT), develop documentation defining similar networking models.
Work begins to merge these documents into The Basic Reference Model for Open Systems Interconnection. This model, known as the OSI model for short, describes seven layers through which computers communicate:
- Physical – transmits raw bits of data as electrical impulses, light, or radio signals.
- Data Link – relays data frames reliably.
- Network – works with packets of data by structuring and managing a multi-node network.
- Transport – transmits TCP segments and UDP datagrams between points on a network.
- Session – manages a continuous exchange of information with multiple back-and-forth transmissions.
- Presentation – translates data between a networking service and an application. For example, the American Standard Code for Information Interchange (ASCII) operates at the presentation layer.
- Application – allows for high-level application programming interfaces.
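The layering above can be illustrated with a toy encapsulation model in Python. The layer names are from the OSI model, but the bracketed "headers" are invented labels, not real header formats: on the way down the stack each layer wraps the data from the layer above, and the receiver unwraps in reverse order.

```python
# The seven OSI layers, top (application) to bottom (physical).
LAYERS = ["application", "presentation", "session",
          "transport", "network", "data link", "physical"]

def encapsulate(data: str) -> str:
    # Sender: each layer adds its own header around the layer above,
    # so the physical-layer framing ends up outermost.
    for layer in LAYERS:
        data = f"[{layer}]" + data
    return data

def decapsulate(frame: str) -> str:
    # Receiver: strip the headers in the opposite order, bottom-up.
    for layer in reversed(LAYERS):
        header = f"[{layer}]"
        assert frame.startswith(header), f"expected {header}"
        frame = frame[len(header):]
    return frame
```

Each layer only has to understand its own header, which is exactly what lets systems with different internals interoperate.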
Domain Name System
Later this year, Paul Mockapetris and Jon Postel invent the Domain Name System (DNS).
DNS is a system for naming computers and network services that is organized into a hierarchy of domains.
DNS servers translate human-readable domain names into numeric IP addresses, and vice versa.
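A miniature resolver gives the flavor of this translation in both directions. The zone data below is entirely made up for illustration (the addresses are from the reserved documentation range, not real records), and real DNS resolution walks a hierarchy of servers rather than a single dictionary.

```python
# Toy zone data; names and addresses are illustrative, not real records.
# (192.0.2.x is a reserved documentation address range.)
ZONES = {
    "org": {"example.org": "192.0.2.1"},
    "edu": {"example.edu": "192.0.2.2"},
}

def resolve(name: str) -> str:
    """Forward lookup: human-readable name -> numeric address."""
    tld = name.rsplit(".", 1)[-1]      # walk the hierarchy: root -> TLD zone
    return ZONES[tld][name]

def reverse(address: str) -> str:
    """Reverse lookup: numeric address -> name ('and vice versa')."""
    for zone in ZONES.values():
        for name, addr in zone.items():
            if addr == address:
                return name
    raise KeyError(address)
```

The hierarchical split (first find the right top-level zone, then the record inside it) is what allows the naming system to scale: no single server needs to know every name.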
A related service is WHOIS, which lets us look up the registration and contact information for a domain name. ICANN publishes a detailed history of WHOIS.
National Science Foundation Network
The National Science Foundation (NSF), a United States government agency that supports research and education, begins a new project called NSFNET for the promotion of advanced research and education with the aid of supercomputer systems.
The National Science Foundation funds the creation of five supercomputing centers, and the NSFNET project connects them together using TCP/IP.
By the end of this year, there are 2,000 computers on the Internet.
AT&T’s Transmission System technology from 1962 is used to upgrade the National Science Foundation’s network backbone to a 1.5 Mbit/s (T1) connection.
Robert Morris, son of the NSA cryptographer of the same name, writes 99 lines of code that exploit several known software vulnerabilities. The code is not a virus; it does not attach itself to existing programs or modify existing files. It is a worm, written to spread to as many computers as possible.
According to Morris, the purpose of this program was just to gauge the size of the Internet.
The program asks each computer whether it already has a copy of itself. Morris is aware that administrators could block infection by programming computers to falsely respond that they already have a copy of the worm.
To ensure the propagation of the program, Morris programs it to duplicate itself every seventh time it receives a “yes” response.
In doing this, he grossly underestimates the number of times a computer would be asked the question, and the magnitude of the effect that this simple code change would have.
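A tiny simulation shows why the 1-in-7 rule was so destructive. This is a toy model of the flaw, not the worm's actual code: a host that truthfully answers "yes, I am already infected" still receives a fresh copy on roughly every seventh probe, so on a network where hosts are probed again and again, copies pile up without bound.

```python
import random

def copies_after(probes: int, rng: random.Random) -> int:
    """Toy model: count worm copies on one already-infected host
    after it has been probed `probes` times by other instances."""
    copies = 1                        # the host is infected once
    for _ in range(probes):
        if rng.randrange(7) == 0:     # the 1-in-7 "duplicate anyway" rule
            copies += 1               # a new copy despite the "yes" answer
    return copies

# A busy host probed thousands of times accumulates hundreds of copies,
# each one consuming memory and CPU until the machine grinds to a halt.
result = copies_after(7000, random.Random(1))
```

In expectation the host gains one copy per seven probes, so the "safeguard" merely slowed the pile-up rather than preventing it.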
Morris releases it at MIT, and it spreads rapidly, with a high rate of reinfection.
This is the first Internet worm, and it goes on to infect 10% of the Internet (6000 hosts), with many computers ceasing to function, unable to cope with the number of copies of the worm that are uploaded onto them.
Due to this crisis, and the realization that even worse security threats could emerge in the future, the first Computer Emergency Response Team (CERT) Coordination Center is created.
We have seen huge progress made in computer network technology. From severe technical challenges in getting four computers to communicate with each other, technology has evolved to the point where much of the USA and many other countries around the world are connected together.
Although the progress we have seen has been significant, in the next part we will see the birth of a new technology that accelerates the evolution even faster. We will explore the first years of the World Wide Web.