The Internet has been called many things, from “a series of tubes”1 to “a vast uncataloged library”2, from the information superhighway to the new “opiate of the masses”3.  What now enters homes through a wire, physically indistinguishable from power and other utility lines, at once delivers base entertainment and provides the means for millions of livelihoods.  It has grown ubiquitous very fast; our daily lives would look alien through the eyes of someone living 50 years ago, and yet the precursors of today’s Internet were alive and well in 1970.

In late 1962 the Cold War looked like it was about to get much hotter; the US discovered that the Soviets were building a missile installation in Cuba.  MAD, short for mutually assured destruction, had become a household term and the title of a popular satirical publication (Wills, 2017). At the RAND Corporation, a young researcher named Paul Baran was thinking about telecommunications in the nuclear age, and wondered if he could improve the ‘survivability’ of the American communications infrastructure.  Though lampooned in Dr. Strangelove, the idea that a preemptive nuclear strike might disable the command and control mechanisms of a nation before it was able to retaliate was very scary to political and military powers at the time, and Baran knew this. (Kubrick, 1964) (Baran, 1964)

Meanwhile, the United Kingdom was afraid that they were experiencing a “brain drain”, particularly in the area of computer technology.  In response, the government tried to promote technological research and development while at the same time pushing industry toward consumer goods (Abbate, 2000, p.22). In this setting, a young Brit named Donald Davies was working along similar lines to Baran, but with a vastly different motivation: where Baran wanted ‘survivability’, Davies wanted ‘interactivity’.  It was hoped that pursuing interactivity would produce commercially viable products. (Abbate, 2000)

Conditions were improving for founding what would eventually become the Internet.  Two other pieces fell into place which led to the inception of the first packet-switched network in the form of the ARPANET.  The first was the appointment of Lawrence Roberts to head the project under the Advanced Research Projects Agency (ARPA) (Port, 2004).  The second was a series of decisions that, with the benefit of hindsight, we now view as critical. In 1966, when Roberts joined ARPA, an effort to build a network was already underway, but many choices about how it should be built had yet to be made.  Under Roberts’ leadership, ARPANET was up and running in less than two years, using what at the time was a very new idea: the division of tasks into layers - what CS students today would recognize as a technology stack. The ARPANET was based on a two-level stack, in which sites communicated with each other via an ‘Interface Message Processor’ (IMP) that served as the ‘Communications’ layer, while internal communication at each site was handled by the ‘Host’ layer. (Abbate, 2000, p.67)

The biggest barrier to this setup was that the IMPs were highly specialized, experimental computers which required highly specialized software.  Bolt, Beranek and Newman (BBN) of Boston won the ARPA contract to build the required IMPs, and by the end of 1969 they had successfully installed them in the four original ARPANET locations: UCLA, UC Santa Barbara, the University of Utah and the Stanford Research Institute (Scientific American, 2009).  By 1972, the ARPANET connected 15 sites across the United States, yet it was still little more than an experiment. This apparent lack of impact prompted Roberts and Robert Kahn - best known for his role in developing TCP/IP - to organize the first International Conference on Computer Communications (ICCC), to be held that year in Washington DC (Aspray and Ceruzzi, 2008, p.11).  The first ICCC was intended to show off the ARPANET, and showcased a diverse set of network applications ranging from a connection to Paris, to a remote therapy session with Joseph Weizenbaum’s Eliza, and an air-traffic control simulator (Abbate, 2000, p.78). The eye-candy served to underscore the enormous benefits a wide area network could provide to many sectors, not just computer scientists. Kahn later wrote that it was “…the watershed event that made people suddenly realize that packet switching was a real technology.” (Abbate, 2000, p.79)  Packet switching had passed its first major hurdle to widespread adoption.

Having proved its worth in the eyes of the engineering, telecommunications and computer communities, it remained to see what exactly people would do with it.  Roberts originally promoted ARPANET as a way to share resources (Abbate, 2000, p.96). He envisioned a landscape of specialized computing centers acting as boutiques, where someone wanting something done would remotely access the site with the desired ability, and run their tasks there.  Instead, it was discovered that while this happened to some extent, the network largely came to be used in unintended ways. For instance, when early network analysts saw that the MIT IMP was handling a lot of traffic, but barely sending anything out over the ARPANET, they thought something was wrong.  Instead, the MIT community had begun to use their IMP as a router for their local assets, closely mirroring today's network topologies and giving a new meaning to the concept of a ‘subnet’ (Abbate, 2000, p.94). However, the rise of electronic mail, or email, would be by far the biggest unintended outcome of the ARPANET: it was the original ‘killer app’ (Abbate, 2000, p.109, endnote 19).

ARPANET wasn’t the only network in town.  BBN had spun off a commercial version of ARPANET called Telenet, which became generally available in 1975 (Ruttan, 2006).  By 1975, ARPA-funded projects in radio- and satellite-based networking had produced a packet-radio network in Hawaii in the form of ALOHANET, and similar satellite-based techniques were coming together to form SATNET.  An ALOHANET-like packet-radio network called PRNET - a ruggedized design with military applications in mind - was also being set up around San Francisco Bay. By the mid 1970s, ARPA alone was running SATNET, ARPANET and PRNET, which each used different protocols, operated at different frequencies, and were built to connect different types of assets (Cerf, 1982).  This was not only annoying but, from the military’s perspective, dangerous: advances in communications were useless if they produced a fragmented array of incompatible systems. And this wasn’t just an American problem. In Europe there also existed disparate networks - the British National Physical Laboratory (NPL) network being perhaps the best known and most mature - many telecom carriers were planning their own packet-switching networks, and the concern and excitement over how to interconnect them was as powerful as in the US. (Abbate, 2000)

In the summer of 1973, Kahn and Vinton Cerf - who, like Kahn, would become best known for his role in developing TCP - wrote a paper in which they outlined their motivations and general ideas on how to ‘interconnect’ various networks (Cerf and Kahn, 1974).  By 1976 they had ARPA funding, and set out to build an actual system. Perceptively, Cerf and Kahn had begun this work by forming the International Network Working Group (INWG), chaired by Cerf, whose goal was to collaborate with the world’s networking experts on how best to achieve interconnectivity.  The French had been working on a project called CYCLADES - named after the Greek island group which, like an interconnected set of networks, together forms an archipelago - designed from the ground up to support interconnectivity. The British had been working on the problem for some time too. Both the French and British experiences suggested that in the basic design of an interconnection system the network, or communications, protocol should be simple, and the host protocol should do the heavy lifting.  They also agreed that the best way forward was to establish a single universal protocol. (Abbate, 2000, pp.123-127)

At the same time Kahn and Cerf were starting to push for the interconnectivity of networks, the networks themselves were becoming much faster and more versatile.  ALOHANET primarily contributed the idea of using packet acknowledgement and retransmission as a method for collision recovery. A collision occurs when two packets traveling over the same medium - in this instance a radio frequency - 'collide', causing both packets to be lost.  When the Hawaiian team started work on ALOHANET in 1970, a single broadcast channel was used in a managed fashion, such that a host attempting to send data could be certain it was the only one using the channel; but this was highly centralized, and slow. They instead allowed any host that had data ready for transmission to send the data.  A host receiving a packet would acknowledge its arrival, which provided the sender with a way to recognize if a collision had occurred (Cerf, 1982). This was a major advance in host layer communication, and though heralded as such, it would need to be refined before it was ready to be the basis for the heavy lifting that was needed for interconnectivity.

A particular problem the Hawaiian team encountered was what to do when two hosts attempted to use the same channel to send data.  Since neither host received an acknowledgement, both retransmitted. Given similar hosts, they often kept retransmitting at uniform intervals, which resulted in endless collisions.  ALOHANET's solution was to resend after a randomly chosen interval. (Abbate, 2000, p.114)
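The dynamic above can be illustrated with a toy simulation (a sketch only: the slotted model, two hosts and the particular backoff ranges are illustrative assumptions, not details of ALOHANET itself):

```python
import random

def simulate(backoff, slots=1000, hosts=2, seed=1):
    """Toy slotted model of hosts sharing one broadcast channel.
    `backoff(rng)` says how many slots a host waits after a collision.
    Returns the number of packets delivered without collision."""
    rng = random.Random(seed)
    next_try = [0] * hosts              # slot at which each host transmits
    delivered = 0
    for slot in range(slots):
        senders = [h for h in range(hosts) if next_try[h] == slot]
        if len(senders) == 1:           # sole sender: packet gets through
            delivered += 1
            next_try[senders[0]] = slot + 1       # queue the next packet
        elif len(senders) > 1:          # collision: every packet is lost
            for h in senders:
                next_try[h] = slot + 1 + backoff(rng)
    return delivered

# Identical hosts retransmitting after a fixed interval stay in lockstep
# and collide forever; a randomly chosen interval breaks the lockstep.
print(simulate(lambda rng: 3))                    # fixed interval
print(simulate(lambda rng: rng.randint(1, 8)))    # ALOHANET's fix
```

With the fixed interval, every retry lands on the same slot for both hosts and nothing is ever delivered; with the random interval, one host soon wins the channel and traffic flows.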

Meanwhile, a young Robert Metcalfe was looking for theoretical networking research as part of his graduate work at Harvard and, through an ARPA acquaintance, got hold of the ALOHANET papers.  Metcalfe realized that by modifying the retransmission interval in accordance with the network's load, significant speed could be gained. Two years later, while working at Xerox PARC, he was tasked with developing a local network to connect some recently built workstations.  Drawing on his background, he designed an ALOHANET-like 'everyone broadcasts randomly' system, but used wires instead of radio. His network, which was several orders of magnitude faster than its radio-based counterpart, came to be known as Ethernet, and is still the most popular method for connecting hosts in close proximity.  (Abbate, 2000, pp.116-118) With the development of Ethernet, the stage was set to develop a true interconnective network.
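Metcalfe's load-sensitive retransmission survives in Ethernet as 'binary exponential backoff': the busier the network appears, judged by the number of successive collisions, the wider the random window a host waits within. A minimal sketch of the idea, with the doubling window and the cap taken from the classic Ethernet design rather than from the text:

```python
import random

def backoff_slots(collisions, rng=random, cap=10):
    """Binary exponential backoff: after the n-th successive collision,
    wait a random number of slots drawn from [0, 2^n - 1], with the
    window capped so it cannot grow without bound."""
    window = 2 ** min(collisions, cap)
    return rng.randrange(window)

# A lightly loaded network retries almost immediately, while repeated
# collisions push hosts to spread their retries over wider windows.
for n in range(5):
    print(n, backoff_slots(n))
```

The design choice is the point: instead of guessing a single retry interval in advance, each host infers the load from its own collision count and adapts, which is what let a shared wire run so much faster than the fixed-channel radio scheme.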

“Vinton Cerf, Gerard Lelann [of CYCLADES], and Robert Metcalfe collaborated closely on the specifications for TCP, and thus the protocol reflected the design philosophies of Cyclades and Ethernet while deviating significantly from the approach that had been taken with the ARPANET.” (Abbate, 2000, p.127)

The result of the INWG effort was an open protocol for networking called the ‘transmission control protocol’, or TCP.  By 1977, TCP had been implemented at enough sites - including BBN in Boston and UCLA - that it was ready for a demonstration.  UCLA researchers managed to use TCP to transmit data back and forth between California and Europe. The communication passed through PRNET, ARPANET and SATNET; the data was sent via radio, satellite and wire, in a single form. (Abbate, 2000, pp.127-130)  An operable, useful ‘Internet’ had been formed.

In 1978 TCP was separated into two layers, collectively known as TCP/IP.  IP stands for Internet Protocol and is responsible only for moving packets from networked machine to networked machine.4  TCP handles the quality assurance of the connection, ensuring that errors are corrected and that order is preserved (Abbate, 2000, p.130).  This formulation of TCP/IP, with only minor modification, still forms the core of the Internet. Though other protocols for interconnection were being discussed, ARPA galvanized the adoption of TCP/IP when it enforced adoption at all ARPANET sites in 1983, and by the mid 1990s TCP/IP had established itself as the protocol of the Internet (Aspray and Ceruzzi, 2008, p.31).
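That division of labor can be sketched in miniature (purely illustrative: toy functions standing in for the two layers, not real protocol code). IP moves packets and may lose or reorder them along the way; TCP's sequence numbers let the receiver restore the original order and identify exactly what must be retransmitted:

```python
import random

def ip_layer(segments, rng):
    """IP in miniature: best-effort delivery. Packets may be dropped or
    arrive out of order, and IP makes no promises either way."""
    arrived = [s for s in segments if rng.random() > 0.25]  # some loss
    rng.shuffle(arrived)                                    # reordering
    return arrived

def tcp_receive(arrived, total):
    """TCP in miniature: sequence numbers restore the original order
    and expose the gaps the sender must retransmit."""
    got = dict(arrived)
    in_order = [got[seq] for seq in sorted(got)]
    to_resend = [seq for seq in range(total) if seq not in got]
    return in_order, to_resend

rng = random.Random(7)
segments = [(seq, f"byte-range {seq}") for seq in range(5)]
data, gaps = tcp_receive(ip_layer(segments, rng), total=5)
# `data` holds whatever arrived, back in order; `gaps` names what was lost.
```

The separation is what made the suite so durable: any medium that can move a packet at all can carry IP, while all the hard reliability logic lives in the hosts at the edges.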

In 1985, the US Department of Defense created a dedicated, ARPANET-like network for the military called MILNET.  This move also shifted ARPANET administration from ARPA to the National Science Foundation (NSF), freeing ARPANET from military oversight while keeping the network securely in the hands of the government.  In the same year, the NSF began its own networking project called NSFNET (Aspray and Ceruzzi, 2008, pp.24-25). NSFNET was based on the same open principles that ARPANET was, particularly on the TCP/IP protocol suite, but was designed with a more scalable, two-tiered architecture, and was from its inception an internet, not just a network (Abbate, 2000, p.191).  The two tiers resembled a road system where massive highways get traffic from point to distant point very fast, but are too expensive and large for most towns and cities to maintain. Instead, slightly smaller, slower roads - what might be considered ‘state’ roads in the US - connect the highways to towns, wherein traffic goes more slowly, but to very specific places.  NSFNET’s ‘backbone’ formed its highways, while regional networks formed its state roads and locational networks like university campuses and research institutes formed its towns and town roads.

The development of NSFNET, and the infrastructure that the NSF helped to design and build, forms the basis for today’s Internet, but it also spelled the end of ARPANET (Abbate, 2000, p.194).  The network that demonstrated the validity of packet switching, that first demonstrated the benefits of a layered design and that, most importantly, served as the test-bed for TCP/IP ceased to exist on February 28th, 1990, its nodes having been moved to other, more inclusive networks like NSFNET (Abbate, 2000, p.195).


  1. Kliff, Sarah. "The Internet is, in fact, a series of tubes." Washington Post, Sept. 2011.

  2. Herring, Mark. "10 Reasons Why the Internet Is No Substitute for a Library." American Libraries, April 2001.

  3. Dvorak, John. "The Internet is the Opiate of the Masses." PC Magazine, March 2011.

  4.  Networked machine or gateway


  • Abbate, J. (2000). Inventing The Internet. MIT Press.
  • Aspray, W. and Ceruzzi, P. (2008). The Internet and American Business. Cambridge, Mass.: MIT Press.
  • Baran, P. (1964). On Distributed Communication Networks. IEEE Transactions Of The Professional Technical Group On Communications, CS-12(1).
  • Cerf, V., “Packet Satellite Technology Reference Sources”, RFC 829, November 1982.
  • Cerf, V. and Kahn, R. (1974). A Protocol for Packet Network Intercommunication. IEEE Transactions on Communications, 22(5), pp.637-648.
  • Kubrick, S. (1964). Dr. Strangelove, or, How I Learned to Stop Worrying and Love the Bomb. Culver City, Calif.: Columbia TriStar Home Entertainment.
  • Port, Otis. (2004). Larry Roberts: He Made The Net Work. [online] Available at: [Accessed 29 May 2018].
  • Ruttan, V. (2006). Is War Necessary for Economic Growth?: Military Procurement and Technology Development. Oxford: Oxford University Press.
  • Scientific American. (2009). Early sketch of ARPANET's first four nodes. [online] Available at: [Accessed 29 May 2018].
  • Sommerlad, J. (2018). This is how America revived Europe after the Second World War. [online] The Independent. Available at: [Accessed 29 May 2018].
  • Wills, M. (2017). How Mad Magazine Informed America's Cultural Critique | JSTOR Daily. [online] JSTOR Daily. Available at: [Accessed 29 May 2018].