SUPPLEMENT ON INFORMATION AND COMMUNICATION TECHNOLOGY

Chapter 3

Internet History and Development

Introduction

The Internet has revolutionized the computer and communications world like nothing before. The invention of the telegraph, telephone, radio, and computer set the stage for this unprecedented integration of capabilities. The Internet is at once a world-wide broadcasting capability, a mechanism for information dissemination, and a medium for collaboration and interaction between individuals and their computers without regard for geographic location.

The Internet represents one of the most successful examples of the benefits of sustained investment and commitment to research and development of information infrastructure. Beginning with the early research in packet switching, the government, industry and academia have been partners in evolving and deploying this exciting new technology. Today, terms like "bleiner@computer.org" and "http://www.acm.org" trip lightly off the tongue of the random person on the street.

This is intended to be a brief, necessarily cursory and incomplete history. Much material currently exists about the Internet, covering history, technology, and usage. A trip to almost any bookstore will find shelves of material written about the Internet. In this paper, several of us involved in the development and evolution of the Internet share our views of its origins and history. This history revolves around four distinct aspects. There is the technological evolution that began with early research on packet switching and the ARPANET (and related technologies), and where current research continues to expand the horizons of the infrastructure along several dimensions, such as scale, performance, and higher level functionality. There is the operations and management aspect of a global and complex operational infrastructure. There is the social aspect, which resulted in a broad community of Internauts working together to create and evolve the technology. And there is the commercialization aspect, resulting in an extremely effective transition of research results into a broadly deployed and available information infrastructure.

The Internet today is a widespread information infrastructure, the initial prototype of what is often called the National (or Global or Galactic) Information Infrastructure. Its history is complex and involves many aspects - technological, organizational, and community. And its influence reaches not only to the technical fields of computer communications but throughout society as we move toward increasing use of online tools to accomplish electronic commerce, information acquisition, and community operations.

 

Origins of the Internet

The first recorded description of the social interactions that could be enabled through networking was a series of memos written by J.C.R. Licklider of MIT in August 1962 discussing his "Galactic Network" concept. He envisioned a globally interconnected set of computers through which everyone could quickly access data and programs from any site. In spirit, the concept was very much like the Internet of today. Licklider was the first head of the computer research program at DARPA, starting in October 1962. While at DARPA he convinced his successors at DARPA, Ivan Sutherland, Bob Taylor, and MIT researcher Lawrence G. Roberts, of the importance of this networking concept.

Leonard Kleinrock at MIT published the first paper on packet switching theory in July 1961 and the first book on the subject in 1964. Kleinrock convinced Roberts of the theoretical feasibility of communications using packets rather than circuits, which was a major step along the path towards computer networking. The other key step was to make the computers talk together. To explore this, in 1965 working with Thomas Merrill, Roberts connected the TX-2 computer in Mass. to the Q-32 in California with a low speed dial-up telephone line creating the first (however small) wide-area computer network ever built. The result of this experiment was the realization that the time-shared computers could work well together, running programs and retrieving data as necessary on the remote machine, but that the circuit switched telephone system was totally inadequate for the job. Kleinrock's conviction of the need for packet switching was confirmed.

In late 1966 Roberts went to DARPA to develop the computer network concept and quickly put together his plan for the "ARPANET", publishing it in 1967. At the conference where he presented the paper, there was also a paper on a packet network concept from the UK by Donald Davies and Roger Scantlebury of NPL. Scantlebury told Roberts about the NPL work as well as that of Paul Baran and others at RAND. The RAND group had written a paper on packet switching networks for secure voice in the military in 1964. It happened that the work at MIT (1961-1967), at RAND (1962-1965), and at NPL (1964-1967) had all proceeded in parallel without any of the researchers knowing about the other work. The word "packet" was adopted from the work at NPL and the proposed line speed to be used in the ARPANET design was upgraded from 2.4 kbps to 50 kbps.

In August 1968, after Roberts and the DARPA funded community had refined the overall structure and specifications for the ARPANET, an RFQ was released by DARPA for the development of one of the key components, the packet switches called Interface Message Processors (IMP's). The RFQ was won in December 1968 by a group headed by Frank Heart at Bolt Beranek and Newman (BBN). As the BBN team worked on the IMP's with Bob Kahn playing a major role in the overall ARPANET architectural design, the network topology and economics were designed and optimized by Roberts working with Howard Frank and his team at Network Analysis Corporation, and the network measurement system was prepared by Kleinrock's team at UCLA.

Due to Kleinrock's early development of packet switching theory and his focus on analysis, design and measurement, his Network Measurement Center at UCLA was selected to be the first node on the ARPANET. All this came together in September 1969 when BBN installed the first IMP at UCLA and the first host computer was connected. Doug Engelbart's project on "Augmentation of Human Intellect" (which included NLS, an early hypertext system) at Stanford Research Institute (SRI) provided a second node. SRI supported the Network Information Center, led by Elizabeth (Jake) Feinler and including functions such as maintaining tables of host name to address mapping as well as a directory of the RFC's. One month later, when SRI was connected to the ARPANET, the first host-to-host message was sent from Kleinrock's laboratory to SRI. Two more nodes were added at UC Santa Barbara and University of Utah. These last two nodes incorporated application visualization projects, with Glen Culler and Burton Fried at UCSB investigating methods for display of mathematical functions using storage displays to deal with the problem of refresh over the net, and Robert Taylor and Ivan Sutherland at Utah investigating methods of 3-D representations over the net. Thus, by the end of 1969, four host computers were connected together into the initial ARPANET, and the budding Internet was off the ground. Even at this early stage, it should be noted that the networking research incorporated both work on the underlying network and work on how to utilize the network. This tradition continues to this day.

Computers were added quickly to the ARPANET during the following years, and work proceeded on completing a functionally complete Host-to-Host protocol and other network software. In December 1970 the Network Working Group (NWG) working under S. Crocker finished the initial ARPANET Host-to-Host protocol, called the Network Control Protocol (NCP). As the ARPANET sites completed implementing NCP during the period 1971-1972, the network users finally could begin to develop applications.

In October 1972 Kahn organized a large, very successful demonstration of the ARPANET at the International Computer Communication Conference (ICCC). This was the first demonstration of this new network technology to the public. It was also in 1972 that the initial "hot" application, electronic mail, was introduced. In March Ray Tomlinson at BBN wrote the basic email message send and read software, motivated by the need of the ARPANET developers for an easy coordination mechanism. In July, Roberts expanded its utility by writing the first email utility program to list, selectively read, file, forward, and respond to messages. From there email took off as the largest network application for over a decade. This was a harbinger of the kind of activity we see on the World Wide Web today, namely, the enormous growth of all kinds of "people-to-people" traffic.

  

The Initial Internetting Concepts

The original ARPANET grew into the Internet. The Internet was based on the idea that there would be multiple independent networks of rather arbitrary design, beginning with the ARPANET as the pioneering packet switching network, but soon to include packet satellite networks, ground-based packet radio networks and other networks. The Internet as we now know it embodies a key underlying technical idea, namely that of open architecture networking. In this approach, the choice of any individual network technology was not dictated by a particular network architecture but rather could be selected freely by a provider and made to interwork with the other networks through a meta-level "Internetworking Architecture". Up until that time there was only one general method for federating networks. This was the traditional circuit switching method where networks would interconnect at the circuit level, passing individual bits on a synchronous basis along a portion of an end-to-end circuit between a pair of end locations. Recall that Kleinrock had shown in 1961 that packet switching was a more efficient switching method. Along with packet switching, special purpose interconnection arrangements between networks were another possibility. While there were other limited ways to interconnect different networks, they required that one be used as a component of the other, rather than acting as a peer of the other in offering end-to-end service.

In an open-architecture network, the individual networks may be separately designed and developed and each may have its own unique interface which it may offer to users and/or other providers, including other Internet providers. Each network can be designed in accordance with the specific environment and user requirements of that network. There are generally no constraints on the types of network that can be included or on their geographic scope, although certain pragmatic considerations will dictate what makes sense to offer.

The idea of open-architecture networking was first introduced by Kahn shortly after having arrived at DARPA in 1972. This work was originally part of the packet radio program, but subsequently became a separate program in its own right. At the time, the program was called "Internetting". Key to making the packet radio system work was a reliable end-end protocol that could maintain effective communication in the face of jamming and other radio interference, or withstand intermittent blackout such as caused by being in a tunnel or blocked by the local terrain. Kahn first contemplated developing a protocol local only to the packet radio network, since that would avoid having to deal with the multitude of different operating systems, and continuing to use NCP.

However, NCP did not have the ability to address networks (and machines) further downstream than a destination IMP on the ARPANET and thus some change to NCP would also be required. (The assumption was that the ARPANET was not changeable in this regard). NCP relied on ARPANET to provide end-to-end reliability. If any packets were lost, the protocol (and presumably any applications it supported) would come to a grinding halt. In this model NCP had no end-end host error control, since the ARPANET was to be the only network in existence and it would be so reliable that no error control would be required on the part of the hosts.

Thus, Kahn decided to develop a new version of the protocol, which could meet the needs of an open-architecture network environment. This protocol would eventually be called the Transmission Control Protocol/Internet Protocol (TCP/IP). While NCP tended to act like a device driver, the new protocol would be more like a communications protocol.

Four ground rules were critical to Kahn's early thinking:

  • Each distinct network would have to stand on its own and no internal changes could be required to any such network to connect it to the Internet.

  • Communications would be on a best effort basis. If a packet didn't make it to the final destination, it would shortly be retransmitted from the source.

  • Black boxes would be used to connect the networks; these would later be called gateways and routers. There would be no information retained by the gateways about the individual flows of packets passing through them, thereby keeping them simple and avoiding complicated adaptation and recovery from various failure modes.

  • There would be no global control at the operations level.

Other key issues that needed to be addressed were:

  • Algorithms to prevent lost packets from permanently disabling communications and enabling them to be successfully retransmitted from the source.

  • Providing for host to host "pipelining" so that multiple packets could be en route from source to destination at the discretion of the participating hosts, if the intermediate networks allowed it.

  • Gateway functions to forward packets appropriately. This included interpreting IP headers for routing, handling interfaces, breaking packets into smaller pieces if necessary, etc.

  • The need for end-end checksums, reassembly of packets from fragments and detection of duplicates, if any.

  • The need for global addressing.

  • Techniques for host to host flow control.

  • Interfacing with the various operating systems.

  • There were also other concerns, such as implementation efficiency and internetwork performance, but these were secondary considerations at first.

Kahn began work on a communications-oriented set of operating system principles while at BBN and documented some of his early thoughts in an internal BBN memorandum entitled "Communications Principles for Operating Systems". At this point he realized it would be necessary to learn the implementation details of each operating system to have a chance to embed any new protocols in an efficient way. Thus, in the spring of 1973, after starting the internetting effort, he asked Vint Cerf (then at Stanford) to work with him on the detailed design of the protocol. Cerf had been intimately involved in the original NCP design and development and already had the knowledge about interfacing to existing operating systems. So armed with Kahn's architectural approach to the communications side and with Cerf's NCP experience, they teamed up to spell out the details of what became TCP/IP.

The give and take was highly productive and the first written version of the resulting approach was distributed at a special meeting of the International Network Working Group (INWG) which had been set up at a conference at Sussex University in September 1973. Cerf had been invited to chair this group and used the occasion to hold a meeting of INWG members who were heavily represented at the Sussex Conference.

Some basic approaches emerged from this collaboration between Kahn and Cerf:

  • Communication between two processes would logically consist of a very long stream of bytes (they called them octets). The position of any octet in the stream would be used to identify it.

  • Flow control would be done by using sliding windows and acknowledgments (acks). The destination could select when to acknowledge and each ack returned would be cumulative for all packets received to that point.

  • It was left open as to exactly how the source and destination would agree on the parameters of the windowing to be used. Defaults were used initially.

  • Although Ethernet was under development at Xerox PARC at that time, the proliferation of LANs was not envisioned at the time, much less PCs and workstations. The original model was national level networks like ARPANET, of which only a relatively small number were expected to exist. Thus a 32 bit IP address was used, of which the first 8 bits signified the network and the remaining 24 bits designated the host on that network. This assumption, that 256 networks would be sufficient for the foreseeable future, was clearly in need of reconsideration when LANs began to appear in the late 1970s. (A small sketch of this address split follows this list.)
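
To make the 8/24 split concrete, the short Python sketch below (not part of the original design documents; the function name and address are invented for illustration) divides a 32 bit address into its early-style network and host parts:

    # Illustrative only: the early fixed split of a 32-bit address into an
    # 8-bit network number and a 24-bit host number, as described above.
    def split_early_ip(address):
        octets = [int(part) for part in address.split(".")]
        value = (octets[0] << 24) | (octets[1] << 16) | (octets[2] << 8) | octets[3]
        network = value >> 24          # first 8 bits: at most 256 networks
        host = value & 0x00FFFFFF      # remaining 24 bits: the host on that network
        return network, host

    print(split_early_ip("10.1.2.3"))  # -> (10, 66051)

The later introduction of address classes and, eventually, classless (CIDR) addressing relaxed this fixed split.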

The original Cerf/Kahn paper on the Internet described one protocol, called TCP, which provided all the transport and forwarding services in the Internet. Kahn had intended that the TCP protocol support a range of transport services, from the totally reliable sequenced delivery of data (virtual circuit model) to a datagram service in which the application made direct use of the underlying network service, which might imply occasional lost, corrupted or reordered packets.

However, the initial effort to implement TCP resulted in a version that only allowed for virtual circuits. This model worked fine for file transfer and remote login applications, but some of the early work on advanced network applications, in particular packet voice in the 1970s, made clear that in some cases packet losses should not be corrected by TCP, but should be left to the application to deal with. This led to a reorganization of the original TCP into two protocols, the simple IP which provided only for addressing and forwarding of individual packets, and the separate TCP, which was concerned with service features such as flow control and recovery from lost packets. For those applications that did not want the services of TCP, an alternative called the User Datagram Protocol (UDP) was added in order to provide direct access to the basic service of IP.
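
The TCP/UDP split is still visible in the sockets interface applications use today. The following sketch (host names and ports are placeholders, not from the text) shows an application choosing between the reliable byte-stream service of TCP and the raw datagram service of UDP:

    import socket

    # Reliable, ordered byte stream: the "virtual circuit" style service of TCP.
    tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    tcp.connect(("example.org", 80))            # placeholder host and port
    tcp.sendall(b"data that must arrive intact and in order")
    tcp.close()

    # Individual datagrams: UDP leaves loss, duplication and reordering
    # for the application itself to handle, as packet voice preferred.
    udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    udp.sendto(b"a single self-contained packet", ("example.org", 9999))
    udp.close()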

A major initial motivation for both the ARPANET and the Internet was resource sharing - for example allowing users on the packet radio networks to access the time sharing systems attached to the ARPANET. Connecting the two together was far more economical than duplicating these very expensive computers. However, while file transfer and remote login (Telnet) were very important applications, electronic mail has probably had the most significant impact of the innovations from that era. Email provided a new model of how people could communicate with each other, and changed the nature of collaboration, first in the building of the Internet itself (as is discussed below) and later for much of society.

There were other applications proposed in the early days of the Internet, including packet based voice communication (the precursor of Internet telephony), various models of file and disk sharing, and early "worm" programs that showed the concept of agents (and, of course, viruses). A key concept of the Internet is that it was not designed for just one application, but as a general infrastructure on which new applications could be conceived, as illustrated later by the emergence of the World Wide Web. It is the general purpose nature of the service provided by TCP and IP that makes this possible.

 

HTTP Protocol: An Overview

HTTP (HyperText Transfer Protocol) uses the Internet TCP/IP protocol stack. All information you read or write on the Web is sent across the Net in TCP/IP packets. A TCP connection is really like a responsible courier getting around in a big city (the Net) - it makes sure the data you send and receive reaches the final destination reliably while avoiding traffic jams and allowing other people to get through as well. The funny thing is that TCP drives an old car - it takes time for it to warm up, and as soon as it is done, it cools off again very quickly.

To function efficiently, HTTP must take advantage of TCP/IP's strengths and avoid its weaknesses, something that HTTP/1.0 does not do very well. Whenever a client accesses a document, an image, a sound bite, etc., HTTP/1.0 creates a new TCP connection, and as soon as it is done, the connection is immediately dismissed and never reused. As a result, TCP rarely has time to get warm, leaving lots of "cold cars" carrying little data and creating a lot of traffic jams.

HTTP/1.1 fixes this in two ways. First, it allows the client to reuse the same TCP connection (persistent connections) again and again when talking to the same server. Second, it makes sure that the courier carries as much information as possible (pipelining) so that it doesn't have to run back and forth as much. That is, not only does HTTP/1.1 use fewer TCP connections, it also makes sure that they are better used. The result is less congestion and faster delivery.
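
A rough sketch of the difference, using Python's standard http.client (the server name and paths are placeholders): with HTTP/1.1, several requests can share one TCP connection instead of paying the connection set-up cost each time.

    import http.client

    # One TCP connection, several requests: the HTTP/1.1 "courier" stays warm.
    conn = http.client.HTTPConnection("www.example.org")     # placeholder server
    for path in ("/", "/chapter1.html", "/chapter2.html"):   # placeholder paths
        conn.request("GET", path)
        response = conn.getresponse()
        body = response.read()       # drain the body before reusing the connection
        print(path, response.status, len(body))
    conn.close()

Under HTTP/1.0, each of those requests would have opened and torn down its own connection.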

Efficient Caching

Documents you read on the Web are often read by thousands and even millions of other people at the same time. This of course keeps servers very busy. Imagine that instead of having everybody talking to the same server people could get the same information much closer to where they are. This is what caching allows us to do.

Whereas HTTP/1.0 merely enabled caching, it did not specify any well-defined rules describing how a cache should interact with clients or with origin servers. This lack of control meant that most content providers and users did not trust the HTTP/1.0 caching model and instead tried to short-circuit it. The result was that many busy parts of the Internet were bogged down even more. A major part of the HTTP/1.1 specification is devoted to providing a well-defined caching model which allows both servers and clients to control the level of cacheability and the conditions under which the cache should update its contents.
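
One of the HTTP/1.1 mechanisms behind this is validation: a client or cache can ask the origin server whether its stored copy is still good instead of transferring the whole document again. A minimal sketch, assuming a server that returns an ETag validator (the host and path are placeholders):

    import http.client

    conn = http.client.HTTPConnection("www.example.org")     # placeholder server
    conn.request("GET", "/index.html")
    first = conn.getresponse()
    first.read()
    etag = first.getheader("ETag")                 # validator chosen by the server
    print("Cache-Control:", first.getheader("Cache-Control"))

    if etag:
        # Revalidate the cached copy instead of fetching it again.
        conn.request("GET", "/index.html", headers={"If-None-Match": etag})
        second = conn.getresponse()
        second.read()
        if second.status == 304:       # 304 Not Modified: the cached copy is still fresh
            print("Cached copy is still valid; no body was re-sent.")
    conn.close()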

Digest Authentication

Another important part of HTTP/1.1 is the Digest Authentication Specification. Digest authentication allows users to authenticate themselves to a server without sending their passwords in clear text, which can be sniffed by anybody listening on the network. In HTTP/1.0, so-called basic authentication sends passwords without any encryption. Although not providing real security, Digest Authentication is an important step in making the Web a more secure place to live.
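
The following sketch contrasts the two schemes. The credentials, realm and nonce are invented values, and the digest computation shown is the simplified RFC 2617 form without the optional qop fields; it is meant only to show that a hash, rather than the password itself, crosses the network.

    import base64
    import hashlib

    username, password = "alice", "secret"        # invented credentials
    realm, nonce = "example.org", "abc123"        # values a server would supply
    method, uri = "GET", "/private/"

    # Basic authentication: only base64, trivially reversible by any eavesdropper.
    basic = base64.b64encode(f"{username}:{password}".encode()).decode()
    print("Authorization: Basic", basic)

    # Digest authentication: only an MD5 hash of the credentials is transmitted.
    ha1 = hashlib.md5(f"{username}:{realm}:{password}".encode()).hexdigest()
    ha2 = hashlib.md5(f"{method}:{uri}".encode()).hexdigest()
    digest = hashlib.md5(f"{ha1}:{nonce}:{ha2}".encode()).hexdigest()
    print("Digest response =", digest)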

The HTTP History

  1. Version 0.9 - raw data. There was no differentiation between pictures and text. This could be confusing at times, and hard work for the browser.

  2. Version 1.0 - Typing and negotiation of data representation (MIME types)

  3. Version 1.1 - Improved to allow for (amongst other things):

  • persistent connections

  • chunked encoding (to mark where a message body ends, so a persistent connection can stay open; see the sketch after this list)

  • caching

  • proxy support

  • virtual hosts

  • pipelining
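
As noted above, here is a small sketch of chunked encoding, one of the HTTP/1.1 additions listed: each chunk is prefixed with its size in hexadecimal, and a zero-length chunk marks the end of the body. The sample body below is made up for illustration.

    def decode_chunked(raw):
        """Decode an HTTP/1.1 chunked body: hex size line, chunk data, repeat."""
        body, pos = b"", 0
        while True:
            line_end = raw.index(b"\r\n", pos)
            size = int(raw[pos:line_end].split(b";")[0], 16)   # ignore chunk extensions
            if size == 0:                                      # last chunk: body is complete
                return body
            start = line_end + 2
            body += raw[start:start + size]
            pos = start + size + 2                             # skip the trailing CRLF

    sample = b"5\r\nHello\r\n7\r\n, World\r\n0\r\n\r\n"        # invented example body
    print(decode_chunked(sample))                              # b'Hello, World'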

The HTTP Extension Framework

A continuing area of interest is how HTTP can be extended according to the needs of specific applications. HTTP has been extended locally, as well as globally, in ways that few could have predicted. Current efforts span an enormous range, including distributed authoring, collaboration, printing, and remote procedure call mechanisms. The usual practice is to add new header fields to the protocol, and rely on the software at the other end to recognize the header and process it accordingly. This, however, is the equivalent of relying on magic! A standard framework for defining extensions has for some time been badly needed.
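
The ad hoc practice described above looks roughly like the sketch below; the header name is invented, and whether anything meaningful happens depends entirely on the receiving software recognizing it.

    import http.client

    conn = http.client.HTTPConnection("www.example.org")      # placeholder server
    # An invented extension header: servers and proxies that do not know it
    # will silently ignore it, which is the "magic" the text warns about.
    conn.request("GET", "/", headers={"X-Example-Extension": "enabled"})
    response = conn.getresponse()
    response.read()
    print(response.status)
    conn.close()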

The HTTP Extension Framework provides a simple yet powerful mechanism for extending HTTP. The Framework enables authors to introduce extensions in a systematic manner: programmers will be able to specify which extensions are introduced, along with information about who the recipient is and how the recipient should deal with them.

HTTP Related Protocols

This is a small sample of other Internet transfer protocols and information representation protocols.

  1. IMAP

The Internet Message Access Protocol, Version 4rev1 (IMAP4rev1) allows a client to access and manipulate electronic mail messages on a server. IMAP4rev1 permits manipulation of remote message folders, called "mailboxes", in a way that is functionally equivalent to local mailboxes. IMAP4rev1 also provides the capability for an offline client to resynchronize with the server.

IMAP4rev1 includes operations for creating, deleting, and renaming mailboxes; checking for new messages; permanently removing messages; setting and clearing flags; [RFC-822] and [MIME-IMB] parsing; searching; and selective fetching of message attributes, texts, and portions thereof. Messages in IMAP4rev1 are accessed by the use of numbers. These numbers are either message sequence numbers or unique identifiers.
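
A brief sketch of these operations using Python's standard imaplib; the server name, account and search criterion are placeholders.

    import imaplib

    client = imaplib.IMAP4_SSL("imap.example.org")        # placeholder server
    client.login("user@example.org", "password")          # placeholder credentials
    client.select("INBOX")                                # mailboxes are chosen by name
    status, data = client.search(None, "UNSEEN")          # server-side search
    for num in data[0].split():                           # message sequence numbers
        status, parts = client.fetch(num, "(BODY[HEADER.FIELDS (SUBJECT)])")
        print(num, parts[0][1])
    client.logout()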

  2. MIME

RFC 822 defines a message representation protocol which specifies considerable detail about message headers, but which leaves the message content, or message body, as flat ASCII text. MIME redefines the format of message bodies to allow multi-part textual and non-textual message bodies to be represented and exchanged without loss of information. This is based on earlier work documented in RFC 934 and RFC 1049, but extends and revises that work. Because RFC 822 said so little about message bodies, this document is largely orthogonal to (rather than a revision of) RFC 822.
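
A short sketch of building a MIME multipart body with Python's standard email package (addresses and content are invented): the RFC 822 headers stay at the top, while the body is split into typed parts.

    from email.mime.multipart import MIMEMultipart
    from email.mime.text import MIMEText

    message = MIMEMultipart("alternative")
    message["From"] = "alice@example.org"          # invented addresses
    message["To"] = "bob@example.org"
    message["Subject"] = "A MIME example"

    message.attach(MIMEText("Plain-text version of the body.", "plain"))
    message.attach(MIMEText("<p>HTML version of the <b>same</b> body.</p>", "html"))

    print(message.as_string())    # RFC 822 headers followed by the typed MIME parts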

  3. File Transfer Protocol (FTP)

The file transfer protocol currently most used for accessing fairly stable public information over a wide area is "Anonymous FTP". This means the use of the Internet File Transfer Protocol without authentication. As the WWW project currently operates for the sake of public information, anonymous FTP is quite appropriate, and WWW can pick up any information provided by anonymous FTP. FTP is defined in RFC 959, which includes material from many previous RFCs. Directories are browsed as hypertext. The browser will notice references to files which are in fact accessible as locally mounted (or on DECnet on VMS systems) and use direct access instead.
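
A minimal sketch of anonymous FTP with Python's standard ftplib (the host and directory are placeholders): the client logs in with the conventional anonymous account and lists a public directory, the same listing a Web browser would render as hypertext.

    from ftplib import FTP

    ftp = FTP("ftp.example.org")      # placeholder host
    ftp.login()                       # defaults to the "anonymous" account
    ftp.cwd("/pub")                   # placeholder public directory
    ftp.retrlines("LIST")             # print the directory listing
    ftp.quit()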

 

  4. Network News Transfer Protocol (NNTP)

The "Network News Transfer Protocol" (NNTP) is defined in RFC 977 by Kantor and Lampsley. This allows transient news information in the USENET news format to be exchanged over the internet. The format of news articles is defined in RFC 850, Standard for Interchange of USENET Messages by Mark Horton. This in turn refers to the standard RFC 822 which defines the format of internet mail messages. News articles make good examples of hypertext, as articles contain references to other articles and news groups. News groups appear like directories, but more informative.

  5. Gopher

The Gopher distributed information system uses a lightweight protocol very similar to HTTP. Therefore, it is now included in every WWW client, so that the Gopher world can be browsed as part of the Web. Gopher menus are easily mapped onto hypertext links. It may be that future versions of the Gopher and HTTP protocols will converge.

  6. Z39.50

With the use of the freeWAIS software from CNIDR, the W3 software now accesses WAIS servers directly. WAIS is a variant of the Z39.50 protocol, which is being developed from earlier versions that did not have the functionality required for networked information retrieval (NIR).

 

HTML: An Overview

The Web owes its origins to many people, starting back in medieval times with the development of a rich system of cross references and marginalia. The basic document model for the Web was set: things in the page such as the text and graphics, and cross references to other works. These early hypertext links were able to target documents to a fine level thanks to conventions for numbering lines or verses.

Vannevar Bush, in his 1945 article "As We May Think", described his vision for a computer-aided hypertext system he named the memex. His vivid description of browsing a web of linked information includes the ability to easily insert new information of your own, adding to the growing web. Dr. Bush was the Director of the US Office of Scientific Research and Development and coordinated wartime research in the application of science to war.

Other visionaries include Douglas Engelbart, who founded the Augmentation Research Center at the Stanford Research Institute (SRI) in 1963. He is widely credited with helping to develop the computer mouse, hypertext, groupware and many other seminal technologies. He now directs the Bootstrap Institute, which is dedicated to the development of collective IQ in networked communities.

Ted Nelson has spent his life promoting a global hypertext system called Xanadu. He coined the term hypertext, and is well known for his books Literary Machines and Dream Machines, which describe hypermedia, including branching movies such as the film at the Czechoslovakian Pavilion at Expo '67.

The ACM SIGWEB, formerly SIGLINK, has for many years been the center for academic research into hypertext systems, sponsoring a series of annual conferences. SIGLINK was formed in 1989 following a workshop on hypertext, held in 1987 in Chapel Hill, North Carolina.

Bill Atkinson, best known for MacPaint, an easy to use bitmap painting program, gave the world its first popular hypertext system, HyperCard. Released in 1987, HyperCard made it easy for anyone to create graphical hypertext applications. It featured bitmapped graphics, form fields, scripting and fast full text search. HyperCard is based on a stack-of-cards metaphor with shared backgrounds. It spawned imitators such as Asymetrix ToolBook, which used drawn graphics and ran on the PC. OWL's Guide was the first professional hypertext system for large scale applications; it predates HyperCard by one year and followed in the footsteps of Xerox NoteCards, a Lisp-based hypertext system released in 1985.

Tim Berners-Lee and Robert Cailliau both worked at CERN, an international high energy physics research center near Geneva. In 1989 they collaborated on ideas for a linked information system that would be accessible across the wide range of different computer systems in use at CERN. At that time many people were using TeX and PostScript for their documents. A few were using SGML. Tim realized that something simpler was needed that would cope with everything from dumb terminals to high-end graphical X Window workstations. HTML was conceived as a very simple solution, and matched with a very simple network protocol, HTTP.

CERN launched the Web in 1991 along with a mailing list called www-talk. Other people thinking along the same lines soon joined and helped to grow the web by setting up Web sites and implementing browsers such as Cello, Viola, and MidasWWW. The breakthrough came when the National Center for Supercomputing Applications (NCSA) at Urbana-Champaign encouraged Marc Andreessen and Eric Bina to develop the X Window Mosaic browser. It was later ported to PCs and Macs and became a runaway success story. The Web grew exponentially, eclipsing other Internet based information systems such as WAIS, Hytelnet, Gopher, and UseNet.

 

History of the Net

This section provides an outline of events that comprise the history of the Internet. Each event is presented individually and its significance is assumed to be self-evident. A detailed historical treatment of each event is beyond the scope of this section. While the section may lack continuity, the items contained here are too diverse and disjoint to be put in narrative format in any cohesive fashion. Most of the contents of this section are based on [Zakon].

It all began a long, long time ago. 1858, actually...

 


1858

  • The “Atlantic cable” was installed across the ocean with the idea of connecting the communication systems in the US and Europe. While this was a great idea, the 1858 implementation of it was only operational for a few days.

  • The implementation was attempted again in 1866, and this time with great success. The cable laid in 1866 remained operational for almost 100 years.


1957  

  • In 1957, the Soviet Union launched Sputnik. As a response to the Soviet research efforts, President Dwight D. Eisenhower instructed the Department of Defense to establish the Advanced Research Projects Agency or ARPA. The agency started with great success and launched the first American satellite within 18 months of the agency's conception. Several years later, ARPA was also given the task of developing a reliable communications network, specifically for use by computers. The primary motivation for this was to have a network of decentralized military computers connected in such a way that in the case of destruction of one or several nodes in a potential war, the network would still survive with communication lines between remaining nodes.

  • In 1962 Dr. J.C.R. Licklider was given the task of leading ARPA's research efforts in improving the use of computer technology in the military. It was due to Dr. Licklider's influence that ARPA's primary research efforts moved from the private sector to the universities around the US. His work paved the way for the creation of ARPANET.


1962

  • Paul Baran of RAND Corporation publishes the paper "On Distributed Communications Networks" which introduces Packet-switching (PS) networks; no single outage point.


1965

  • ARPA sponsors study on "cooperative network of time-sharing computers" -- TX-2 at MIT Lincoln Lab and Q-32 at System Development Corporation (Santa Monica, CA) are directly linked (without packet switches).


1967

  • At the ACM Symposium on Operating Systems Principles, a plan was presented for a packet-switching network. Also, the first design paper on ARPANET was published by Lawrence G. Roberts.


1968

  • PS-network was presented to the Advanced Research Projects Agency (ARPA).

  • It is argued that the first packet-switching network was operational and in-place at the National Physical Laboratories in the UK. Parallel efforts in France also resulted in an early packet-switching network at Societe Internationale de Telecommunications Aeronautiques in 1968-1970. 


1969

  • First ARPANET node was established at UCLA's Network Measurements Center.

  • Subsequent nodes were established at Stanford Research Institute (SRI), University of Utah in Salt Lake City, and UCSB (UC Santa Barbara).

  • The Interface Message Processor (IMP) was developed by Bolt Beranek and Newman (BBN) on a Honeywell DDP-516. The system delivered messages between the four nodes listed above.

  • First RFC (Request For Comments), "Host Software", was submitted by Steve Crocker.


1970

  • Norman Abramson develops ALOHAnet at the University of Hawaii. ALOHAnet provided the background for the work which later became Ethernet.

  • ARPANET hosts start using Network Control Protocol (NCP). This protocol was used until 1983, when it was replaced by TCP/IP.


1971

  • ARPANET had grown to 15 nodes: UCLA, SRI, UCSB, University of Utah, BBN, MIT, RAND, SDC, Harvard, Lincoln Lab, Stanford, UIUC, CWRU, CMU, and NASA (Ames).


1972

  • RFC 318: Telnet

  • Ray Tomlinson writes e-mail program to operate across networks

  • Inter-Networking Working Group (INWG), headed by Vinton Cerf, is established and given the task of investigating common protocols.

  • Public demonstration of the ARPANET by Bob Kahn of BBN. The demonstration consisted of a "packet switch", and a TIP (Terminal Interface Processor) in the basement of the Washington Hilton Hotel. The public could use the TIP to run distributed applications across the US. According to Vinton Cerf, the demonstration was a "roaring success".


1973

  • ARPANET goes international:

  • University College London -- UK

  • Royal Radar Establishment -- Norway

  • First published outline for the idea of Ethernet: Bob Metcalfe's Harvard PhD Thesis.

  • RFC 454: File Transfer Protocol (FTP)


1974

  • The design of TCP was given in "A Protocol for Packet Network Intercommunication" by Vinton Cerf and Bob Kahn.


1976

  • UUCP (Unix to Unix Copy Program) is developed at AT&T Bell Labs and distributed with UNIX the following year.


1977

  • RFC 733: Mail specification

  • THEORYNET, an email system serving over 100 researchers, is established at the University of Wisconsin.

  • First demonstration of ARPANET/Packet Radio


1979

  • Computer scientists from University of Wisconsin, NSF, DARPA, and other universities meet to establish Computer Science network.

  • Tom Truscott, Jim Ellis, and Steve Bellovin implement USENET.

  • Initially only between UNC and Duke

  • All newsgroups originally under the net.* hierarchy

  • Internet Configuration Board is created by ARPA.

  • PRNET (Packet Radio Network) is established.


1981

  • BITNET (Because It's Time NETwork) established.

  • CSNET (Computer Science NETwork) established.

  • Based on funding from NSF

  • Stated goal of providing network access to universities without ARPANET access


1982

  • TCP (Transmission Control Protocol) and IP (Internet Protocol) are selected as the protocol suite for the ARPANET.

  • TCP/IP selected by DoD as standard

  • RFC 827: Exterior Gateway Protocol


1983

  • Name server developed at University of Wisconsin.

  • Gateway between CSNET and ARPANET is established.

  • ARPANET is split into ARPANET and MILNET.

  • UNIX machines with built-in TCP/IP gain in popularity.

  • Internet Activities Board (IAB) replaces ICCB.

  • Tom Jennings develops FidoNet.


1984

  • Domain Name System (DNS) introduced.

  • Over 1000 hosts

  • Japan Unix Network operational


1986

  • NSFNET created.

  • Originally composed of five supercomputer centers connected with 56 Kbps lines.

  • Other universities join in.

  • Network News Transfer Protocol (NNTP) created.

  • Mail Exchanger (MX) records developed by Craig Partridge allow non-IP network hosts to have domain addresses.


1987

  • NSF and Merit Network, Inc. agree to manage the NSFNET backbone.

  • Over 10,000 Internet hosts


1988

  • November 1 - The Morris Internet worm affects roughly 10% of Internet hosts

  • DoD adopts OSI.

  • NSFNET backbone is upgraded to T1 (1.544Mbps)

  • Canada, Denmark, Finland, France, Iceland, Norway, Sweden are on NSFNET.


1989

  • Over 100,000 hosts

  • CSNET merges into BITNET to form Corporation for Research and Education Networking (CREN).

  • Internet Engineering Task Force (IETF) created

  • Internet Research Task Force (IRTF) created

  • Australia, Germany, Israel, Italy, Japan, Mexico, Netherlands, New Zealand, Puerto Rico, UK on NSFNET


1990

  • ARPANET ceases to exist; NSFNET takes over as the backbone

  • Peter Deutsch, Alan Emtage, and Bill Heelan at McGill release Archie

  • Argentina, Austria, Belgium, Brazil, Chile, Greece, India, Ireland, South Korea, Spain, Switzerland on NSFNET


1991

  • Wide Area Information Servers (WAIS) is invented by Brewster Kahle

  • Gopher released by Paul Lindner and Mark P. McCahill from the University of Minnesota

  • Tim Berners-Lee at CERN releases World-Wide Web (WWW)

  • NSFNET backbone upgraded to T3 (44.736Mbps)

  • NSFNET traffic passes 1 trillion bytes/month and 10 billion packets/month

  • Croatia, Czech Republic, Hong Kong, Hungary, Poland, Portugal, Singapore, South Africa, Taiwan, Tunisia on NSFNET


1992

  • Internet Society (ISOC) is formed

  • Cameroon, Cyprus, Ecuador, Estonia, Kuwait, Latvia, Luxembourg, Malaysia, Slovakia, Slovenia, Thailand, Venezuela on NSFNET


1993

  • InterNIC created by NSF

  • Over 1,000,000 hosts

  • Veronica, a gopherspace search tool, is released by University of Nevada

  • US National Information Infrastructure Act

  • WWW proliferates at a 341,634% annual growth rate of service traffic. Gopher's growth is 997%.

  • Bulgaria, Costa Rica, Egypt, Fiji, Ghana, Guam, Indonesia, Kazakhstan, Kenya, Liechtenstein, Peru, Romania, Russian Federation, Turkey, Ukraine, UAE, Virgin Islands on NSFNET


1994

  • NSFNET traffic passes 10 trillion bytes/month

  • Services ranked by share of NSFNET packets and bytes, in order:

  • FTP

  • WWW

  • telnet

  • Algeria, Armenia, Bermuda, Burkina Faso, China, Colombia, French Polynesia, Jamaica, Lebanon, Lithuania, Macau, Morocco, New Caledonia, Nicaragua, Niger, Panama, Philippines, Senegal, Sri Lanka, Swaziland, Uruguay, Uzbekistan on NSFNET


1995

  • NSFNET reverts to a research network. Main US backbone traffic is now routed through interconnected network providers

  • WWW surpasses ftp-data in March as the service with greatest traffic on NSFNet based on packet count, and in April based on byte count

  • Traditional online dial-up systems (CompuServe, America Online, Prodigy) begin to provide Internet access

  • Registration of domain names is no longer free. Beginning 14 September, a US$50 annual fee is imposed; registration had until then been subsidized by NSF. NSF continues to pay for .edu registration, and on an interim basis for .gov

  • Technologies of the Year: WWW, Search engines

  • Emerging Technologies: Mobile code (JAVA, JAVAscript), Virtual environments (VRML), Collaborative tools


1996

Internet phones catch the attention of US telecommunication companies who ask the US Congress to ban the technology (which has been around for years)

MCI upgrades Internet backbone adding ~13,000 ports, bringing the effective speed from 155Mbps to 622Mbps.

The Internet Ad Hoc Committee announces plans to add 7 new generic Top Level Domains (gTLD): .firm, .store, .web, .arts, .rec, .info, .nom. The IAHC plan also calls for a competing group of domain registrars worldwide.

The WWW browser war, fought primarily between Netscape and Microsoft, has ushered in a new age in software development, whereby new releases are made quarterly with the help of Internet users eager to test upcoming (beta) versions.

RFC 1925: The Twelve Networking Truths

Country domains registered: Qatar (QA), Central African Republic (CF), Oman (OM), Norfolk Island (NF), Tuvalu (TV), French Polynesia (PF), Syria (SY), Aruba (AW), Cambodia (KH), French Guiana (GF), Eritrea (ER), Cape Verde (CV), Burundi (BI), Benin (BJ), Bosnia-Herzegovina (BA), Andorra (AD), Guadeloupe (GP), Guernsey (GG), Isle of Man (IM), Jersey (JE), Lao (LA), Maldives (MV), Marshall Islands (MH), Mauritania (MR), Northern Mariana Islands (MP), Rwanda (RW), Togo (TG), Yemen (YE), Zaire (ZR)

Top 10 Domains by Host #: com, edu, net, uk, de, jp, us, mil, ca, au

Technologies of the Year: Search engines, JAVA, Internet Phone

Emerging Technologies: Virtual environments (VRML), Collaborative tools, Internet appliance (Network Computer)


1997

71,618 mailing lists registered at Liszt, a mailing list directory

The American Registry for Internet Numbers (ARIN) is established to handle administration and registration of IP numbers to the geographical areas currently handled by Network Solutions (InterNIC), starting March 1998.

Domain name business.com sold for US$150,000

Early in the morning of 17 July, human error at Network Solutions causes the DNS table for .com and .net domains to become corrupted, making millions of systems unreachable.

101,803 Name Servers in whois database

Country domains registered: Falkland Islands (FK), East Timor (TP), Republic of Congo (CG), Christmas Island (CX), Gambia (GM), Guinea-Bissau (GW), Haiti (HT), Iraq (IQ), Libya (LY), Malawi (MW), Martinique (MQ), Montserrat (MS), Myanmar (MM), French Reunion Island (RE), Seychelles (SC), Sierra Leone (SL), Somalia (SO), Sudan (SD), Tajikistan (TJ), Turkmenistan (TM), Turks and Caicos Islands (TC), British Virgin Islands (VG), Heard and McDonald Islands (HM), French Southern Territories (TF), British Indian Ocean Territory (IO), Svalbard and Jan Mayen Islands (SJ), St Pierre and Miquelon (PM), St Helena (SH), South Georgia/Sandwich Islands (GS), Sao Tome and Principe (ST), Ascension Island (AC), US Minor Outlying Islands (UM), Mayotte (YT), Wallis and Futuna Islands (WF), Tokelau Islands (TK), Chad Republic (TD), Afghanistan (AF), Cocos Island (CC), Bouvet Island (BV), Liberia (LR), American Samoa (AS), Niue (NU), Equatorial Guinea (GQ), Bhutan (BT), Pitcairn Island (PN), Palau (PW), Democratic Republic of Congo (CD)

Top 10 Domains by Host #: com, edu, net, jp, uk, de, us, au, ca, mil

Technologies of the Year: Push, Multicasting

Emerging Technologies: Push, Streaming Media


1998

US Department of Commerce (DoC) releases the Green Paper outlining its plan to privatize DNS on 30 January. This is followed up by a White Paper on June 5.

Web size estimates range between 275 (Digital) and 320 (NEC) million pages for 1Q

Network Solutions registers its 2 millionth domain on 4 May

Electronic postal stamps become a reality, with the US Postal Service allowing stamps to be purchased and downloaded for printing from the Web.

Compaq pays US$3.3 million for altavista.com

Indian ISP market is deregulated in November causing a rush for ISP operation licenses

US DoC enters into an agreement with the Internet Corporation for Assigned Names and Numbers (ICANN) to establish a process for transitioning DNS from US Government management to industry (25 November)

Country domains registered: Nauru (NR), Comoros (KM)

Bandwidth Generators: Winter Olympics (Feb), World Cup (Jun-Jul), Starr Report (11 Sep), Glenn space launch

Top 10 Domains by Host #: com, net, edu, mil, jp, us, uk ,de, ca, au

Technologies of the Year: E-Commerce, E-Auctions, Portals

Emerging Technologies: E-Trade, XML, Intrusion Detection


1999

Internet access becomes available to the Saudi Arabian (.sa) public in January

IBM becomes the first Corporate partner to be approved for Internet2 access

US State Court rules that domain names are property that may be garnished

MCI/Worldcom, the vBNS provider for NSF, begins upgrading the US backbone to 2.5 Gbps

MCI/Worldcom launches vBNS+, a commercialized version of vBNS targeted at smaller educational and research institutions

ISOC approves the formation of the Internet Societal Task Force (ISTF). Vint Cerf serves as first chair

business.com is sold for US$7.5 million; it was purchased in 1997 for US$150,000 (30 Nov)

Top 10 TLDs by Host #: com, net, edu, jp, uk, mil, us, de, ca, au

Technologies of the Year: E-Trade, Online Banking, MP3

Emerging Technologies: Net-Cell Phones, Thin Computing, Embedded Computing


2000

The US timekeeper (USNO) and a few other time services around the world report the new year as 19100 on 1 Jan

Web size estimates by NEC-RI and Inktomi surpass 1 billion indexable pages

Various domain name hijackings took place in late May and early June, including internet.com, bali.com, and web.net

A testbed allowing the registration of domain names in Chinese, Japanese, and Korean begins operation on 9 November. This testbed only allows the second-level domain to be non-English, still forcing use of .com, .net, .org. The Chinese government blocks internal registrations, stating that registrations in Chinese are its sovereign right

ICANN selects new TLDs: .aero, .biz, .coop, .info, .museum, .name, .pro (16 Nov)
These domains will not be available until sometime in 2001 after contract negotiation and US Dept of Commerce approval

Technologies of the Year: ASP, Napster

Emerging Technologies: Wireless devices, IPv6

Lawsuits of the Year: Napster, DeCSS

 

 

Commercialization of the Technology

Commercialization of the Internet involved not only the development of competitive, private network services, but also the development of commercial products implementing the Internet technology. In the early 1980s, dozens of vendors were incorporating TCP/IP into their products because they saw buyers for that approach to networking. Unfortunately they lacked real information both about how the technology was supposed to work and about how the customers planned on using this approach to networking. Many saw it as a nuisance add-on that had to be glued on to their own proprietary networking solutions: SNA, DECNet, Netware, NetBios. The DoD had mandated the use of TCP/IP in many of its purchases but gave little help to the vendors regarding how to build useful TCP/IP products.

In 1985, recognizing this lack of information availability and appropriate training, Dan Lynch in cooperation with the IAB arranged to hold a three day workshop for ALL vendors to come learn about how TCP/IP worked and what it still could not do well. The speakers came mostly from the DARPA research community who had both developed these protocols and used them in day to day work. About 250 vendor personnel came to listen to 50 inventors and experimenters. The results were surprises on both sides: the vendors were amazed to find that the inventors were so open about the way things worked (and what still did not work) and the inventors were pleased to listen to new problems they had not considered, but were being discovered by the vendors in the field. Thus a two way discussion was formed that has lasted for over a decade.

After two years of conferences, tutorials, design meetings and workshops, a special event was organized that invited those vendors whose products ran TCP/IP well enough to come together in one room for three days to show off how well they all worked together and also ran over the Internet. In September of 1988 the first Interop trade show was born. 50 companies made the cut. 5,000 engineers from potential customer organizations came to see if it all did work as was promised. It did. Why? Because the vendors worked extremely hard to ensure that everyone's products interoperated with all of the other products - even with those of their competitors. The Interop trade show has grown immensely since then and today it is held in 7 locations around the world each year to an audience of over 250,000 people who come to learn which products work with each other in a seamless manner, learn about the latest products, and discuss the latest technology.

In parallel with the commercialization efforts that were highlighted by the Interop activities, the vendors began to attend the IETF meetings that were held 3 or 4 times a year to discuss new ideas for extensions of the TCP/IP protocol suite. Starting with a few hundred attendees mostly from academia and paid for by the government, these meetings now often exceed a thousand attendees, mostly from the vendor community and paid for by the attendees themselves. This self-selected group evolves the TCP/IP suite in a mutually cooperative manner. The reason it is so useful is that it is composed of all stakeholders: researchers, end users and vendors.

Network management provides an example of the interplay between the research and commercial communities. In the beginning of the Internet, the emphasis was on defining and implementing protocols that achieved interoperation. As the network grew larger, it became clear that the sometimes ad hoc procedures used to manage the network would not scale. Manual configuration of tables was replaced by distributed automated algorithms, and better tools were devised to isolate faults. In 1987 it became clear that a protocol was needed that would permit the elements of the network, such as the routers, to be remotely managed in a uniform way. Several protocols for this purpose were proposed, including Simple Network Management Protocol or SNMP (designed, as its name would suggest, for simplicity, and derived from an earlier proposal called SGMP), HEMS (a more complex design from the research community) and CMIP (from the OSI community). A series of meetings led to the decision that HEMS would be withdrawn as a candidate for standardization, in order to help resolve the contention, but that work on both SNMP and CMIP would go forward, with the idea that SNMP could be a more near-term solution and CMIP a longer-term approach. The market could choose the one it found more suitable. SNMP is now used almost universally for network-based management.

In the last few years, we have seen a new phase of commercialization. Originally, commercial efforts mainly comprised vendors providing the basic networking products, and service providers offering the connectivity and basic Internet services. The Internet has now become almost a "commodity" service, and much of the latest attention has been on the use of this global information infrastructure for support of other commercial services. This has been tremendously accelerated by the widespread and rapid adoption of browsers and the World Wide Web technology, allowing users easy access to information linked throughout the globe. Products are available to facilitate the provisioning of that information and many of the latest developments in technology have been aimed at providing increasingly sophisticated information services on top of the basic Internet data communications.

 

History of the Future

On October 24, 1995, the FNC unanimously passed a resolution defining the term Internet. This definition was developed in consultation with members of the internet and intellectual property rights communities. RESOLUTION: The Federal Networking Council (FNC) agrees that the following language reflects our definition of the term "Internet". "Internet" refers to the global information system that -- (i) is logically linked together by a globally unique address space based on the Internet Protocol (IP) or its subsequent extensions/follow-ons; (ii) is able to support communications using the Transmission Control Protocol/Internet Protocol (TCP/IP) suite or its subsequent extensions/follow-ons, and/or other IP-compatible protocols; and (iii) provides, uses or makes accessible, either publicly or privately, high level services layered on the communications and related infrastructure described herein.

The Internet has changed much in the two decades since it came into existence. It was conceived in the era of time-sharing, but has survived into the era of personal computers, client-server and peer-to-peer computing, and the network computer. It was designed before LANs existed, but has accommodated that new network technology, as well as the more recent ATM and frame switched services. It was envisioned as supporting a range of functions from file sharing and remote login to resource sharing and collaboration, and has spawned electronic mail and more recently the World Wide Web. But most important, it started as the creation of a small band of dedicated researchers, and has grown to be a commercial success with billions of dollars of annual investment.

One should not conclude that the Internet has now finished changing. The Internet, although a network in name and geography, is a creature of the computer, not the traditional network of the telephone or television industry. It will, indeed it must, continue to change and evolve at the speed of the computer industry if it is to remain relevant. It is now changing to provide such new services as real time transport, in order to support, for example, audio and video streams. The availability of pervasive networking (i.e., the Internet) along with powerful affordable computing and communications in portable form (i.e., laptop computers, two-way pagers, PDAs, cellular phones), is making possible a new paradigm of nomadic computing and communications.

This evolution will bring us new applications - Internet telephone and, slightly further out, Internet television. It is evolving to permit more sophisticated forms of pricing and cost recovery, a perhaps painful requirement in this commercial world. It is changing to accommodate yet another generation of underlying network technologies with different characteristics and requirements, from broadband residential access to satellites. New modes of access and new forms of service will spawn new applications, which in turn will drive further evolution of the net itself.

The most pressing question for the future of the Internet is not how the technology will change, but how the process of change and evolution itself will be managed. As this paper describes, the architecture of the Internet has always been driven by a core group of designers, but the form of that group has changed as the number of interested parties has grown. With the success of the Internet has come a proliferation of stakeholders - stakeholders now with an economic as well as an intellectual investment in the network.

We now see, in the debates over control of the domain name space and the form of the next generation IP addresses, a struggle to find the next social structure that will guide the Internet in the future. The form of that structure will be harder to find, given the large number of concerned stake-holders. At the same time, the industry struggles to find the economic rationale for the large investment needed for the future growth, for example to upgrade residential access to a more suitable technology. If the Internet stumbles, it will not be because we lack for technology, vision, or motivation. It will be because we cannot set a direction and march collectively into the future.

Figure 9. Internet Development