The Origins of Computer Networking

 


 

U.S. DoD Advanced Research Projects Agency (ARPA):

 

In the 1960s the DoD wanted a technology that would permit the nuclear weapons arsenal to maintain communications in the event of a nuclear attack.  Therefore, they initiated a project to develop a robust, self-healing communications architecture.

 

General Networking:

 

MIT (Leonard Kleinrock, Ph.D., later at UCLA), the RAND Corporation (Paul Baran), and the National Physical Laboratory (NPL, Donald Davies, UK) (see, for instance, http://www.cs.utexas.edu/users/dprince/cs370/think/npl) independently developed the conceptual framework for what is called packet switching.  These ideas were first published in 1966–1967, and the first systems went live in 1969–1970 or thereabouts.  In the U.S. the first network was called the ARPANET.  It connected four resource computers (SDS, IBM, and Digital machines) through Interface Message Processors (Honeywell minicomputers programmed by Bolt Beranek and Newman, later BBN; see http://www.bbn.com/).

 

Access Technologies:

 

Ethernet:

 

The University of Hawaii (Norman Abramson) developed a contention-based, single-channel broadcast radio network to communicate between an IBM mainframe and card readers located on the various islands.  Remember that computer programs were stored on punched cards, so this permitted users at remote locations to submit jobs to the single, costly computer.  It operated at 4800 (later 9600) bits per second.  It was called the ALOHA system.

 

The XEROX Palo Alto Research Center (PARC) hired Bob Metcalfe in 1972 and assigned him the task of connecting the PARC computers to the ARPANET.  He discovered the ALOHA network papers in the library, made some changes, and in May 1973 demonstrated connecting several Alto computers to the new PARC-developed laser printer.  The network operated at 2.94 Mbps and used half-inch coaxial cable.  For most practical purposes this date marks the birth of the local area network (LAN).  Note that the motivation was the same then as now – connect to those laser printers.  Metcalfe and co-workers formalized what networkers now commonly call CSMA/CD – carrier sense multiple access with collision detection.  Metcalfe also coined the term Ethernet, which is still commonly used.
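
The CSMA/CD procedure can be pictured with a short sketch.  The Python fragment below is only an illustration of the control flow just described (listen before transmitting, back off and retry after a collision); the helper functions, probabilities, and retry limit are assumptions made for the example, not PARC's or the IEEE's actual implementation:

    import random
    import time

    MAX_ATTEMPTS = 16          # 802.3 stations give up after 16 attempts
    SLOT_TIME_US = 51.2        # slot time for 10 Mbps Ethernet, in microseconds

    def medium_idle():
        """Stand-in for carrier sense; a real NIC samples the wire."""
        return random.random() < 0.9

    def collision_detected():
        """Stand-in for collision detection while transmitting."""
        return random.random() < 0.1

    def send_frame(frame):
        for attempt in range(1, MAX_ATTEMPTS + 1):
            while not medium_idle():            # carrier sense: wait for a quiet medium
                pass
            if not collision_detected():        # transmit and listen for a collision
                return True                     # frame delivered
            # Binary exponential backoff: wait a random number of slot times,
            # chosen from an interval that doubles after each collision (exponent capped at 10).
            slots = random.randint(0, 2 ** min(attempt, 10) - 1)
            wait_us = slots * SLOT_TIME_US
            print(f"collision on attempt {attempt}, backing off {wait_us:.1f} us")
            time.sleep(wait_us / 1e6)
        return False                            # excessive collisions: give up

    send_frame(b"example payload")

The doubling backoff interval is what lets a busy segment sort itself out, but it is also why a heavily loaded Ethernet degrades – a point that matters in the Token Ring comparison below.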

 

In 1979 Xerox released the Ethernet standard as version 1 as part of the Digital, Intel, and Xerox consortium.  As a result, the base Ethernet is often called DIX Ethernet.  In 1982 a revised version 2 of DIX Ethernet was released, specifying an operating speed of 10 Mbps.

In the meantime the Institute of Electrical and Electronics Engineers (IEEE) convened a parallel project in February 1980.  They called it Project 802, and to this day IEEE 802-series standards govern LAN access protocols.  The IEEE established the 802.3 subcommittee in June 1981 with the mission of producing an internationally recognized standard for the Ethernet protocol.  The DIX v. 2 Ethernet frame is the standard frame type used in most LANs today.  The existence of three 802.3 frame variants, which are not widely used, is a source of endless confusion for those entering the networking business.

 

At about this time Metcalfe and some others formed the Computer, Communication, and Compatibility Corporation, now known as 3Com Corp.  They produced the first commercial TCP/IP stack for UNIX in 1980, and in 1982 they introduced the first 3Com EtherLink Ethernet card for IBM PCs using the ISA bus.  Since that time Ethernet networking has raced forward, moving from coaxial cable to hubs and unshielded twisted-pair wiring, to 100 Mbps architectures, to largely switched, full-duplex networks, to Gigabit Ethernet, with 10 Gbps networks emerging as a future technology for the network core.

 

Token Ring:

 

At the same time Ethernet was being developed in the academic/ARPA domain, computer giant IBM was developing a competing technology called Token Ring.  Rather than depending on CSMA/CD, it used a token access method to control use of the physical medium.  A token is passed around a ring-topology network, and a station desiring to transmit data seizes the token, modifies it, and attaches its data.  This technology scaled much better than Ethernet under load: on an Ethernet, the larger the network, the higher the probability of multiple simultaneous attempts to access the medium, which results in collisions and retransmissions that only exacerbate the problem, whereas token passing avoids collisions altogether.  Token Ring also worked with IBM mainframes of the day while Ethernet did not.  Therefore, large corporations and government agencies with IBM mainframe installations and deep pockets (do they go hand-in-hand?) tended to install Token Ring networks in the mid- to late-1980s and early 1990s.  However, the hardware was always more expensive than Ethernet hardware and used cumbersome, proprietary connectors, so Token Ring is fading from the network scene.  Even so, a large number of Token Ring networks are still installed, and a networking professional must be aware of Token Ring.
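
The token access method can be pictured with an equally small sketch.  This is a toy model of the idea described above, not IBM's 802.5 implementation; the station names and queued frames are made up for illustration:

    # Toy model of token passing: a single token circulates, and only the
    # station holding it may transmit, so two stations never send at once.
    stations = ["A", "B", "C", "D"]                              # ring order (arbitrary)
    pending = {"A": ["frame-1"], "C": ["frame-2", "frame-3"]}    # frames queued per station

    def circulate(rotations=3):
        for _ in range(rotations):
            for station in stations:            # the token visits each station in turn
                queue = pending.get(station, [])
                if queue:
                    frame = queue.pop(0)        # seize the token and transmit one frame
                    print(f"{station} holds the token and sends {frame}")
                else:
                    print(f"{station} passes the token")

    circulate()

Because only the token holder may transmit, simultaneous transmissions – and hence collisions – cannot occur no matter how many stations join the ring, which is the scaling advantage described above.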

 

Transport Technology:

 

TCP/IP:

 

The ARPANET grew from four computers in 1969 to a few more than 100 in 1980 (these were computers directly connected over a telecommunications backbone; the count does not include the burgeoning LANs on corporate campuses such as PARC and at various universities).  The original protocol governing communications between these nodes was the Network Control Protocol (NCP), which ARPANET hosts began using in 1970.  This set of protocols did not scale well as the ARPANET began to grow, and it tended to be unstable when the transmission lines picked up noise.  Therefore, various researchers began work almost immediately on a replacement protocol.

 

In 1974 Vinton Cerf and Robert Kahn proposed a new protocol for addressing hosts and for finding those hosts on the network.  They called this protocol the Internet Protocol (IP).  They also described a companion protocol for setting up network conversations and controlling the quality of the transmitted data, in the sense that the end nodes could agree on communication parameters, confirm that all packets had been received, and reassemble them correctly into the original message.  They called this protocol the Transmission Control Protocol (TCP).  Together, these two protocols provide the backbone protocol suite of most modern networking – TCP/IP.  A second upper-layer protocol, the User Datagram Protocol (UDP), was later added to accommodate quick exchanges that do not require more than a single packet.  A variety of other protocols were added through the years to support the primary functionality of these three, and they will provide the grist for many of the talks in this course.
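
The division of labor between TCP and UDP is easiest to see through the standard sockets interface.  The sketch below is a minimal illustration, not part of the original protocol work; the host address and port numbers are placeholder values, and the commented-out calls assume a server is listening on them:

    import socket

    HOST, TCP_PORT, UDP_PORT = "127.0.0.1", 9000, 9001   # assumed local test values

    def tcp_echo_once(payload: bytes) -> bytes:
        # TCP: a connection is set up first, and the stack handles ordering,
        # acknowledgement, and retransmission of the byte stream.
        with socket.create_connection((HOST, TCP_PORT)) as s:
            s.sendall(payload)
            return s.recv(4096)

    def udp_send_once(payload: bytes) -> None:
        # UDP: a single datagram is handed to IP with no connection setup and
        # no delivery guarantee -- suited to the short, one-packet exchanges
        # mentioned above.
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.sendto(payload, (HOST, UDP_PORT))

    # Example use (assumes servers are listening on the ports above):
    # print(tcp_echo_once(b"hello over TCP"))
    # udp_send_once(b"hello over UDP")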

 

Starting in 1980, researchers worked nearly three years to convert the ARPANET from NCP to TCP/IP.  During this period the ARPANET added 200 more nodes to reach a total of 300.  In 1983 the DoD issued a policy statement requiring TCP/IP to be implemented in military networks, and they funded BBN to develop a TCP/IP stack that could be grafted onto the UNIX operating system.  They also funded UC Berkeley to include TCP/IP in the BSD 4.2 operating system at about the same time.

 

At this point the basic elements of what we now know as the internet were in place.  However, we will shortly consider the administrative infrastructure that was required to permit the internet to support the unprecedented growth that took place in the early 1990s.  This sustained growth resulted in large part from the structured way in which IP network addresses are assigned.

 

NetBEUI:

 

While various academic and DoD-funded companies were working on the infant internet, commercial companies were beginning to examine the market for LAN products.  As the largest computer company in the world, IBM was one of the obvious competitors.  They developed a lightweight network function/service naming protocol called NetBIOS (Network Basic Input/Output System).  They also developed a simple mechanism for sending an application’s data to the access protocol, called the NetBIOS Extended User Interface (NetBEUI), and it became the basic networking protocol for the IBM PC and for IBM mainframe communication.  Unfortunately, NetBEUI specified no information that could be linked to a globally significant address.  It depended on NetBIOS names and name-derived services, and on broadcasting those calls onto the network – the network equivalent of shouting at the waiter for service.  As a result of these design restrictions, administrators and engineers could link multiple networks only if they contained no overlapping name spaces; in effect, every computer on the network needed a unique name for each service it offered.  In addition, as administrators added more computers to the network, the volume of service “shouts” increased until no other traffic could get on the wire.  Today NetBEUI is rarely encountered.

 

IPX/SPX:

 

Novell produced a much more successful product in the LAN management arena, called NetWare.  In all of its variations it used almost every variation of the Ethernet frame available, and it also ran very successfully on Token Ring networks.  It used the Internetwork Packet Exchange (IPX) protocol to provide distinct network addresses for different networks.  Therefore, a device could be configured to forward packets from one network to another, and traffic from separate networks could be routed between them by ensuring that the networks were connected by such a device.  Either a computer with multiple network cards or a purpose-built router could perform this function.  The Novell protocol stack also used the Sequenced Packet Exchange (SPX) protocol to ensure that two end stations could coordinate the packets in a communication session so that receive buffers were not overrun, packets were not lost, packets were reassembled correctly, and so on.

 

NetWare managed this transport suite with a proprietary management suite called the NetWare Core Protocols (NCP, not to be confused with the Network Control Protocol mentioned previously).  This protocol suite proved very effective in the early days of LAN computing, and, to this day, various estimates place up to 50% of current networks as legacy NetWare LANs.  Unfortunately, it used a large number of broadcast functions to maintain management knowledge of the LAN and its various services.  In addition, the network numbering scheme did not enforce orderly numbering, so routing between large numbers of LANs tended to be inefficient.  As a result, starting with NetWare 5, Novell recommended installing the TCP/IP stack instead.  Today IPX/SPX is usually seen only in legacy networks, but for certain uses it can still be invoked as the protocol stack even for Microsoft networks.

 

The Internet:

 

In January 1983 the internet really came into existence.  An unclassified military network (MILNET, later the Defense Data Network or DDN) was split from the ARPANET that year, yet by the end of 1984 the ARPANET had grown to more than 1000 hosts.  In 18 months approximately three times as many hosts had been added to the ARPANET as had been added in the preceding 14 years.  At about the same time (actually 1982) a TCP/IP exterior gateway protocol of the same name (abbreviated EGP) was defined to link local LANs to the ARPANET core.  Local routers that connected a LAN to the internet used what is called a default route: essentially, packets addressed to any host that the router did not recognize as local were handed to a default ARPANET router.  Inside the ARPANET core, the routers needed to know the location of, or route to, every network.  This is called default-free routing.
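
The difference between a default route and default-free routing can be made concrete with a small lookup sketch.  The prefixes and next-hop labels below are invented for illustration, and the lookup shown is the modern longest-prefix match, used here simply to show how a default route catches everything the table does not list:

    import ipaddress

    # A campus router keeps a few local routes plus a default route (0.0.0.0/0)
    # pointing toward the core; a default-free core router has no such entry
    # and must carry an explicit route for every reachable network.
    campus_routes = {
        ipaddress.ip_network("192.0.2.0/24"):    "local LAN",
        ipaddress.ip_network("198.51.100.0/24"): "lab LAN",
        ipaddress.ip_network("0.0.0.0/0"):       "default -> ARPANET gateway",
    }

    def next_hop(table, destination):
        dest = ipaddress.ip_address(destination)
        # Longest-prefix match: the most specific route containing the address
        # wins; the /0 default matches everything, so unknown hosts fall through to it.
        matches = [net for net in table if dest in net]
        best = max(matches, key=lambda net: net.prefixlen)
        return table[best]

    print(next_hop(campus_routes, "192.0.2.17"))    # local LAN
    print(next_hop(campus_routes, "203.0.113.5"))   # default -> ARPANET gateway

A default-free core router has no 0.0.0.0/0 entry, so it must know a route for every network – exactly the burden described above for the ARPANET core.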

 

In the mid-1980s the National Science Foundation (NSF) became involved in designing the evolving ARPANET.  They developed a three-tier network, the NSFNET, with a default-free core, a set of mid-level networks for regional distribution of traffic, and a much larger set of campus-access networks.  Because it was an NSF project, the newer network still reflected a strong academic bias: it was developed to connect researchers across the nation with the system of six NSF supercomputer facilities that had been built.  These networks essentially operated over 56 kbps links leased from the telephone system.  Carnegie Mellon University maintained both an ARPANET and an NSFNET router installation, and thus, when it interconnected the two networks, it became the first internet exchange.

 

In 1990 the DoD dissolved its management of the ARPANET, and its facilities were assimilated into the NSFNET.  Over this period the links connecting the network nodes were upgraded to T1 links (1.544 Mbps), and the routing facilities were upgraded from Digital LSI-11 computers to IBM RT computers.  Based on a proposal from Merit, Inc., a company associated with the University of Michigan, these upgraded facilities were centralized into 13 regional nodal switching subsystems (NSSs) scattered across the U.S.  With the closure of the ARPANET a fourteenth node was added at Atlanta.  By the time of the dissolution of the ARPANET the internet comprised more than 300,000 hosts and 2,000 networks.  Traffic on the backbone increased sufficiently that by 1993 the router connections had been upgraded to T3 (44.7 Mbps) links.  At about this time commercial routers such as Cisco’s AGS line began to replace computer-based routers running on IBM RS/6000s.

 

Two Federal Internet Exchanges (FIX East at College Park, MD, and FIX West at NASA Ames, CA) interconnected the NSFNET with government networks run by the DOE, NASA, and DoD.  Access to this set of networks was provided via several associated commercial vendors such as UUNET, PSI, and ANS, as well as the “Big Four” federal networks: NSFNET, NSI, ESnet, and DDN.  Previously, in 1991, several startup providers had established the Commercial Internet Exchange (CIX) to interconnect their customers.  The NSFNET and the other federal networks connected to the growing set of commercial internet connections via a third major exchange, MAE-East, operated by MFS (later part of MCI WorldCom).

 

In 1990 Merit joined IBM and MCI to form Advanced Network and Services (ANS), which managed the NSFNET under government contract.  In 1993 the internet was fully privatized.  Four network access points (NAPs) were established:

·         MFS (now MCI-Worldcom) “MAE-East” (Washington, D.C.)

·         Sprint NAP (Pennsauken, NJ)

·         Ameritech NAP (Chicago, IL)

·         PACBELL NAP (San Francisco, CA)

As the internet has continued to develop, the NAPs have become less central to interconnecting the major network service providers (NSPs) such as AT&T, MCI, and Sprint.  They have been replaced in many instances by privately operated internet exchanges such as the MAE sites operated by WorldCom (see http://www.mae.net).  At these sites users (internet service providers, or ISPs) must manually configure the routes that they have contracted with other ISPs.  The ISPs must also provide the trunk facilities to reach the exchange, generally by leasing line capacity from WorldCom.

 

In 1993 MCI also won a contract to implement a very-high-speed backbone (the vBNS) to interconnect the NSF-funded supercomputing facilities and act as a test platform for newer backbone technologies.  It is used as a testbed for the Internet2 initiative to provide the next generation of internet facilities.

 

Internet Management:

 

At the outset of the internet, the government funding agency (ARPA) involved a wide variety of academic and independent experts in the management of the internet.  This was done through the Internet Architecture Board (IAB).  The IAB oversees the Internet Engineering Task Force (IETF), which is the primary protocol promulgation body for the internet.  It publishes protocols in the form of Requests for Comments (RFCs).  The RFCs proceed through several states in the course of their lifetime:

·         Experimental: not on the standards track; ongoing research

·         Proposed: on the standards track; revision likely

·         Draft: under consideration as a standard; testing and comment required

·         Standard: an official protocol

·         Informational: published by vendors or another standards group to inform the community

·         Historic: superseded or no longer useful

The protocols have a status that specifies how they are applied:

·         Required

·         Recommended

·         Elective

·         Limited Use

·         Not Recommended

The activities of the IETF are reviewed by the Internet Engineering Steering Group (IESG).  In 1992, as the internet was being privatized, the Internet Society was formed, and the IAB and its subfunctions were assimilated into the new organization.

 

For many years the allocation of internet resources such as namespaces and IP addresses was managed by the DDN Network Information Center (DDN NIC).  When the internet was handed over to private management beginning in 1993, the NSF contracted Network Solutions, Inc. (NSI) to provide these services.  NSI formed the InterNIC, which managed this function until 1998.  When that contract expired, the InterNIC spun off a non-profit registration center called the American Registry for Internet Numbers (ARIN), which allocates IP addresses in the Americas and Africa.  Similar organizations were defined for Europe and the Pacific rim countries: the Reseaux IP Europeens Network Coordination Centre (RIPE NCC) for Europe and the Asia-Pacific Network Information Centre (APNIC) for the Asia-Pacific region.  The Internet Assigned Numbers Authority (IANA, formerly housed at the University of Southern California Information Sciences Institute) manages the allocation of IP address blocks to these organizations, but it does not actively assign IP addresses to individual organizations.  The assignment of IP addresses is increasingly decentralized, as organizations such as ARIN assign blocks of numbers to ISPs, which, in turn, allocate these addresses to their customers.  Today, very few non-ISP organizations acquire their own provider-independent IP address space.  Also in the last two years, name registration responsibilities have been greatly decentralized, with most major ISPs and many minor organizations offering these services.

 

In the End:

 

Two important pieces of information to remember about the internet are that

1)      network access protocols, especially for internal LANs, are managed by the IEEE, primarily the 802 group, and

2)      internet protocols are managed by the IETF.

A wide variety of bodies provide protocols governing the interconnection of the LAN with the internet and the wide area network, or WAN.  These protocols are often managed by international standards bodies such as the International Telecommunication Union (ITU) and are the topics of more advanced courses such as the Cisco CCNP-level training.