When I visited UCLA’s Boelter Hall last Wednesday, I took the stairs to the third floor, looking for Room 3420. And then I walked right by it. From the hallway, it’s a pretty unassuming place.
But something monumental happened there 50 years ago today. A graduate student named Charley Kline sat at an ITT Teletype terminal and sent the first digital data transmission to Bill Duvall, a scientist sitting at another computer at the Stanford Research Institute (now known as SRI International), 350 miles away at the other end of California. It was the beginning of ARPANET, the small network of academic computers that was the precursor to the internet.
At the time, this brief act of data transfer wasn’t anything like a shot heard round the world. Even Kline and Duvall didn’t appreciate the full significance of what they’d accomplished: “I don’t remember anything specifically memorable about that night, and I certainly didn’t realize that what we had done was anything special at the time,” says Kline. But their communications link was proof of the feasibility of the concepts that eventually enabled the distribution of virtually all the world’s information to anybody with a computer.
Today, everything from our smartphones to our garage door openers is a node on the network that descended from the one Kline and Duvall tested that day. How they and others established the original rules for shuttling bytes around the world is a tale worth sharing, especially when they tell it themselves.
“That better never happen again”
Even back in 1969, many people had helped set the stage for Kline and Duvall’s breakthrough on the night of October 29, including UCLA professor Leonard Kleinrock, with whom I spoke, along with Kline and Duvall, as the 50th anniversary approached. Kleinrock, who is still at UCLA today, told me that ARPANET was, in a sense, a child of the Cold War. When the Soviet Union’s Sputnik 1 satellite blinked across U.S. skies in October 1957, it sent shockwaves through both the scientific community and the political establishment.
Sputnik’s launch “caught the United States with its pants down, and Eisenhower said, ‘That better never happen again,'” recounts Kleinrock when I spoke with him in Room 3420, which is now known as the Kleinrock Internet History Center. “So in January ’58, he formed the Advanced Research Projects Agency (ARPA) within the Department of Defense to support STEM—science, technology, engineering, and mathematics—in United States universities [and] research labs.”
By the mid-1960s, ARPA had provided funding for large computers used by researchers in universities and think tanks around the country. The ARPA official in charge of the funding was Bob Taylor, a key figure in computing history who later ran Xerox’s PARC lab. At ARPA, he had become painfully aware that all those computers spoke different languages and couldn’t talk to each other.
Taylor hated the fact that he had to have separate terminals—each with its own leased communication line—to connect with various remote research computers. His office was full of Teletypes.
“I said, oh, man, it’s obvious what to do. If you have these three terminals, there ought to be one terminal that goes anywhere you want to go,” Taylor told the New York Times’s John Markoff in 1999. “That idea is the ARPANET.”
Taylor had an even more practical reason to crave a network. He was regularly getting requests from researchers around the country for funds to buy bigger and better mainframe computers. He knew that much of the computing power the government was funding was being wasted, explains Kleinrock. When a researcher maxed out system resources at SRI in California, for example, another mainframe at MIT might be sitting idle, perhaps after regular business hours on the East Coast.
Or it might be that a mainframe at one site contained software that would be useful in other places, such as the pioneering ARPA-funded graphics software developed at the University of Utah. Without a network, “if I’m here at UCLA and I want to do graphics, I’m going to go to ARPA: ‘Please buy me that machine so I can have it too,’” says Kleinrock. “Everybody wanted everything.” By 1966, ARPA had grown weary of such requests.
The problem, Taylor’s computer scientists back at the Pentagon explained, was that all those research computers ran different code sets. There was no common networking language, or protocol, by which computers located far from each other could connect to share content or resources.
That soon changed. Taylor talked ARPA director Charles Herzfeld into allocating a million dollars for R&D into a new network to connect the computers at MIT, UCLA, SRI, and many other sites. Herzfeld got the money by redirecting it from a ballistic missile research program into the ARPA budget. The cost was justified within DoD circles by saying that ARPA was tasked with building a “survivable” network that wouldn’t go down if any specific part was destroyed, perhaps in a nuclear attack.
ARPA brought in Larry Roberts, an old MIT buddy of Kleinrock’s, to manage the ARPANET project. Roberts turned to the work of the British computer scientist Donald Davies and the American Paul Baran for the packet-switching techniques they had independently invented.
And Roberts soon called on Kleinrock to work on the theoretical aspect of the project. Kleinrock had been thinking about the problem of data networking since 1962, when he was still at MIT.
“At MIT, as a graduate student, I decided to address the following problem: I was surrounded by computers and they couldn’t talk to each other, and I knew sooner or later they’d have to,” says Kleinrock. “Nobody was looking at that problem. They were all studying information theory and coding theory.”
Kleinrock’s major contribution to ARPANET was his application of queuing theory to data networks. Back then, communication links were analog lines you could rent from AT&T. They were circuit-switched lines, meaning that a central switch set up a dedicated connection between a sender and a receiver, whether they were two people engaged in a phone call or a terminal connecting to a distant mainframe. There was a lot of downtime on those circuits when words weren’t being said or bits weren’t being transferred.
Kleinrock felt this was a wildly inefficient way to set up connections between computers. His queuing-theory analysis showed that data packets from different communications sessions could share links dynamically: while one stream of packets pauses, another, unrelated one can use the same link. The packets comprising one communication session (say, sending an email) might reach the receiver over four different routes. If one route was disabled, the network would send the packets through another.
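The link-sharing idea can be sketched in a few lines of Python. This is an illustrative toy, not ARPANET’s actual mechanism: the session IDs, the four-character chunk size, and the round-robin scheduling are all assumptions made for the demo.

```python
from collections import defaultdict

def packetize(session_id, message, size=4):
    """Split a message into (session, sequence number, chunk) packets."""
    return [(session_id, seq, message[i:i + size])
            for seq, i in enumerate(range(0, len(message), size))]

def interleave(*streams):
    """Round-robin packets from several sessions onto one shared link."""
    link, streams = [], [list(s) for s in streams]
    while any(streams):
        for s in streams:
            if s:
                link.append(s.pop(0))
    return link

def reassemble(link):
    """Group packets by session and rejoin each message in sequence order."""
    sessions = defaultdict(list)
    for session_id, seq, chunk in link:
        sessions[session_id].append((seq, chunk))
    return {sid: "".join(c for _, c in sorted(chunks))
            for sid, chunks in sessions.items()}

# Two unrelated sessions share one link, yet each message arrives intact.
link = interleave(packetize("A", "LOGIN PLEASE"), packetize("B", "HELLO SRI"))
print(reassemble(link))
```

The point of the sketch is the economics Kleinrock saw: neither session needs a dedicated circuit, because idle moments in one stream are filled by packets from the other.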
During our conversation in Room 3420, Kleinrock showed me his dissertation on all this, sitting in a red binder on one of the tables. He published his research in book form in 1964.
In this new kind of network, the movement of the data was directed not by a central switch but by devices at the network nodes. In 1969, these network devices were called IMPs, or Interface Message Processors. Each machine was a ruggedized, modified version of a Honeywell DDP-516 computer that contained specialized hardware for network control.
The original IMP was delivered to Kleinrock at UCLA on Labor Day in 1969. Today, it stands like a monolith in the corner of Room 3420 at Boelter Hall, where it has been restored to look as it did when it handled that first transmission 50 years ago.
“15-hour days every day”
In the fall of 1969, Charley Kline was a graduate student trying to finish his degree in engineering. He was one of a group of graduate students who moved onto the ARPANET project after Kleinrock received government funding to help develop the network. In August, Kline and others on the project worked diligently to prepare the software on UCLA’s Sigma 7 mainframe computer to connect to the IMP. Since there was no standard interface between a computer and an IMP (Bob Metcalfe and David Boggs wouldn’t invent Ethernet until 1973), the group built a 15-foot connection cable from scratch. Now all they needed was another computer to communicate with.
SRI was the second research site to get an IMP, in early October. For Bill Duvall, that kicked off a period of intense preparation to get ready for the first transmission from UCLA to SRI’s SDS 940. The UCLA and SRI teams had committed to creating the first successful transmission by October 31, he told me.
“I basically jumped in and designed and implemented the software, and it was one of those intense things that happen in software, which is 15-hour days every day for however long it takes,” he remembers.
As Halloween neared, the pace of the work at both UCLA and SRI ratcheted up. They were ready to go before the deadline arrived.
“Now we had two nodes, and we leased this line from AT&T at the blazing speed of 50,000 bits per second,” says Kleinrock. “So now we’re ready to do it, to log in.”
“The first test that we scheduled was on October 29,” adds Duvall. “It was pre-alpha at that point. And, you know, we thought, well, okay, that’ll give us three testing days to get this up and running.”
On the night of the 29th, Kline was working late. So was Duvall at SRI. The two had planned to attempt the first ARPANET message at night, so that nobody’s work would be affected should one of the computers crash. In Room 3420, Kline sat alone in front of his terminal, an ITT Teletype that was connected to the computer.
Here’s what happened that night, complete with one of computing’s more historic crashes, in Kline and Duvall’s own words:
Kline: I was logged into the Sigma 7 operating system and then I [ran] the program that I had written, which allowed me to then tell that program to try to send packets to SRI. Meanwhile, Bill Duvall at SRI had run his program to accept incoming connections. And we were also on the phone with each other.
We had a few problems in the beginning. I had a problem with code translation, because our system used EBCDIC (Extended Binary Coded Decimal Interchange Code), which was the standard that IBM used and also the standard that the Sigma 7 used. But the SRI computer used ASCII (American Standard Code for Information Interchange), which also became the standard of the ARPANET and pretty much the world.
So after we got a few of those little bugs worked out, we tried to actually log in . . . and you did that by typing the word “login.” That [SRI] system had been programmed to be smart, so that it recognized valid commands. And if you had it in the advanced mode, when you typed the “L” and the “O” and the “G,” it recognized that you must be meaning to type “LOGIN” and it would type the “I N” for you. So I typed the L.
I was on the phone [with Duvall at SRI] and said, “Did you get the L?” And he said, “Yeah.” I saw the “L” come back and print on my terminal. And I typed the “O” and he said, “Got the O.” And I typed the G, and he said, “Wait a minute, my system crashed.”
Duvall: After a couple of letters, there was a buffer overflow issue. That was a very simple thing to detect and fix, and basically it came right back up and it worked. The only reason I mentioned that is that, in my view, this whole thing is not about that. This is about the fact that the ARPANET works.
Kline: He had a minor bug, and it took him 20 minutes or so to fix it and try it again. He had to make a change in some software, and I had to double-check some of mine. He called me back, and we tried it again. We started over: I typed the L and the O and the G, and this time I got back the “I N.”
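The code-translation snag Kline describes boils down to the same letters having different byte values in the two code sets, so a gateway between machines has to translate. Here’s a small sketch in Python; the “cp037” codec is a common EBCDIC variant used purely for illustration, and may not be the exact table the Sigma 7 used.

```python
text = "LOGIN"

# The same five characters as stored on an EBCDIC machine vs. an ASCII one.
ebcdic_bytes = text.encode("cp037")   # EBCDIC (cp037 variant, for illustration)
ascii_bytes = text.encode("ascii")    # ASCII

print(ebcdic_bytes.hex())  # d3d6c7c9d5 -- "L" is 0xD3 in EBCDIC
print(ascii_bytes.hex())   # 4c4f47494e -- "L" is 0x4C in ASCII

# Translating: decode with the sender's code set, re-encode with the receiver's.
received = ebcdic_bytes.decode("cp037").encode("ascii")
assert received == ascii_bytes
```

Send the raw EBCDIC bytes without that translation step and the ASCII machine on the other end sees gibberish, which is exactly the class of bug Kline had to work out before “LOGIN” could cross the link.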
“Just engineers working”
The initial connection happened at 10:30 p.m. Pacific Time. After that, Kline was able to sign into an account on the SRI computer that Duvall had created for him and start running programs, using the system resources of a computer 350 miles up the coast from UCLA. In a small way, ARPANET’s mission had been accomplished.
“By that time, it was getting late, so I went home,” Kline told me.
The team knew it had succeeded, but didn’t dwell on the magnitude of its accomplishment: “It was just engineers working,” says Kleinrock. Duvall saw the October 29 connection as just one step in the larger challenge of networking computers. Where Kleinrock’s work focused on how data packets could be directed around a network, the SRI researchers had worked on how a packet is constructed and how the data inside it is organized.
“This was basically where the paradigm that we see now on the internet with linked documents and things like that was first developed,” Duvall says. “We always envisioned that we would have a series of interconnected workstations and interconnected people. We called them knowledge centers in those days, because we were academically oriented.”
Within a few weeks of Kline and Duvall’s first successful communication, the ARPA network extended to computers at UC Santa Barbara and the University of Utah. ARPANET grew from there, through the 1970s and much of the 1980s, connecting more and more government and academic computers. Later, the concepts developed in ARPANET were applied to the internet we know today.
Back in 1969, a UCLA press release touted the new ARPANET. “As of now, computer networks are still in their infancy,” it quoted Kleinrock as saying. “But as they grow up and become more sophisticated, we will probably see the spread of ‘computer utilities,’ which, like present electric and telephone utilities, will service individual homes and offices across the country.”
That concept sounds a little quaint now that data networks reach far past homes and offices and down to the smallest internet-of-things devices. But Kleinrock’s statement about “computer utilities” was remarkably prescient, especially given that the modern, commercialized internet did not come into being until decades later. The idea remains fresh in 2019, even as computing resources are well on their way to being as ubiquitous and easy to take for granted as electricity.
Maybe anniversaries like this one are good opportunities not only to remember how we got to this highly connected era, but also to look out into the future, as Kleinrock did, and think about where the network might be headed next.