
How NASA gave birth to modern computing—and gets no credit for it

The space agency and MIT’s bet on the integrated circuit kicked off the digital age.

[Source Photos: Flickr user Steve Jurvetson (DSKY), Apple (Screen)]

This is the 13th in an exclusive series of 50 articles, one published each day until July 20, exploring the 50th anniversary of the first-ever Moon landing. You can check out 50 Days to the Moon here every day. 


The computer that flew the astronauts to the Moon—the Apollo guidance computer—was a marvel of the 1960s: small, fast, nimble, and designed for the people who were using it, astronauts flying spaceships.

Each Apollo mission had two identical computers, one in the command module and one in the lunar module, each programmed for the very different missions of those spacecraft. They could handle 85,000 instructions a second, which sounds pretty impressive until you realize that an iPhone X can handle 5 trillion. So it would take the Apollo flight computer 681 days to do the work your iPhone can do in one second.
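That 681-day figure is simple arithmetic on the two instruction rates quoted above, as a quick back-of-the-envelope check shows:

```python
# Back-of-the-envelope check of the comparison above, using the article's
# figures: 85,000 instructions/sec for the Apollo guidance computer,
# 5 trillion/sec for an iPhone X.
apollo_ips = 85_000
iphone_ips = 5_000_000_000_000

seconds_needed = iphone_ips / apollo_ips  # Apollo-seconds per iPhone-second
days_needed = seconds_needed / 86_400     # seconds in a day

print(round(days_needed))  # roughly 681 days
```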

But if it was basic, the Apollo computer was not in any way primitive. Just the opposite.

It also represented a huge leap for NASA. The risk lay in the cutting-edge technology that MIT used to squeeze as much power and speed as possible into the computer's slim, briefcase-sized case: one of the boldest and riskiest bets of the whole Moon mission, and one that few people knew about or appreciated at the time.

The computer was designed and programmed by a division of MIT known as the Instrumentation Lab (which has since become an independent R&D lab, the Charles Stark Draper Lab).

The DSKY input module (right) shown alongside the Apollo Guidance Computer’s main casing (left). [Photo: NASA/Wiki Commons]
The MIT Instrumentation Lab tried to design the Apollo computer using transistors, which in the early 1960s were well-settled technology—reliable, understandable, relatively inexpensive. But 15 months into the design effort, it became clear that transistors alone couldn’t give the astronauts the computing power they needed to fly to the Moon. In November 1962, MIT’s engineers got NASA’s permission to use a very new technology: integrated circuits. Computer chips.


Today, computer chips run the world. They are as important as concrete, or electricity itself. They are faultlessly dependable. But in 1962, integrated circuits were an all-new technology, and as with most new tech, they were flaky and costly. As part of its early evaluation, MIT bought 64 integrated circuits from Texas Instruments. The price was $1,000 each, or $9,000 apiece in 2019 dollars. Each had six transistors.

But integrated circuits would change what it was possible for the Apollo computer to do. They would increase its speed by 2.5 times while allowing a reduction in space of 40% (the computer didn’t get smaller; it just got packed with more capacity). The Apollo computers were the most sophisticated general-purpose computers of their moment. They took in data from dozens of sensors, from radar, directly from Mission Control. They kept the spaceships oriented and on course. They were, in fact, capable of flying the whole mission on autopilot, while also telling the astronauts what was going on in real time.

Flatpack integrated circuits in the Apollo guidance computer. [Photo: NASA/Wiki Commons]
MIT did two things to solve the problems of those first integrated circuits. Working with early chip companies—Fairchild Semiconductor, Texas Instruments, Philco—it drove the manufacturing quality of computer chips up by a factor of 1,000. MIT had a battery of a dozen acceptance tests for the computer chips it bought, and if even one chip in a lot of 1,000 failed one test, MIT packed up the whole lot and sent it back.

And MIT, on behalf of NASA, bought so many of the early chips that it drove the price down dramatically: from $1,000 a chip in that first order to $15 a chip in 1963, when MIT was ordering lots of 3,000. By 1969, those basic chips cost $1.58 each, and yet they had significantly more capability, and a lot more reliability, than the 1963 version.

MIT and NASA were able to do all that because, year after year, Apollo was the No. 1 customer for computer chips in the world.

In 1962, the U.S. government bought 100% of integrated circuit production.


In 1963, the U.S. government bought 85%.

In 1964, 85%.

In 1965, 72%.

Even as the share dropped, total purchasing soared. The 1965 volume was 20 times what it had been just three years earlier.

Inside the government, the chips' only users were NASA and the Air Force's Minuteman missile, a relatively small project compared with the Apollo computers.

Without knowing it, the world was witnessing the birth of “Moore’s Law,” the driving idea of the computer world that the capability of computer chips would double every two years, even as the cost came down.
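Moore's Law as stated here reduces to a one-line formula: capability multiplies by two for every two years elapsed. A minimal sketch, seeded with the six-transistor chips MIT bought in 1962 (the ten-year projection is just the formula at work, not a figure from the article):

```python
# Moore's Law as described above: capability doubles every two years.
def capability(start: float, years: float, doubling_period: float = 2) -> float:
    """Project capability forward from a starting value."""
    return start * 2 ** (years / doubling_period)

# Six transistors per chip in 1962; ten years of doubling every two years:
print(capability(6, 10))  # 6 * 2**5 = 192
```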


In fact, Fairchild Semiconductor’s Gordon Moore wrote the paper outlining Moore’s Law in 1965, when NASA had been the No. 1 buyer of computer chips in the world for four years, and the only user of integrated circuits that Moore cites by name in that paper is “Apollo, for manned Moon flight.” Moore would go on to cofound and lead Intel, and help drive the digital revolution into all elements of society.

What was Moore doing when he conceived of Moore’s Law? He was director of research and development. Fairchild’s most significant customer: MIT’s Apollo computer.

It’s a part of the history of modern computing that Silicon Valley manages to skip over most of the time, but MIT, NASA, and the race to the Moon laid the very foundation of the digital revolution, of the world we all live in.

Sure, we'd have iPhones and word processors even if we hadn't flown to the Moon, and Jeff Bezos probably would still have founded Amazon.com.

But just because something would have happened anyway doesn’t mean you take credit from those who drove it. Apollo dramatically accelerated the pace of the digital revolution by transforming the technology at the heart of it: the integrated circuit.

We actually have a perfect test case of how important MIT’s decision to use computer chips in 1962 turned out to be, because as NASA was choosing computer chips for its flagship effort, the world’s most important computer company rejected them for its flagship project.


In the early 1960s, IBM was preparing to introduce the IBM 360 series of mainframe computers. IBM then had two-thirds of the U.S. market in computers. The 360 was designed to break open general-purpose computing for businesses, to let companies use computers in all the ways they could imagine. It was a huge bet: IBM’s revenue at the time was $2.5 billion annually, and the 360 cost $5 billion to develop.

IBM looked hard at using integrated circuits, but it decided they were too risky, too new, too undeveloped. The IBM 360 series was a precise contemporary of the Apollo computers: It was announced in 1964, and customers started buying it in 1965. Indeed, it was a huge hit; the business scholar Jim Collins ranks the IBM 360's impact on the U.S. economy with that of the Ford Model T and Boeing's first passenger jet, the 707.

Among the customers for the IBM 360: MIT and NASA. The computer was used to write software for the Apollo computer, and the IBM 360 was the core of the computing power in Mission Control during Apollo. Just without integrated circuits.

Integrated circuits might have been the future, but not even the biggest, most powerful computer company in the world was ready to use them. That was left to MIT’s cutting-edge spaceflight computer.

There is a faint halo of disappointment around Apollo: We flew to the Moon in 1969, but we don’t live in the world of The Jetsons or Star Trek 50 years later. So what, exactly, did we get out of going to the Moon? The judgment is that it was a spectacular one-off.

But in fact, we do live in the world Apollo helped create. The pioneering computer chips that flew to the Moon created the market for the computer chips that did everything else. Apollo didn't give us the Space Age, but it helped usher in the Digital Age, which may be more important.


The Apollo computer’s sleek featureless exterior case conceals one more amazing quality: It was assembled by hand, in a way unlike any computer before or since. Which we’ll explain in the next installment.


Charles Fishman, who has written for Fast Company since its inception, has spent the last four years researching and writing One Giant Leap, a book about how it took 400,000 people, 20,000 companies, and one federal government to get 27 people to the Moon. (You can order it here.)

For each of the next 50 days, we’ll be posting a new story from Fishman—one you’ve likely never heard before—about the first effort to get to the Moon that illuminates both the historical effort and the current ones. New posts will appear here daily as well as be distributed via Fast Company’s social media. (Follow along at #50DaysToTheMoon.)


About the author

Charles Fishman, an award-winning Fast Company contributor, is the author of One Giant Leap: The Impossible Mission that Flew Us to the Moon. His exclusive 50-part series, 50 Days to the Moon, will appear here between June 1 and July 20.
