Your phone’s battery life could always be better, but it could have been much worse. Decades after John Hennessy and David Patterson invented the technology that made high-performance, low-power gadgets possible, they have received the ultimate honor—the $1 million ACM A.M. Turing Award, billed as the “Nobel Prize of Computing.” You can thank them quietly whenever you pull your phone out.
In the 1980s, the two professors (Hennessy from Stanford, Patterson from Berkeley) developed technology called RISC—the reduced instruction set computer. The gist: A CPU runs more efficiently if software feeds it a lot of simple instructions instead of fewer, but more complicated ones. Early programmers and chip designers took the latter route, since they could write shorter code, and leave the CPU to unpack it. But as compilers grew more sophisticated in the 1970s and 1980s, they took over the grunt work, translating high-level code written by humans into the litany of simple instructions the CPU needs. That made Hennessy and Patterson's RISC approach practical without inconveniencing software engineers.
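The simple-versus-complex trade-off can be sketched with a toy example in Python (the instruction names here are hypothetical, not drawn from any real instruction set): a single "complicated" memory-to-memory add versus the load/add/store sequence a RISC compiler would emit instead.

```python
# Toy illustration of the RISC idea (hypothetical instructions, not a real ISA).
memory = {"a": 2, "b": 3, "c": 0}
regs = {}

# Complex-instruction style: one instruction adds two memory operands.
# The CPU still has to perform all three underlying steps internally.
def add_mem(dst, src1, src2):
    memory[dst] = memory[src1] + memory[src2]

# RISC style: the compiler emits three simple instructions, each doing
# one small, fast, easily pipelined step.
def load(reg, addr):
    regs[reg] = memory[addr]

def add(dst, r1, r2):
    regs[dst] = regs[r1] + regs[r2]

def store(addr, reg):
    memory[addr] = regs[reg]

add_mem("c", "a", "b")   # complex: c = a + b in one instruction
load("r1", "a")          # RISC: the same work as three simple steps
load("r2", "b")
add("r3", "r1", "r2")
store("c", "r3")
```

Either way the result in `c` is 5; the difference is that the simple instructions need no elaborate decoding, which is what lets the hardware stay small and fast.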
In the 1990s, RISC chips like IBM's PowerPC and Sun Microsystems' SPARC (which Patterson helped develop) foundered against rivals that used Intel's venerable complex instruction set, x86, which has powered most PCs since the beginning. Intel-based chips' brute force (and power from a wall outlet) compensated for their more-demanding code. But in this century, RISC took off as mobile phones needed chips that were both fast and power-efficient. Nearly every smartphone, tablet, and other mobile gadget now uses a RISC architecture called ARM, the basis of processors developed by companies like Apple, Samsung, and Qualcomm.
I chatted with Hennessy and Patterson about their role ushering in the mobile age and also about the big shakeups on computing’s horizon.
Fast Company: When you were developing RISC, what was the problem, and how did you try to address it?
David Patterson: When software talks to hardware, it has a special vocabulary. The fashion before we did RISC was a very elaborate vocabulary with lots of $5 words . . . We went instead with a very small vocabulary that had very simple words. So the reduced instruction set means reducing the number and complexity of the vocabulary that the software speaks to the hardware.
FC: Why was that more efficient for the processor?
John Hennessy: Imagine with those really big words, you need fewer of them, but the reader is four times slower because he has to spend all the time looking them up.
FC: Is there something that’s analogous to pulling out the dictionary?
Patterson: Yeah, it’s called microcode. They had to run a little tiny program that would interpret that instruction.
FC: How do full-size computer CPUs like Intel’s work differently?
Hennessy: They have this older polysyllabic instruction set . . . They would translate the $5 words into several simple words, and then the rest of the processor would use the simpler words. So you need extra hardware, and that takes extra time.
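The translation step Hennessy describes can be sketched as a decoder that expands one complex instruction into several simple micro-operations before execution (a hypothetical sketch in Python; the instruction and micro-op names are invented for illustration):

```python
# Hypothetical sketch of decoding a complex instruction into micro-ops.
# Modern x86 chips do something like this in hardware, which is the
# "extra hardware and extra time" Hennessy refers to.
def decode_to_micro_ops(instruction):
    op, *args = instruction
    if op == "ADD_MEM":  # complex: add two memory operands, store result
        dst, src1, src2 = args
        return [
            ("LOAD", "r1", src1),
            ("LOAD", "r2", src2),
            ("ADD", "r3", "r1", "r2"),
            ("STORE", dst, "r3"),
        ]
    return [instruction]  # simple instructions pass through unchanged

micro_ops = decode_to_micro_ops(("ADD_MEM", "c", "a", "b"))
```

One complex instruction becomes four simple ones; a RISC chip skips this decoding stage entirely, which is the power overhead Patterson points to next.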
Patterson: When you talk about the processors in a mobile phone, and you care about battery life, then that extra overhead is a big issue, which is why [x86 chips] are in your desktop, but they’re not in your mobile phone.
FC: SPARC and PowerPC were based on RISC, and they lost out. Was that disappointing?
Patterson: When x86 did hardware translation to RISC, Intel was able to build very cost-efficient servers, which is where SPARC was trying to sell . . . We never got everybody [unified] around one architecture. And the result is the RISC world was fractured, and Intel wasn’t fractured.
FC: ARM [avoided fracturing] on the mobile side, right?
Patterson: They came in the back door into the mobile phone, and they didn’t try to compete with Intel . . . And once they were in there, it was very easy to go into smartphones and tablets, because you already owned the mobile phone business.
FC: Looking to the future, what are you most excited about?
Hennessy: I’m excited about RISC-V, which is an open-source effort. If we’re to take on the important challenges of security, we need an open effort that everybody can work on, not just a few companies.
Patterson: I’m excited about what are called domain-specific architectures, processors that help certain [computationally intensive] domains. The most obvious one is neural networks . . . For example, a deep learning problem that’s doing lots of linear algebra . . . Perhaps we can build hardware that does those things very fast and focuses on them, and it will operate as an accelerator or a coprocessor.
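The kernel Patterson is alluding to is dense matrix multiplication, which reduces to vast numbers of multiply-accumulate (MAC) operations. A plain-Python sketch of that inner loop shows what a deep-learning accelerator specializes in (real hardware runs thousands of these MACs in parallel; this is only an illustration):

```python
# Dense matrix multiply: the linear-algebra workload a deep-learning
# accelerator is built around. Each innermost step is one
# multiply-accumulate (MAC), the operation such hardware hardwires.
def matmul(A, B):
    n, k, m = len(A), len(B), len(B[0])
    C = [[0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            acc = 0
            for p in range(k):
                acc += A[i][p] * B[p][j]  # one MAC
            C[i][j] = acc
    return C

result = matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]])  # [[19, 22], [43, 50]]
```

Because this one loop nest dominates neural-network workloads, a chip that does only this, but massively in parallel, can beat a general-purpose CPU on both speed and power, which is the accelerator/coprocessor role Patterson describes.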
FC: In the computer world, we used to have lots of discrete cards and chips, and then it was celebrated when we brought it all together in the CPU.
Hennessy: We’re doing this out of desperation. It would be better to make better general processors, but we can’t yet. So what we’re doing is one piece at a time for how to make those pieces go fast, and then [to save power] you don’t use that hardware when you don’t plan to do that special task, like deep learning neural networks.
FC: Any parting thoughts on the future of computing?
Patterson: I think it’s a renaissance era. New approaches are going to come to the market. If you just want to continue doing the same old thing, you’re not going to get much action. But if you’re willing to put new stuff on your plate, then it’s a very exciting time.