Tom Knight, Godfather Of Synthetic Biology, On How To Learn Something New

The MIT computing wiz and key player in the recent technological revolution talks about his path from AI to DNA.

It was partly frustration with designing silicon chips that led Tom Knight to the study of biology. A senior research scientist at MIT’s Computer Science and Artificial Intelligence Laboratory, Knight started working in MIT’s AI Lab while he was in high school. As an MIT student and faculty member in the ’60s and ’70s, Knight was a co-engineer of ARPANET, a precursor of the Internet, and helped design the first commercial single-user computer workstations, eventually earning more than 30 patents for his work in computer science and electrical engineering. In the 1990s, Knight became fascinated with biology, went back to school, and set up a molecular biology lab within MIT’s computer science lab. There, Knight invented BioBricks--standardized DNA “parts” that make up a kind of free operating system for biotechnology. For his pioneering work merging concepts from engineering and biology, Knight is widely considered the godfather of the emerging science of synthetic biology. Here, this key player in the technological revolution of the last century talks about biology as this century’s defining technology, the need for scientific generalists, and the best way to learn something new.

FAST COMPANY: Internet legend has it that you started at MIT when you were 14?
TOM KNIGHT: Well, that story has gotten a little overblown. I entered as a regular undergrad at the normal time. But I was a local boy--I grew up in Wakefield, Massachusetts--and I spent a lot of my high school years at MIT, taking courses in computer programming and organic chemistry, and I spent my junior and senior summers working at the artificial intelligence lab there.

So, did you study computer engineering as an undergrad?
You couldn’t really study computer science then--it was the bastard child of electrical engineering. This was the dawn of the artificial intelligence world at that point, and people had only been working on it for five years or so.

Did you go directly to grad school?
I took a fair amount of time off, working as a research staff member at MIT from 1969 to 1978, partly because I could get a draft deferment. During that time, I did a lot of hardware and software work having to do with operating systems, hardware maintenance, and the construction of new computers. One of the important things I helped develop was a time-sharing system called ITS that now nobody knows about, which was oriented to making users as productive as possible. It’s hard to remember how bad computers were at that time--we struggled mightily to get computers that had a megabyte of core storage. Another important thing we worked on in that period of the late '60s, early '70s, was interfacing with ARPANET, which later became the NSFNET and eventually the Internet. We also designed one of the first bitmap-oriented printers, which was made obsolete when laser printers came along.

Were you making money from any of this?
My master’s thesis when I went back to grad school in 1978 was building a computer to more efficiently run the Lisp programming language, which I worked on with my MIT colleague Richard Greenblatt. That eventually resulted in the formation of spinout companies--unfortunately two instead of one. Greenblatt and I did not see eye to eye about how to commercialize the technology, so he started Lisp Machines, and I and a number of others started our own company called Symbolics [symbolics.com was the first registered .com domain name]. Both companies were successful--Symbolics went public and resulted in several thousand machines being distributed.

How did you get into biology from computing?
In the 1980s, I learned how to engineer integrated circuits, and as part of my PhD thesis, I designed one of the first silicon retinas. Looking at Moore’s Law, which predicts the path of technology in silicon, by 1990 I could predict that at some point--which is right about now--you wouldn’t be able to do the magical shrinking act anymore [of fitting more and more transistors on an integrated circuit]. The number of atoms across the transistor becomes too small. We’re now down to the 22 nanometer range, and another shrink will bring that down to 10 nanometers. That’s only about 60 atoms across. If you shrink that another factor, you have 10 or 12 atoms. The way silicon manufacturing works, you put things in place statistically, randomly. At this size, chances are you’re not going to be able to get things in the right place anymore. It was clear that we needed a different way of putting atoms in the right place. There is a technology for putting atoms where you want them--it’s called chemistry. You design a molecule, and that has the atoms where you want them. What’s the most sophisticated kind of chemistry? It’s biochemistry. I imagined that you could use bio-molecules like proteins that have the ability to self-assemble and crystallize in the range you needed.
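
To make the shrinking arithmetic concrete, here is a minimal back-of-envelope sketch in Python. It assumes a silicon interatomic spacing of roughly 0.2 nanometers (an assumption, not a figure from the interview) and simply divides a feature size by that spacing, which lands in the same ballpark as the atom counts Knight cites.

```python
# Rough estimate of how many silicon atoms span a transistor feature.
# Assumed value (not from the interview): ~0.2 nm between neighboring silicon atoms.

ATOM_SPACING_NM = 0.2  # approximate silicon interatomic spacing, in nanometers

def atoms_across(feature_size_nm: float) -> int:
    """Rough count of atoms spanning a feature of the given width."""
    return round(feature_size_nm / ATOM_SPACING_NM)

for node_nm in (22, 10, 3):
    print(f"{node_nm:>2} nm feature -> ~{atoms_across(node_nm)} atoms across")

# Prints roughly:
# 22 nm feature -> ~110 atoms across
# 10 nm feature -> ~50 atoms across
#  3 nm feature -> ~15 atoms across
```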

So, you were hoping that biology could help you better engineer silicon chips?
Yes, that was part of what got me interested in biology. Something else that really changed my thinking was a proposal by a physicist-turned-biologist named Harold Morowitz called “A Complete Understanding of Life.” How can you not like something like that! His basic proposal was that we have all this advanced technology--if we put our minds to it and applied all this technology, we could actually understand how simple organisms work. My general bias toward biology at that point was, Oh my god, it’s so complicated, we’ll never figure out what’s going on--in contrast to something like computers where you can understand everything. It was really quite amazing to see somebody proposing what I’d assumed was impossible. I got quite intrigued by the idea that I could go and do something with biology.

But you were no expert on biology--how did you get up to speed?
Around 1990, I started seriously looking at classical biology books, with a strong concentration on simple organisms. I began taking the graduate core courses in biology at MIT and basically became a student. It was challenging but very effective for educating myself. In 1995, along with one of my students, I took the sophomore undergraduate intro to molecular biology class--that was fun, learning how to pipette and work in the lab.

Do you have any study tips for other people who are trying to learn a new subject?
I like to read books, three or four at a time. I rarely read books all the way through. I’ll get a few books on a subject--you want single-author books, someone with a well-defined point of view--and read a section, and then switch to a different book and read about the same thing. I keep switching back and forth--it’s a great technique because you get to look at the same subject from many people’s perspectives. That turns out to be really useful.

How did your outsider's perspective as a computer engineer inform your approach to biology?
After setting up a molecular biology lab in the computer science department at MIT, it became clear to me that I didn’t want to do plumbing in the way biology had been doing it for two decades. My basic realization was that every time I wanted to do one experiment, it was actually two experiments: 1. the experiment I wanted to do, and 2. building the piece of DNA I wanted. The second experiment was not that intellectually interesting, and it wasn’t publishable. It just became annoying. The question was, how do you get rid of that?

And that led you to the idea of BioBricks, or standard biological parts?
Right. When you’re learning how to do cloning, there are 50 different things you have to think about. You don’t want to think about 50 things when you want to build something. You just want to do it. So the idea of a standard mechanism to pull that off--to make it take no thought whatsoever--became very important. I distributed the first kit for BioBrick cloning at a DARPA meeting in 2002. Since then it’s been adopted by the iGEM program [a genetic engineering competition for undergraduates], and this fall we will have over 10,000 parts in the registry.

As you’ve said, biology is really complicated--it’s not exactly like a computer. One of the complaints I’ve heard from synthetic biologists is that, at least in the early days, the BioBrick parts didn’t always work like they were supposed to.
The mechanics for assembling the parts are there. Are there problems with individual parts in the registry? Certainly. But the situation is much better than it was before the collection existed, when you had nothing and didn’t know if anything was going to work. That’s a big difference. Something beats nothing every time. On the issue of quality, look at the projects that are actually built by iGEM students every summer. Those projects were built with those parts. You can claim all you want that the parts don’t work, but just go and look at the projects.

You’ve been involved in a bunch of tech startups, most recently as a cofounder of Ginkgo, which, according to its website, “sells engineered organisms that make the world better.” What’s your role there? How is Ginkgo different from your previous businesses?
I’m the white-haired guy with the white beard! It’s easy to say I’m adult supervision, but these people are actually as responsible as I am. This is a remarkable group of people. And I think the company is mostly characterized by what it’s not. I’ve been involved with a lot of venture-funded companies. We’ve had no outside funding at this point, which is a remarkable statement and opportunity. We really get to decide where we’re going, what’s important, and what’s not. We decide how we’re going to spend our time, and what the time horizons are. We’re not going to do things that are splashy, with the big VC-style publicity, on the two- or three-year timeline you might typically associate with a startup. We’re looking at a period closer to 10 years than three years.

I’ve heard a lot about what synthetic biology is going to do for us--we'll be able to engineer organisms that produce abundant biofuels, churn out new medicines and sustainable materials, detect and mitigate environmental pollution. But there’s not that much to show yet. Why should we believe the hype?

I’ve been around technology for a long time. When people are asked what wonderful things are going to happen in five or 10 years, they always overestimate what happens in five years, and always underestimate what happens in 10. Imagine you’re at the beginning of the semiconductor era and someone says, now predict iPads for me. You’re not going to be able to. I can’t tell you what’s going to happen. But stand back--this is the technology of the century. This is going to change how we build things. Biology is fundamentally a manufacturing technology, and we’re on the verge of figuring out how to control that. It’s impossible to predict and estimate the impact of that, but it’s going to be massive.

Synthetic biology is one example of a new cross-disciplinary field that nobody could have really imagined 20 years ago. How should someone who’s a student now prepare for the next big thing in science and engineering--without knowing what that’s even going to be?

The very best you can do is become a science generalist. Learn as much as you can about all aspects of science and engineering. Take things apart and see how they work. That’s increasingly hard to do. It almost doesn’t matter what it is--if you feel like working on cars or writing programs, great. Do something, make something. Take things apart and use the pieces to make something else.

People make the mistake of thinking that everything they need to know can be learned in a biochemistry course. You need it all--and the sooner you get started, the better. If you don’t know something, you can go and learn it yourself. What is graduate education about? At the end of the day, it’s learning how I can become an expert on a subject in two days and apply that to solve a problem that I just discovered I have. You’re not going to be able to predict what it is you need.

[Image: Flickr user Alfred Hermida]
