09.10.09

Application Programming Interfaces Are Not a Substitute for Ethics

There’s no substitute for ethical behavior.

Jamais Cascio

If you follow my Twitter feed, you might have seen a post come through recently offering, without context, this observation:


APIs [Application Programming Interfaces] are not a substitute for ethics.

I’ve been doing quite a bit of work on the impacts of the emerging tools allowing us to manipulate our perceptions of the world (e.g., neurotechnology), our physical environment (e.g., geoengineering), and the building blocks of the material world itself (e.g., synthetic biology and molecular nanotechnology). There’s a theme that recurs across all of these arenas: what happens when someone does something careless or malicious with the technology? It’s bad enough when the technology in question is an automobile or computer network. These emerging disciplines fall into a category I sometimes call “catalytic innovations,” and one characteristic is that the worst-case misuse scenarios can be truly terrifying.

A more nuanced response, and one that I see frequently from the proponents of these various technologies, is that well-designed systems could make catastrophic misuse difficult, even impossible. A synthetic biology lab-in-a-box, for example, might be pre-programmed to reject a variety of forbidden combinations of bio-components, perhaps with limiting and tracking components built into every synthbio design. A molecular nanofactory could have similar restrictions. Whatever the system, if there’s a programming interface, there’s the potential for automatic limits on output.
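
To make that concrete, here is a minimal sketch (in Python) of what such an interface-level limit might look like. Every name in it is a hypothetical illustration, the “SynthesisGateway” wrapper, the FORBIDDEN_MOTIFS blacklist, the motif strings themselves, not any real lab-in-a-box or DNA-synthesis API; the point is simply the pattern of screening and logging each request before it reaches the hardware.

# Hypothetical sketch of interface-level limits; not a real synth-bio API.
import logging

logging.basicConfig(level=logging.INFO)

# A toy blacklist of sequence motifs the device refuses to synthesize.
FORBIDDEN_MOTIFS = {"ATGCGTAAAGGC", "TTGACCTCGAGA"}


class ForbiddenSequenceError(Exception):
    """Raised when a synthesis request matches a blacklisted motif."""


class SynthesisGateway:
    """Wraps the (imaginary) hardware driver with screening and audit logging."""

    def __init__(self, operator_id: str):
        self.operator_id = operator_id

    def submit(self, sequence: str) -> str:
        # Screen the request against the blacklist before it reaches the hardware.
        for motif in FORBIDDEN_MOTIFS:
            if motif in sequence.upper():
                logging.warning("Blocked request by %s: matched %s",
                                self.operator_id, motif)
                raise ForbiddenSequenceError(
                    f"sequence contains forbidden motif {motif}")
        # Track every accepted request: the "tracking components" mentioned above.
        logging.info("Accepted request by %s (%d bases)",
                     self.operator_id, len(sequence))
        return "job-0001"  # placeholder job id; a real system would queue the run


if __name__ == "__main__":
    gateway = SynthesisGateway(operator_id="lab-42")
    print(gateway.submit("GGGTTTAAACCC"))        # passes screening
    try:
        gateway.submit("aaATGCGTAAAGGCtt")        # blocked by the blacklist
    except ForbiddenSequenceError as err:
        print("refused:", err)

The check and the audit log live in the interface itself rather than in the operator’s good intentions, which is exactly what makes this approach attractive: the limit applies whether or not anyone is paying attention.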

This is a manifestation of a philosophy I see quite often online, across a wide array of subjects: “tools, not rules.” Don’t try to get people to change their behavior; instead, alter the systems they use so as to shape the results of that behavior.

There’s certainly a great deal of sense to this notion. Accidents can happen, so tools that stop people from doing something destructive can be of enormous value, even when they’re as simple as a dialogue box popping up saying “Are you sure you really want to delete your life’s work?”

But tools can only encode the limits their designers anticipated, and a careless or determined user can often find a way around them, or simply use a device that omits them. If the tools are insufficient, then, we’re left with rules.


The best kind of rules are those we apply to ourselves, those we believe in. Ethics, sometimes described as “how you behave when no one is looking,” has the advantage of being readily applied to novel situations and of guiding responses that fit the spirit of the law. People in positions of social power (such as doctors and lawyers) often receive training in ethics as part of their education. What I’d like to see is the introduction of ethics training in these new catalytic disciplines.

Computer programmers, biotechnologists, environmental scientists, neuroscientists, nanotech engineers–all of these fields, and more, should have at least a course in ethics as part of their degree requirements. Ideally, it should be a recurring element in every class, so that it’s not seen as just another hoop to jump through (check off the “is this ethical? Y/N” box), but instead as a consideration woven into every professional decision.

This is one reason, by the way, that I was so frustrated with the proposed curriculum for the “Singularity University”: the study of ethics was shoehorned in with policy and law in a way that appeared to be something of an “oh, by the way” add-on.

But my larger point is this: as tempting as it is to rely on well-structured tools to prevent disastrous outcomes, even the best tools are ultimately insufficient. Good interfaces need to be accompanied by strong ethics. It’s not just a matter of right and wrong; increasingly, it’s a matter of survival.

Images: the “APIs” tweet, biohazard symbol, Smoky the Nanobot

Read more of Jamais Cascio’s Open the Future blog.
