Fast Company

Application Programming Interfaces Are Not a Substitute for Ethics

Jamais Cascio

If you follow my Twitter feed, you might have seen a post come through recently offering, without context, this observation:

APIs [Application Programming Interfaces] are not a substitute for ethics.

I've been doing quite a bit of work on the impacts of the emerging tools allowing us to manipulate our perceptions of the world (e.g., neurotechnology), our physical environment (e.g., geoengineering), and the building blocks of the material world itself (e.g., synthetic biology and molecular nanotechnology). There's a theme that recurs across all of these arenas: what happens when someone does something careless or malicious with the technology? It's bad enough when the technology in question is an automobile or computer network. These emerging disciplines fall into a category I sometimes call "catalytic innovations," and one characteristic is that the worst-case misuse scenarios can be truly terrifying.

For some, the knee-jerk response is a desire to prohibit the development of these technologies. As appealing as that might sound, it suffers from a fundamental flaw: these technologies do not require a massive industrial base, so surreptitious development would be far harder to detect than (say) nuclear weapons development. Ultimately, the only way to enforce the ban would likely be with constant, unrelenting, global surveillance. Few of us, even those afraid of the potential of these catalytic technologies, would be willing to take that path.

A more nuanced response, and one that I see frequently from the proponents of these various technologies, is that well-designed systems could make catastrophic misuse difficult, even impossible. A synthetic biology lab-in-a-box, for example, might be pre-programmed with a variety of forbidden combinations of bio-components, perhaps with limiting and tracking components built into every synthbio design. A molecular nanofactory could have similar restrictions. Whatever the system, if there's a programming interface, there's the potential for automatic limits on output.
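To make the idea concrete, here is a minimal, purely illustrative sketch of what such an API-level limit might look like. Everything in it is an assumption for illustration--the function names, the blocklist, and the screening logic are invented, not taken from any real synthesis device, and a real screen would compare designs against curated pathogen databases rather than literal substrings:

```python
# Hypothetical sketch of an API-level safety gate for a desktop synthesis
# device. All names (FORBIDDEN_MOTIFS, SynthesisRejected, submit) are
# invented for illustration; this is not a real product's API.

import hashlib
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("synth-gate")

# Toy blocklist of forbidden sequence motifs (placeholders, not real
# restricted sequences).
FORBIDDEN_MOTIFS = {
    "ATGCATGCATGC",
    "GGGGCCCCGGGG",
}

class SynthesisRejected(Exception):
    """Raised when a requested design fails the safety screen."""

def submit(design: str) -> str:
    """Screen a requested DNA design, log it, and 'synthesize' if clean."""
    normalized = design.upper().replace(" ", "")
    # 1. Automatic limit: refuse any design containing a forbidden motif.
    for motif in FORBIDDEN_MOTIFS:
        if motif in normalized:
            log.warning("rejected design containing %s...", motif[:6])
            raise SynthesisRejected("design matches a restricted motif")
    # 2. Tracking: record a fingerprint of every accepted design.
    fingerprint = hashlib.sha256(normalized.encode()).hexdigest()
    log.info("accepted design, fingerprint %s", fingerprint[:16])
    return fingerprint  # stand-in for handing the design to the hardware

if __name__ == "__main__":
    submit("atg caa ttt ggc")            # passes the screen
    try:
        submit("ATGCATGCATGCAAAA")       # trips the blocklist
    except SynthesisRejected as err:
        print("blocked:", err)
```

Note what the sketch makes plain: a gate like this can only refuse what its designers anticipated, which is precisely the limitation the next paragraphs turn on.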

This is a manifestation of a philosophy I see quite often online, across a wide array of subjects: "tools, not rules"--don't try to get people to change their behavior; instead, alter the systems to shape the results of their behavior.

There's certainly a great deal of sense to this notion. Accidents can happen, so tools that stop people from doing something destructive can be of enormous value, even when they're as simple as a dialogue box popping up saying "Are you sure you really want to delete your life's work?"

But this model does little to prevent misbehavior arising from novel approaches, or abuses that fall within the system's rules but are still harmful. Some of us might recognize the latter as "griefing"--taking advantage of legitimate system functions to ruin the experience of other players. We might also understand it as what happens when people follow the letter of the law, but not the spirit of the law. This kind of behavior is bad enough in elements of society such as finance, but combined with techno-social developments that quite literally have the power to reshape the planet, it is potentially deadly.

If the tools are insufficient, then, we're left with rules.

The best kind of rules are those we apply to ourselves, those we believe in. Ethics--sometimes described as "how you behave when no one is looking"--have the advantage of applying readily to novel situations and of guiding responses that fit the spirit of the law. People in positions of social power (such as doctors and lawyers) often receive training in ethics as part of their education. What I'd like to see is the introduction of ethics training in these new catalytic disciplines.

Computer programmers, biotechnologists, environmental scientists, neuroscientists, nanotech engineers--all of these fields, and more, should have at least a course in ethics as part of their degree requirements. Ideally, it should be a recurring element in every class, so that it's not seen as just another hoop to jump through (check off the "is this ethical? Y/N" box), but instead as a consideration woven into every professional decision.

This is one reason, by the way, that I was so frustrated with the proposed curriculum for the "Singularity University"--the study of ethics was shoehorned in with policy and law in a way that appeared to be something of an "oh, by the way" add-on.

But my larger point is this: as tempting as it is to rely on well-structured tools to prevent disastrous outcomes, even the best tools are ultimately insufficient. Good interfaces need to be accompanied by strong ethics. It's not just a matter of right and wrong; increasingly, it's a matter of survival.

[Images: the APIs tweet, a biohazard symbol, and "Smoky the Nanobot"]

Read more of Jamais Cascio's Open the Future blog.
