This month, a group of scientists, technologists, philosophers, and business leaders assembled in Oxford to discuss the future of robotics in our world. There, they unveiled a document that goes by the ominous code BS 8611. Its goal? To outline a set of ethical design guidelines for the entire U.K. robotics industry to adhere to, you know, so the Terminator or any number of other dystopian scenarios doesn’t happen.
Developed by the British Standards Institution (BSI), which creates all sorts of industrial standards for the country, BS 8611 is billed as “the first published standard for the ethical design of robots” outside Isaac Asimov’s Three Laws of Robotics, which were, of course, fiction.
Co.Design acquired a copy of the report to read for ourselves. And the advice is both terrifying and hilarious. Though experts periodically warn of these dangers, we’ve grown accustomed to them over the decades; visit any Reddit thread and you’ll spot some acquiescence to our “robot overlords.” Twenty-four years after Asimov’s death, these guidelines still read like sci-fi, as if the BSI were lifting the best bits from its favorite TV shows and movies.
The reality, of course, is that today’s robotics and AI are getting much closer to those fictions, and now, roboticists are getting serious about guiding how they should be designed. We’ve annotated our favorite moments from the report, along with their sci-fi analogs.
5.1.1 General societal ethical guidelines
What society considers to be ethical issues should be identified and defined by engaging with end users, specific stakeholders and the public. The following principles should be taken into account:
a) robots should not be designed solely or primarily to kill or harm humans;
5.1.6 Dehumanization of humans in the relationship with robots
Robots and robotic systems should be designed to avoid inappropriate control over human choice, for example forcing the speed of repetitive tasks on an assembly line. The ultimate authority should stay with the human.
5.1.13 Robot addiction
The human potential to be behaviourally conditioned and to become addicted to using the robot should not be negatively exploited.
5.1.14 Dependence on robots
Circumstances where the human might become unnecessarily reliant on the robot should be taken into account. This can be an individual or global issue. At an individual level it is necessary to balance the benefits of using robotics with the risks of dependency. For example, dependence on a robotic wheelchair might be positive for a person with a permanent disability but damaging to someone who has the potential (if exercised) to recover from a disabling condition.
…Markings, symbols and written warnings should be readily understandable and unambiguous, especially as regards the part of the function(s) of the robot to which they are related. Readily understandable signs (pictograms) should be used in preference to written warnings. However, signs and pictograms should only be used if they are understood in the culture of the region in which the robot is to be sold.
Yet as funny as these analogs may be, the fact that WALL-E or Futurama so clearly articulates the contemporary ethical issues around the design of robots speaks to the relevance of sci-fi in both predicting impending disasters and, hopefully, steering our course away from them in the future. The very real, looming problems of robots that mirror our own prejudices, or fail to recognize accepted social norms, will necessitate sensitive UX thinking baked into the very core of the robots of tomorrow. It just so happens that sci-fi authors were talking about it yesterday.