
Teaching Ethics to Robo Warriors


Robots just follow instructions, right? They can’t be moral on the battlefield; they simply do what they’re told. A group of researchers at California Polytechnic State University disagrees, and in a just-published report sponsored by the Office of Naval Research, the Navy’s skunk-works program, they’re advocating cutting-edge research into teaching robots battlefield ethics. As the Times of London reports:

“There is a common misconception that robots will do only what we have programmed them to do,” Patrick Lin, the chief compiler of the report, said. “Unfortunately, such a belief is sorely outdated, harking back to a time when . . . programs could be written and understood by a single person.” The reality, Dr Lin said, was that modern programs included millions of lines of code and were written by teams of programmers, none of whom knew the entire program: accordingly, no individual could accurately predict how the various portions of large programs would interact without extensive testing in the field–an option that may either be unavailable or deliberately sidestepped by the designers of fighting robots.


The solution, he suggests, is to mix rules-based programming with a period of “learning” the rights and wrongs of warfare.
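To make that hybrid idea concrete, here is a minimal sketch of what such an architecture could look like: a learned policy proposes an action, and an explicit, auditable rule layer gets the final say. Every name here (RuleBook-style RULES, LearnedPolicy, Action, the force-level scale) is a hypothetical illustration, not anything drawn from the report itself.

```python
# Hypothetical sketch: explicit rules veto actions proposed by a learned policy.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Action:
    name: str
    target_is_combatant: bool
    force_level: int  # illustrative scale: 0 = none, 3 = lethal


# Rules-based layer: hand-written, auditable constraints checked before any action.
RULES: List[Callable[[Action], bool]] = [
    lambda a: a.target_is_combatant,   # never engage non-combatants
    lambda a: a.force_level <= 2,      # cap force below the lethal level
]


class LearnedPolicy:
    """Stand-in for a policy trained on labeled examples of past decisions."""

    def propose(self, situation: dict) -> Action:
        # A real system would score candidate actions with a trained model;
        # here we simply return a placeholder proposal.
        return Action("warn", target_is_combatant=True, force_level=1)


def decide(policy: LearnedPolicy, situation: dict) -> Action:
    """The learned layer proposes; the rules layer disposes."""
    proposal = policy.propose(situation)
    if all(rule(proposal) for rule in RULES):
        return proposal
    # Any proposal that violates a rule falls back to a safe default.
    return Action("hold_fire", target_is_combatant=False, force_level=0)


if __name__ == "__main__":
    print(decide(LearnedPolicy(), {"zone": "urban"}))
```

The point of the split is the one Lin raises: the learned part can adapt to messy situations no programmer anticipated, while the rules part stays small enough for a human to read, test, and certify.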

Lin further postulates that we’ll need a warrior code for robots, similar to the laws dreamed up by Isaac Asimov more than 60 years ago and popularized in the movies Terminator 2 and I, Robot. The full report, available here, mulls a range of questions, including how a robot might figure out what level of retaliation is justified, how it might learn to discriminate targets in messy urban conflicts where the lines between civilians and combatants are blurred, and how we’ll handle robots in the courts when they inevitably cause unintended casualties.

 CK