The US Navy is funding a research effort to equip robots with the decision-making tools needed to choose between right and wrong in a split second. Engineers, computer scientists, and others from Tufts, Brown, and Rensselaer Polytechnic Institute are working on an ethical dilemma in which “a robot medic responsible for helping wounded soldiers is ordered to transport urgently needed medication to a nearby field hospital.” On its way, the robot encounters a Marine with a fractured leg. Should the robot abort its mission to assist the injured Marine? Will it?
What’s the Big Idea?
The extent to which a robot can act morally depends on how closely scientists can approximate human thought patterns with complex computational algorithms. “Moral competence can be roughly thought about as the ability to learn, reason with, act upon, and talk about the laws and societal conventions on which humans tend to agree,” says principal investigator Matthias Scheutz, professor of computer science at Tufts School of Engineering and director of the Human-Robot Interaction Laboratory (HRI Lab) at Tufts. Robots must also be programmed to explain their decisions in ways that humans find acceptable.