The US Navy is working with several universities on a new multi-year project designed to figure out how to engineer moral competence in robots. One big challenge: science still doesn't know exactly how moral reasoning works in humans.
Researchers from several universities, including Tufts, Brown, and Rensselaer Polytechnic, will join with the US Navy on a multi-year project designed to answer a simple question: Can we build robots that are able to distinguish between right and wrong? To do this, the multidisciplinary team first has to figure out how the process works in humans, which Brown's Bertram Malle says is a big challenge in and of itself: "There is a fair amount of scientific knowledge available, but there are still many unanswered questions." Down the road, the enhanced robots will interact with humans in experiments that will require them to make moral decisions and explain them in a way that makes sense to people.
What’s the Big Idea?
Rensselaer’s Selmer Bringsjord says the need for morally competent robots is clear: “We’re talking about robots designed to be autonomous; hence the main purpose of building them in the first place is that you don’t have to tell them what to do. When an unforeseen situation arises, a capacity for deeper, on-board reasoning must be in place, because no finite rule set created ahead of time by humans can anticipate every possible scenario.”