Artificial intelligence developed in the 20th century can already do much of what distinguished human beings from animals for millennia. The computational power of computer programs that mimic human intelligence is already far superior to our own, and ongoing developments in natural language processing and emotion detection suggest that AI will continue its encroachment on the domain of human abilities.
Works of science-fiction have already imagined humanoid robots that think and, crucially, feel like human beings. Harvard Law Professor Glenn Cohen identifies the films A.I., originated by Stanley Kubrick and directed by Steven Spielberg, and Ex Machina as standard bearers in the ongoing discussion about what actually distinguishes humans from machines.
While the question may seem fanciful, abstract, and even unnecessary, it’s actually quite essential, says Cohen. Our lamentable history of denying certain classes of humans basic rights — blacks and women are obvious examples — may even make the question urgent.
At the heart of the matter is the distinction between persons and human beings. A person must be afforded essential rights, like the right of non-violability, and this makes defining a person as a human being very attractive. It is, however, not an unproblematic definition. Cohen paraphrases the influential animal liberation philosopher Peter Singer: rejecting moral consideration for a being “on the basis of the mere fact that they’re not a member of your species…is equivalent morally to rejecting giving rights or moral consideration to someone on the basis of their race. So he says speciesism equals racism.”
To further complicate defining persons as human beings, Cohen identifies some human beings that are not persons: anencephalic children, i.e. babies born missing large portions of their brain. “They’re clearly members of the human species,” says Cohen, “but their ability to have the kind of capacities most people think matter is relatively few and far between.” So we reach an uncomfortable position where some human beings are not persons and some persons may not be human beings.
Faced with this dilemma, Cohen suggests we err on the side of recognizing more rights, not fewer, lest we find ourselves on the wrong side of history.
Glenn Cohen’s book is Patients with Passports: Medical Tourism, Law, and Ethics.
Glenn Cohen: The question about how to think about artificial intelligence and personhood, and the rights of artificial intelligence, I think is really interesting. It's been teed up I think in two particularly good films: A.I., which I really like but many people don't; it was a Stanley Kubrick film that Steven Spielberg took over late in the process. And then Ex Machina more recently, which I think most people think is quite a good film. And I actually use these when I teach courses on the subject, and we ask the question: are the robots in these films persons, yes or no? One possibility is you say a necessary condition for being a person is being a human being. So many people are attracted to the argument that only humans can be persons. All persons are humans. Now it may not be that all humans are persons, but all persons are humans. Well, there's a problem with that, and it's put most forcefully by the bioethicist Peter Singer, who says that to reject the possibility that members of another species have rights, or are patients for moral consideration, on the basis of the mere fact that they're not a member of your species, is equivalent morally to rejecting giving rights or moral consideration to someone on the basis of their race. So he says speciesism equals racism.
And the argument is: imagine that you encountered someone who is just like you in every possible respect, but it turned out they actually were not a member of the human species; they were a Martian, let's say, or they were a robot, but truly exactly like you. Why would you be justified in giving them less moral regard? So people who believe in capacity-based views have to at least be open to the possibility that artificial intelligence could have the relevant capacities, even though they're not human, and therefore qualify as persons. On the other side of the continuum, one of the implications is that you might have members of the human species that aren't persons, and so anencephalic children, children born with very little above the brain stem in terms of their brain structure, are often given as an example. They're clearly members of the human species, but their ability to have the kind of capacities most people think matter is relatively few and far between. So you get into this uncomfortable position where you might be forced to recognize that some humans are non-persons and some non-humans are persons. Now again, if you bite the bullet and say I'm willing to be a speciesist, being a member of the human species is both necessary and sufficient for being a person, you avoid this problem entirely. But if not, you at least have to be open to the possibility that artificial intelligence in particular may at one point become person-like and have the rights of persons.
And I think that scares a lot of people, but in reality, to me, when you look at the course of human history and how willy-nilly we were in declaring some people non-persons as a matter of law, slaves in this country for example, it seems to me a little humility and a little openness to this idea may not be the worst thing in the world.