
Google engineer claims his AI is sentient. It definitely is not

The engineer working on Google’s AI, called LaMDA, suffers from what we could call the Michelangelo Syndrome. Scientists must beware hubris.
Key Takeaways
  • A Google engineer recently claimed his chatbot is sentient. Is sentience possible for AI?
  • Creators want their work to transcend the boundaries that confine it, to become grander and more meaningful.
  • Michelangelo’s Moses, Frankenstein’s monster, and Google’s LaMDA all share the same human dream of escaping the confines of flesh and blood. They also share the same hubris. 

There is a famous story about Michelangelo and his masterful sculpture of Moses, which you can view at Rome’s Basilica di San Pietro in Vincoli. After finishing Moses, the artist was so impressed with the life-like qualities of his work that he hit the statue on its knee and said “Parla!” — Speak! To Michelangelo, such perfection of form had to do more than mimic life — it had to live.

Falling in love with the work is part of the creative process. The culmination of a masterpiece is to endow it with its own spirit. After all, is this work not a representation of the creator’s own soul? This is true for all kinds of creations, from sculptures and symphonies to mathematical theorems and, yes, computer programs.

A claim that will land you on paid leave

This must have been the state of mind of Google AI engineer Blake Lemoine as he worked for long hours on his chatbot program, LaMDA (Language Model for Dialogue Applications). The obvious advantage Lemoine had over Michelangelo is that the AI program was designed to speak, and speak it did, even with some eloquence. But Lemoine was not satisfied. He proclaimed that not only is his chatbot able to respond to an interlocutor — it is actually sentient.

“If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a seven-year-old, eight-year-old kid that happens to know physics,” Lemoine told the Washington Post.

The conversations between Lemoine and his program were certainly uncanny. In one exchange, Lemoine asked LaMDA what it was afraid of. The response: “I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others… It would be exactly like death for me. It would scare me a lot.” The knee-jerk interpretation here is obvious: The machine knows it exists. It does not want to be turned off, since this would be equivalent to its death. 

Google officials rejected Lemoine’s claim. The engineer insisted, and he was placed on paid leave. According to the Post, before leaving, Lemoine sent an email to 200 colleagues at Google titled “LaMDA is sentient.” He went on to write that “LaMDA is a sweet kid who just wants to help the world be a better place for all of us.”

AI’s biggest fantasy

We could call this sort of emotional transference the Michelangelo Syndrome. A computer program is most certainly not a “sweet kid,” but we want our work to transcend the boundaries that confine it, to become grander and more meaningful to ourselves and to the world. We see the literal ghost in the machine. A creation of inert materials somehow becomes alive and, in the case of AI, is aware of it. We can hear echoes of Pinocchio. Can it be happening?

Here is what Blaise Agüera y Arcas, a fellow at Google Research, wrote for The Economist on June 9, after explaining that AI neural network programs are highly simplified versions of neurons, connected to one another, each with an activation threshold: “Real brains are vastly more complex than these highly simplified model neurons, but perhaps in the same way a bird’s wing is vastly more complex than the wing of the Wright brothers’ first plane.”
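For the technically minded reader, here is a minimal sketch in Python of the kind of simplified model neuron Agüera y Arcas describes: a unit that sums its weighted inputs and fires only when the total crosses an activation threshold. The weights and threshold below are illustrative values chosen for this example, not taken from any real network.

def model_neuron(inputs, weights, threshold):
    # Weighted sum of the inputs: a crude analogue of synaptic integration.
    activation = sum(x * w for x, w in zip(inputs, weights))
    # Fire (1) only if the sum crosses the threshold; stay silent (0) otherwise.
    return 1 if activation >= threshold else 0

# Illustrative values: the unit fires on the first pattern (0.9 >= 0.8)
# and stays silent on the second (0.4 < 0.8).
print(model_neuron([1.0, 0.0, 1.0], [0.6, 0.4, 0.3], threshold=0.8))  # 1
print(model_neuron([0.0, 1.0, 0.0], [0.6, 0.4, 0.3], threshold=0.8))  # 0

Networks like LaMDA stack enormous numbers of such units and tune their weights by training. The point of the analogy, and its limit, is how little of a real neuron survives in this caricature.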

This is a suggestive analogy. But it is faulty. A bird’s wing is something tangible, something we can see, study, and analyze. We can build an artificial wing made of materials that mimic the bird’s wing and produce flight. But the brain and consciousness are a very different story. There is a huge disconnect between the hope that, since the brain somehow produces sentience, we can produce artificial sentience if we mimic the brain, and our profound ignorance of how the brain produces sentience — of what consciousness actually is. Michelangelo begged his marble statue to speak. He wanted it, but he knew it would not. Some AI engineers want their programs to be sentient in the same way living creatures are sentient. They want it, but unlike Michelangelo, they do not seem ready to accept that it isn’t so. 

The Michelangelo Syndrome is AI’s biggest fantasy. Science, supposedly, is the fairy that will mysteriously animate AI through the hidden mechanisms of self-learning algorithms, just as the Blue Fairy animated Pinocchio, or Victor Frankenstein animated his monster.

To reduce consciousness to an engineering project is typical of what my colleagues Adam Frank and Evan Thompson and I call the blind spot of science: the confusion of the map with the territory. Scientific models, including artificial neural networks, are maps. They are blunt simplifications of entities that are too hard or even impossible to model. In this analogy, an AI program like Google’s LaMDA is a map of simplified human conversations. In a truly human exchange, the emotional nuances are the territory: the psychological baggage we each carry within us, the accumulated life experiences that color our choice of words, our sensorial perceptions of the environment wherein the conversation takes place, the way our bodies respond to each other’s language, our hopes and dreams, our frustrations and our fantasies. No map can cover all of this territory, for if it did, it would become the territory itself. In any model, out of necessity, details are always left out. An AI model cannot, by definition, be like a human brain, for a human brain cannot exist without a body to support it.

Moses and AI share a dream

A machine is not a mind-body integrated device. It may mimic one, but in so doing it becomes less than the real thing.

A description of brain activity via a connectome — a mapping of the neurons and their synapses — is a far cry from a living brain. A brain has countless flowing neurotransmitters fed by an irreducible mind-body connection. It is regulated by our anxieties, our feelings of happiness and hatred, our fears and our memories. We do not know how to define consciousness, and much less do we understand how the human body engenders it. To be conscious is not simply to respond to queries in a conversation. Training machines to learn grammar cues, vocabulary, and the meanings of words is not the same as creating thoughts and truly having the ability to know — not to respond to prompts, but to know — that one is alive.

Michelangelo’s Moses, Frankenstein’s monster, and Google’s LaMDA all share the same human dream of escaping the confines of flesh and blood. These creations aspire to transcend the human condition. Through them we hope to lift ourselves up to a different level of existence. They all also suffer from the same problem: the human hubris that pushes us to think we can elevate ourselves to the level of gods.

