Will robots rule the world?
Will artificial intelligence ever be sentient?
In a brief video introducing a panel discussion on the future of AI, Hod Lipson, an engineering professor at Columbia and one of the world’s top experts on robotics, had no doubt about the answer.
“Yes,” he said. “And there’s no reason to believe that human sentience is the ultimate sentience possible.”
It was a great setup for the four panelists at a New York University presentation titled “Teach Your Robots Well: Will Self-Taught Robots Be the End of Us?” In a fascinating discussion before a packed auditorium, the scientists at times seemed to agree with Lipson’s outlook and at times to disagree. Their answers depended on how one defines words like “sentience,” “intelligence,” and “consciousness.”
Max Tegmark, an AI researcher at MIT and president of the Future of Life Institute, defined intelligence as “the ability to accomplish complex goals.”
Moderator Tim Urban, writer and founder of Wait But Why, pushed the question a little further: “Is artificial intelligence the same as intelligence?”
The kind of AI currently in widespread use—like Siri, Cortana, Alexa, and Google Assistant—is not yet on the same level as human intelligence. It is what panelist Peter Tse of Dartmouth calls “artificial narrow intelligence,” as opposed to “artificial general intelligence.”
Tse, a leading researcher into whether and how matter can become conscious, explained the difference: narrow AI would be like a robot learning to fly a plane or drive a car, while general AI would know how to fly a plane and drive a car . . . and mow the lawn and babysit the kids and cook the dinner, and would even have the ability to keep learning.
Still, narrow AI has great potential. The panel predicted that within a decade or so, narrow AI will give us roads with mostly self-driving cars and “robot doctors” delivering much better diagnostics and treatment. One panelist predicted that in the near future, children will ask their parents, “Do you mean an actual human diagnosed you when you were sick? And that you actually drove cars and operated heavy machinery yourself?”
But what about creativity? AI has been put to the test in painting, composing music, and even writing a screenplay—all with mixed results, mostly lacking in excellence and genuine human emotion. (A video of an awful scene from a robot-written screenplay was met with derisive howls from the audience.)
The panel became especially animated when discussing the possible future of AI and sentience: Will artificial intelligence someday desire to take over the world? Will the robots turn against us, as we’ve seen in so many sci-fi movies?
Yann LeCun, an AI scientist and professor at NYU, doesn’t think so. “The desire to take over is not actually associated with intelligence,” he said—and the audience chuckled as the word “Trump” was whispered throughout the auditorium. “If you are stupid, you want to be the chief.” (More laughter.)
LeCun surmised that AI will never be that “stupid” and thus will have no desire to rule the world: “It’ll be more like C-3PO than the Terminator.” (LeCun believes that most AI and robot movies envision a worst-case scenario “because movies are more interesting when bad things happen. But most movies get it completely wrong.” He singled out Her as a rare example of a film getting it right.)
Tse was more pessimistic than LeCun, arguing that if AI were ever to develop consciousness, it would have just as much capability for evil as we humans do. Tegmark warned, “If we can’t figure out how to make AI a good thing for everyone, then shame on us. We need to learn how to make machines line up with and understand our goals.”
LeCun speculated that if a “superintelligent generalized AI” goes rogue, intent on evil, humans could create a “specialized AI whose only role is to destroy the bad ones. And the specialized one will win every time.”
In a pre-event interview with ORBITER (we’ll publish it at a future date), panelist Susan Schneider, Director of the AI, Mind and Society (AIMS) Group at UConn, was mostly optimistic about AI’s potential. But near the end of the panel discussion, she joked, “After this panel, I’m actually more afraid of the possibility of bad things.”
Schneider had the panel’s last word when she cited Elon Musk, who believes AI will someday be more of a danger to the world than nuclear weapons. She said Musk believes we need to essentially “upload” AI into the human brain, because supplementing our own intelligence is the only way to stay ahead of the artificial variety.
Schneider made clear that she was vehemently against any such idea of making our brains half machine, half human. The audience applauded loudly, and the panel ended.
Somewhere, C-3PO was probably clapping too.