The High Cost of Disillusionment
I’ve been in the field of machine learning for more than thirty years. I love it. I think it’s amazing. But at the same time, I’m very conflicted by a certain popularity it has under the guise of AI, which I believe is broadly very misleading. There’s kind of an illusion with generative AI because it’s so seemingly human-like — something like ChatGPT, a large language model. It’s capable of communicating about any topic and often giving responses that seem to understand what you’re saying.
Underlying the excitement is the idea that we are moving steadily towards, and are potentially very near, AGI — artificial general intelligence. AGI is simply a computer that can do anything a person can do. And I grant that, on some level, these models have captured some understanding of the meaning of words and phrases and sentences and paragraphs. But I do not believe that represents a concrete step towards AGI. I’m not saying it’s theoretically impossible, but I believe the gap between what this technology can do and what humans can do is going to become increasingly apparent. The distance between what it can do now and general human-level capabilities is much bigger than it appears.
The idea that the computer could run autonomously to do most anything a human can do — that you could onboard it like a human employee and let it rip autonomously — that’s a bit of a ghost story. It’s hyperbole. It’s hype. It’s a mismanagement of expectations, and that’s a problem because it’s going to lead to disappointment. It’s what people in innovation call disillusionment: everyone gets super excited, maybe over-invests, builds up these wild expectations, and then the disillusionment is so painful and so costly that it’s not going to be fun for anybody. And when that happens, you kind of throw the baby out with the bathwater. The potential value of generative AI and of predictive AI — those predictive use cases of machine learning, which are both very real — is going to be lost, because everyone’s going to kind of stigmatize the whole field. That’s the nature of that kind of disillusionment.
Practicing Healthy Skepticism
I think it’s really important to be realistic. Keep a skeptical eye on the leaders of the big tech companies: OpenAI, Microsoft, Google. They all have a lot of incentives to present this as a panacea, as approaching AGI. We should tap the brakes a little bit. It’s absolutely critical that we avoid the pitfall of effectively anthropomorphizing today’s computers, or where we think computers are going to be in the relatively near term.
The antidote to hype is simple: focus on concrete value. Determine whether you’re using generative AI or predictive AI. Identify a very specific, concrete, credible use case of exactly how this technology is going to improve some kind of operation in the enterprise and deliver value. Then you’re talking about real value. If you want to explore how close it is to the human mind and why you think it might be getting there, that’s kind of a philosophical conversation. But if you’re talking about improving the efficiency of operations, I think we should be a lot more practical and less pie in the sky.