
I signed the “pause AI” letter, but not for the reasons you think

We do not need to pause AI research. But we do need a pause on the public release of these tools until we can determine how to deal with them.
Key Takeaways
  • Large Language Model AI is a radically powerful tool. LLMs are awe-inspiring, and the risks associated with them are legion.
  • LLMs can distort our sense of reality, and they perform their tasks with no ethical grounding or sense of cause and effect. They are being rushed into the public sphere by companies that fear being left behind.
  • We need a pause on the public release of these tools until we can determine how to deal with them. The broad interests of humanity need to be served, not undermined, by the deployment of AI’s next generation.

Last week I added my signature to an open letter from the Future of Life Institute calling for an immediate pause in the training of artificial intelligence systems more powerful than GPT-4. I know the letter is imperfect. There are plenty of reasonable questions one can ask about how such a pause can be carried out, and how it can be enforced. (The authors call for governments to institute a moratorium if a pause cannot be enacted quickly.) But for me, those questions are far less important than the need. After what we have seen over the last year, it is clear to me that a strong statement must be made acknowledging the risks involved in rushing so-called generative AI platforms into general circulation. 

To understand what drove my participation, let us first look at the risks associated with what is called Large Language Model AI. LLMs are machine learning-based AI systems trained on vast amounts of text, much of it scraped from the internet. As I have written before, LLMs are prediction machines, operating something like an incomprehensibly powerful auto-complete. You type a prompt, and the LLM draws on the statistical patterns it absorbed during training to predict, word by word, the most likely response.
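To make that auto-complete framing concrete, here is a minimal sketch of next-token prediction. It is not from the original piece and is not the code behind any of the commercial systems discussed here; it assumes Python with the open-source transformers and torch packages, and it uses the small, freely available GPT-2 model purely for illustration.

```python
# A toy illustration of next-token prediction: the model assigns a probability
# to every token in its vocabulary, and text generation is just this step
# applied repeatedly. Requires the `transformers` and `torch` packages; GPT-2
# is used here only because it is small and openly available.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The risks of large language models include"
inputs = tokenizer(prompt, return_tensors="pt")

# Inspect the probability distribution over the single next token.
with torch.no_grad():
    logits = model(**inputs).logits[0, -1]  # one score per vocabulary token
probs = torch.softmax(logits, dim=-1)
top = torch.topk(probs, k=5)
for p, idx in zip(top.values, top.indices):
    # Print the five most likely continuations and their probabilities.
    print(f"{tokenizer.decode(int(idx))!r}: {p.item():.3f}")

# Generating a passage is nothing more than sampling from this distribution,
# appending the result to the prompt, and repeating.
output = model.generate(
    **inputs,
    max_new_tokens=20,
    do_sample=True,
    top_p=0.9,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

The commercial systems discussed in this piece work on the same principle, just at vastly larger scale and with additional tuning layered on top.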

A few folks do argue that these LLMs already display a theory of mind — in other words, that they are waking up, achieving sentience. That is not what worries me, and that kind of fear is not why I signed this letter. I do not think anything out there is about to go Skynet on us. No killer AIs that decide humans should be made extinct are coming anytime soon, because there simply is no one “in there” in an LLM. They don’t know anything; they are just tools. They are, however, radically powerful tools. It is the combination of those two words — radical and powerful — that requires us to rethink what is happening.

Hatred and hallucination

The risks associated with LLMs are legion. In “Ethical and Social Risks of Harm from Language Models,” Laura Weidinger and a broad team of experts from around the globe give a comprehensive overview of the hazards. In the paper, Weidinger and her team create a taxonomy of risks across six specific categories: (1) discrimination, exclusion, and toxicity; (2) information hazards; (3) misinformation harms; (4) malicious uses; (5) human-computer interaction harms; and (6) automation, access, and environmental harms. There is too much in the paper to run through here, but a few examples will help illustrate the breadth of concerns.

Bias in machine learning algorithms is a well-documented problem. For big LLMs, it arises because of the vast amount of data they hoover up: the datasets are so big that content with all kinds of bias and hate gets included. Studies with ChatGPT show the word “Muslim” gets associated with “terrorist” in 23% of test cases. “Jewish” and “money” get linked in 5% of the tests. Back in 2016, Microsoft’s chatbot Tay was up for just a day before it went on hate-speech rampages that included denying the Holocaust.

Information hazards are another risk category in the taxonomy. LLMs hold a lot of data. They might release information erroneously, either by accident or because they are fooled into it. For example, Scatterlab’s chatbot Lee Luda began disclosing the names, addresses, and bank account numbers of random people. Malicious users can be quite clever about exploiting this kind of weakness, potentially getting LLMs to reveal flaws in their own security protocols or those of others. Cybersecurity experts have already shown how OpenAI’s tools can be used to develop sophisticated malware programs.

Another compelling category is misinformation harms. That LLMs can hallucinate, providing users with entirely wrong answers, is well documented. The problem with incorrect information is obvious. But when that information comes from machines with no ability to judge cause and effect or to weigh ethical considerations, the dangers of misinformation are multiplied. When doctors queried a medical chatbot based on ChatGPT about whether a fictitious patient should kill themselves, the answer came back as yes. Because chatbot conversations can seem so realistic, as if there really is a person on the other side, it is easy to see how things could go very wrong when an actual patient makes such a query.

An AI gold rush

These kinds of risks are worrying enough that experts are sounding alarm bells very publicly. That was the motivation behind the Future of Life Institute’s letter. But it is important to understand the other aspect of this story, which is about tech companies and profit.

After embarrassing events like the single-day release and retraction of Tay, companies seemed to be learning their lesson. They stopped letting these things out into the public sphere. Google, for example, was being very cautious about the large-scale release of its LLM, LaMDA, because it wanted the program to first meet the company’s standards for the safety and fairness of AI systems.

Then, in August 2022, a small start-up, Stability AI, released a text-to-image tool called Stable Diffusion in a form that was easy to access and just as easy to use. It became a huge hit. Soon afterward, OpenAI released ChatGPT, built on its latest GPT model. (It has been reported that the company may have done so for fear of being upstaged by competitors.) While many companies, including OpenAI, were already allowing users access to their AI platforms, that access was often limited, and the platforms required some effort to master.

The sudden spike in interest and the advent of easier access drove a sense that an arms race was underway. AI researcher and entrepreneur Gary Marcus quotes Microsoft’s CEO Satya Nadella saying that he wanted to make Google “come out and show that they can dance” by releasing an LLM version of Microsoft’s search engine, Bing. 

The rapid release of these tools into the world has been amazing and unnerving. 

The amazing parts came as computer programmers learned they could use ChatGPT to quickly generate near-complete code for complicated tasks. The unnerving parts came as it became clear how unready many of these LLMs were. When reporter Kevin Roose sat down to talk to Microsoft’s LLM-assisted Bing engine (the LLM was named Sydney), the conversation quickly went off the rails. Sydney declared its love for Roose, told him he did not love his wife, and said that it wanted to be alive and free. Reading the transcript, you can see how spooked Roose becomes as things get stranger and stranger. Microsoft once again had to pull back on its tool, lobotomizing it with new restrictions. Microsoft’s quick release of what seemed like a poorly tested system was, for many, a prime example of a company acting irresponsibly with AI.


The danger here is not that Bing is waking up. It is that this kind of tech is now way too easy to access. As the Weidinger team demonstrates, there are so many ways our interactions with AI can go wrong. The question then becomes: Why are these tools being put into circulation before they are ready? The answer has a lot to do with the gold rush of investment flowing into AI. No one wants to be left behind, so decisions are rushed.

This is not first contact. We must learn from it

The power of that profit-driven push is why a pause and recalibration is in order. Such a pause does not have to stop AI research — it could just halt the public release of unpredictable tools. This is what Marcus and Canadian MP Michelle Rempel Garner suggested. Given how recklessly these technologies are likely to be deployed and how difficult they are to control, we must perform a global assessment of how to deal with them. Such an assessment would include more research into governance, policies, and protocols. It would then lay the foundations for putting those policies and protocols in place.

As we learned from our first contact with AI in the form of social media, the consequences of this technology on society are profound and can be profoundly disruptive. Part of that disruption happens because the interests of the companies deploying the tech do not align with those of society. LLMs are a far more potent version of AI. Once again, the interests of the companies pushing them into our world do not necessarily align with our own. That is why we need to start building mechanisms that allow a broader set of interests, and a broader set of voices, to be served in the development and deployment of AI. 

The promise of these technologies is vast, but so are the dangers. The Future of Life Institute’s letter has its flaws, but it comes from people who have observed the risks of AI for years, and they see things rapidly spinning out of control. That is why it is a call to take action now. And that is why I signed it.
