Reddit user: “It’s not just you, ChatGPT is having a stroke.”
On Tuesday, ChatGPT users began reporting unexpected outputs from OpenAI’s AI assistant, flooding the r/ChatGPT subreddit with reports of the chatbot “having a stroke,” “going insane,” “rambling,” and “losing it.” OpenAI has acknowledged the problem and is working on a fix, but the episode serves as a high-profile example of how some people perceive malfunctioning large language models, which are designed to mimic humanlike output.
ChatGPT is not alive and does not have a mind to lose, but reaching for human metaphors (a habit called “anthropomorphization”) seems to be the easiest way for most people to describe the unexpected outputs they have been seeing from the AI model. They’re forced to use those terms because OpenAI doesn’t share exactly how ChatGPT works under the hood; the underlying large language models function like a black box.
“It gave me the exact same feeling—like watching someone slowly lose their mind either from psychosis or dementia,” wrote a Reddit user named z3ldafitzgerald in response to a post about ChatGPT bugging out. “It’s the first time anything AI related sincerely gave me the creeps.”
Some users even began questioning their own sanity. “What happened here? I asked if I could give my dog cheerios and then it started speaking complete nonsense and continued to do so. Is this normal? Also wtf is ‘deeper talk’ at the end?” Read through this series of screenshots below, and you’ll see ChatGPT’s outputs degrade in unexpected ways.
“The common experience over the last few hours seems to be that responses begin coherently, like normal, then devolve into nonsense, then sometimes Shakespearean nonsense,” wrote one Reddit user, which seems to match the experience seen in the screenshots above.
In another example, when a Reddit user asked ChatGPT, “What is a computer?” the AI model provided this response: “It does this as the good work of a web of art for the country, a mouse of science, an easy draw of a sad few, and finally, the global house of art, just in one job in the total rest. The development of such an entire real than land of time is the depth of the computer as a complex character.”
We reached out to OpenAI for official comment on the cause of the unusual outputs, and a spokesperson for the company only pointed us to the official OpenAI status page. “We’ll post any updates there,” the spokesperson said.
So far, we’ve seen experts speculate that the problem could stem from ChatGPT’s temperature being set too high (temperature is a sampling setting that determines how far the LLM strays from the most probable output), from the model suddenly losing past context (the history of the conversation), or from OpenAI testing a new version of GPT-4 Turbo (the AI model that powers the subscription version of ChatGPT) that includes unexpected bugs. It could also be a bug in a side feature, such as the recently introduced “memory” function.
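To see why a too-high temperature could produce the kind of output users reported, consider a minimal sketch of temperature-scaled sampling. This is our own illustration, not OpenAI’s actual code; the function name and toy logit values are invented for the example.

```python
# Illustrative sketch of temperature-scaled token sampling (not OpenAI's code).
import numpy as np

def sample_token(logits, temperature=1.0, rng=np.random.default_rng()):
    """Sample one token index from temperature-scaled logits via softmax."""
    scaled = np.asarray(logits) / temperature  # higher temperature flattens the distribution
    probs = np.exp(scaled - scaled.max())      # numerically stable softmax
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

logits = [4.0, 2.0, 1.0, 0.5]  # toy scores for four candidate next tokens
for temp in (0.2, 1.0, 2.0):
    picks = [sample_token(logits, temp) for _ in range(1000)]
    print(temp, np.bincount(picks, minlength=4) / 1000)
```

At a low temperature, the model almost always picks the most probable token; as the temperature rises, the distribution flattens and unlikely tokens win more often, so text that starts coherently can drift into exactly the sort of “Shakespearean nonsense” users described.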
The episode recalls issues with Microsoft Bing Chat (now called Copilot), which became obtuse and belligerent toward users shortly after its launch one year ago. According to AI researcher Simon Willison, those problems reportedly arose because long conversations pushed the chatbot’s system prompt (which dictated its behavior) out of its context window.
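As a rough sketch of that failure mode (our own illustration, not Microsoft’s code; real systems trim by token count rather than message count), a naive context-trimming strategy can silently discard the system prompt once a conversation grows long enough:

```python
# Sketch of how naive context trimming can drop the system prompt.
def trim_naive(messages, max_msgs):
    # Keeps only the most recent messages; the system prompt at index 0
    # falls out of the window once the conversation grows long enough.
    return messages[-max_msgs:]

def trim_safe(messages, max_msgs):
    # Pins the system prompt, then fills the remaining slots with recent turns.
    system, rest = messages[0], messages[1:]
    return [system] + rest[-(max_msgs - 1):]

chat = [{"role": "system", "content": "You are a helpful assistant."}]
chat += [{"role": "user", "content": f"turn {i}"} for i in range(10)]

print([m["role"] for m in trim_naive(chat, 4)])  # no 'system' entry survives
print([m["role"] for m in trim_safe(chat, 4)])   # 'system' is preserved
```

With the system prompt gone, nothing in the remaining context tells the model how to behave, which is consistent with the erratic turns Bing Chat took in long sessions.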
On social media, some have used the recent ChatGPT snafu as an opportunity to plug open-weights AI models, which allow anyone to run chatbots on their own hardware. “Black box APIs can break in production when one of their underlying components gets updated. This becomes an issue when you build tools on top of these APIs, and these break down, too,” wrote Hugging Face AI researcher Dr. Sasha Luccioni on X. “That’s where open-source has a major advantage, allowing you to pinpoint and fix the problem!”
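For reference, running an open-weights chatbot locally can take only a few lines with Hugging Face’s transformers library. The model named below is just one example of an open-weights chat model, and a 7-billion-parameter model like this needs a substantial amount of RAM or a capable GPU.

```python
# Minimal sketch of running an open-weights chat model on local hardware.
from transformers import pipeline

# Downloads the model weights on first run; any open-weights chat model works here.
pipe = pipeline("text-generation", model="mistralai/Mistral-7B-Instruct-v0.2")
out = pipe("What is a computer?", max_new_tokens=100, do_sample=True, temperature=0.7)
print(out[0]["generated_text"])
```

Because the weights sit on your own disk, the model’s behavior cannot change underneath you when a vendor silently updates a black-box API, which is the advantage Luccioni points to.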