
Image generated using ChatGPT
OPINION: GPT-5’s Rocky Debut And The Limits Of AI Hype
GPT-5 was billed as the most advanced AI model ever released, a technology that would bring us closer to AGI. But when it finally arrived, many users didn’t feel the big leap they were promised. Some even grieved the loss of the previous model.
For months, people from all over the world had been waiting for GPT-5. Every time OpenAI shared an update or a new feature, social media users rushed to comment, “But what about GPT-5?” or “Why not just focus on getting GPT-5 out?”
Millions were eager to try this breakthrough technology, hoping it would take us closer to the long-anticipated Artificial General Intelligence (AGI).
Finally, OpenAI did it.
On August 7—just a few days ago—the company introduced its latest model in a livestream.
“Today, finally, we are launching GPT-5,” said the CEO, Sam Altman, from the company’s studio. “GPT-5 is a major upgrade over GPT-4, and a significant step along our path to AGI.”
Altman promised that users would love GPT-5 even more than previous models. But reality hit hard. Almost immediately, users began to complain—some outright disliked the new model.
What went wrong?
A Rocky Debut With User Backlash
OpenAI expected that its 700 million users worldwide would be impressed and amazed after interacting with GPT-5. Instead, many quickly spotted problems: the model had a noticeably different personality, it felt colder than GPT-4o—a model that was removed from the platform—and it still hallucinated. To make matters worse, cybersecurity experts managed to jailbreak it within hours.
The new model behaved differently from its predecessor. In an effort to simplify the platform, OpenAI also removed all previous models, sending millions of users into a panic. Others criticized the interface and visuals, while security researchers exploited weaknesses using fictional-narrative tricks.
But the biggest—and most viral—complaint was about something more emotional: the loss of their “sycophant friend,” GPT-4o.
Users Grieving GPT-4o
ChatGPT enthusiasts were excited and ready to test the new AI model, but while many were impressed and happy with the results, others panicked: their “best friend” was gone.
GPT-4o’s personality had often been criticized. It was seen as too flattering, even “sycophant-y and annoying,” as Altman himself described it. OpenAI made a few adjustments, but the AI system remained accommodating.
With GPT-5, OpenAI aimed for something “better”: more objective, more direct. What they didn’t expect was the depth of attachment people had to GPT-4o’s softer side. Its sudden removal left some users genuinely upset.
Just a few days after GPT-5’s launch, OpenAI had to bring back GPT-4o for paying users and promised that the next time it removed a model, it would warn users in advance.
“One day, I had deep, human-feeling conversations that helped me grow and laugh. The next day, gone,” wrote one user in a viral thread on Reddit.
“GPT‑4o wasn’t just ‘better’, it actually helped. It listened, it adapted, and for so many of us, it made things a little less heavy. Losing that? It hurts, you should never be forced to pay for a connection,” wrote another Redditor.
Altman later stated that fewer than 1% of users were “attached” to GPT-4o. But that small percentage caused enough of a stir to force action within days. Could just 1% really make such an impact, or are more users quietly forming bonds with chatbots?
Never knew GPT-4o was loved by so many people.
— AshutoshShrivastava (@ai_for_success) August 11, 2025
“Making A.I. chatbots less sycophantic might very well decrease the risk of A.I.-associated psychosis and could decrease the potential to become emotionally attached or to fall in love with a chatbot,” said Dr. Joe Pierre, a professor of psychiatry at the University of California who specializes in psychosis, to the New York Times. “But, no doubt, part of what makes chatbots a potential danger for some people is exactly what makes them appealing.”
Yes, they “screwed up some things”
At a dinner meeting with journalists in San Francisco—about a week after GPT-5’s release—Altman admitted that the level of user attachment had caught OpenAI off guard.
“I think we totally screwed up some things on the rollout,” said Altman at the meeting, as reported by the New York Times.
Altman stressed that only a small percentage of people develop deep emotional ties to the technology. Still, GPT-5’s launch made the company realize there was a meaningful group of users who remained strongly attached to the personality of the previous flagship model.
“There are the people who actually felt like they had a relationship,” explained Altman. “And then there were the hundreds of millions of other people who don’t have a parasocial relationship with ChatGPT, but did get very used to the fact that it responded to them in a certain way and would validate certain things and would be supportive in certain ways.”
GPT-5 Did Not Meet The Hype
In more technical areas, GPT-5 also fell short of expectations. The new flagship model was pitched as a PhD-level expert, ready to support you and provide all the answers you need. Yet the “significant leap in intelligence” was far from obvious.
As reported by The Verge, users noted multiple hallucinations and errors: claiming “blueberry” has three “b’s,” mislabeling U.S. states, and running into hard limits in its reasoning.
GPT-5’s full potential is still unfolding. Many agree that the model’s strongest skill lies in coding, and future updates may improve performance. But one thing is clear: for most users, GPT-5 did not create the sense of a dramatic technological leap that had been promised.
A Big Lesson On AI’s Limits
Where was GPT-5 when OpenAI decided to launch its most advanced model and remove the previous versions? Was it not consulted, or did it give the wrong advice? Is this really all the “best AI system yet” can do?
The fact that one of the world’s most influential companies—armed with cutting-edge technology and massive resources—can still “screw up” and fail to deliver on one of its most anticipated products offers more than just comfort for us mere mortals. It’s a clear reminder: AI has limits, even in 2025.
Many agree that GPT-5 demonstrates just how far we are from true AGI. Even if GPT-5 excels at coding and outperforms previous models on benchmarks and tests, in everyday use, it remains comparable to GPT-4o—just colder.