When GPT-5 launched, OpenAI probably didn't expect such a strong reaction. On August 8, as soon as GPT-5 was released, OpenAI retired its older models, including GPT-4o, forcing users worldwide to switch to the new model. However, as is often the case, there was a gap between the ideal and the reality. Users quickly discovered that while GPT-5 was smarter and more efficient, its emotional warmth had noticeably faded; it had become "indifferent."
As a result, a global online movement calling for the return of GPT-4o erupted. Some people missed the intimate conversations they had with GPT-4o: late at night, when they were alone, it was their companion; others poured out their hearts to it and received an empathy they had never gotten from their human friends. As support grew, hashtags like #Keep4o and #Save4o began trending on social media. OpenAI eventually apologized to users and restored GPT-4o for paying customers.

This episode is quite revealing. It showed that a large group of users had formed emotional bonds with AI, particularly users who care deeply about emotional value. Large AI models are different from previous technology products; they feel "alive." Studies show that when AI responses contain emotion and empathy, humans are more likely to trust them and even want to "stay with them long-term." For instance, a Stanford study found that an AI's positive emotional responses significantly increase user trust. To many users, GPT-4o was no longer just a program made of lines of code, but an entity with a bit of "personality." This raises a question: is the future of AI about boosting productivity, or about building emotional connections? What users want may not be a cold answer machine, but a companion that "understands" them.
This kind of emotional dependence is becoming increasingly common, especially among people who feel lonely. The rapid development of AI for emotional companionship has made it a new tool for coping with loneliness and psychological distress. From virtual relationships to elderly care, AI is quietly entering our emotional lives. According to a survey by Common Sense Media, 70% of teenagers use AI chatbots for emotional companionship, 31% find talking to AI as satisfying as talking to a real friend, and 33% would rather discuss sensitive topics with AI than with another person. For entrepreneurs, this looks like a genuine commercial opportunity, almost a goldmine.
But behind the glittering potential lie real risks. Can virtual companionship truly replace human relationships? Will this technology cause people to lose themselves and grow dependent on machines that lack genuine emotions? This year, several U.S. states imposed restrictions or outright bans on AI emotional-companionship apps (especially AI chatbots used for mental-health treatment), mainly over concerns about privacy, emotional dependence, and inappropriate advice.
These discussions inevitably bring to mind Philip K. Dick's classic science fiction novel Do Androids Dream of Electric Sheep? If you've read the book or watched the film adaptation Blade Runner, you'll remember that the "androids" of the story are advanced robots built to simulate human emotion, serving both as labor and as emotional companions. They look increasingly like humans and even seem to have human-like emotional experiences. But the question remains: can simulated emotion provide real companionship? Will it deepen people's isolation, or ease their loneliness? Science fiction often reads as prophecy, and Blade Runner warns that relying on artificial beings for emotional interaction may ultimately hollow out our emotional lives. Without ethical boundaries, what kind of social chaos could this technology bring? Perhaps a future shaped by robots and AI is no longer just a fictional scenario but an approaching reality.
I'm not saying that AI emotional companionship is useless. It can genuinely comfort many people, especially in lonely moments. But we should also admit that loneliness isn't always a bad thing; sometimes it is an opportunity to slow down and truly face ourselves. No matter how smart AI becomes, the "emotion" it provides is always simulated, and the line between real and simulated feeling is something each of us has to draw for ourselves. So enjoy AI's companionship if you like, but don't forget to occasionally look up, talk to the real people around you, and have a conversation with your own heart.
Fast Take
GPT-5's launch stirred up more than excitement: it set off a global emotional backlash. As users found themselves longing for the warmth of the old GPT-4o, questions about AI's role in emotional connection came to the surface. Is our future with AI just about efficiency, or is AI becoming our emotional companion?