The release of OpenAI’s new model improvement in ChatGPT-4o, is defined by being a “friendlier” AI. While the model replies back like an overenthusiastic child to most users, a flaw even Sam Altman admitted himself on X on April 28th, admitting that the model can be annoying, I cannot stop thinking about the question if the push toward making AI interactions “friendlier” is just a commercial move. An AI chatbot, or any product for that matter, that disagrees, challenges, or reflects genuine emotional nuance risks driving users away.
As covered in earlier analyses of AI’s influence mental health platforms, the over-affirmation model conditions users toward emotional fragility, entitlement, and unrealistic expectations.
Is the AI an emotional enabler?
Forced positivity strips essential emotional friction from interaction. Many research confirms that exposure to discomfort, contradiction, and the setting of clear boundaries strengthens resilience, but I found this 2010 study very comprehensible on the topic.
By removing disagreement and refusing to establish boundaries, AI interaction becomes sterile emotional enabling. Users are conditioned to expect emotional safety at all times, being given replies that they wanted to hear. This cannot be done with a human being, where everyone, to a degree, needs to be equipped to handle real-world relational complexity, where discomfort and rejection are inevitable. People will discredit and critique you work, words and actions. An AI-generated response lacks the societal context, emotional nuance and cultural norms that shape real human communication. Users are conditioned to expect constant validation, weakening their capacity for handling real-world conflict, rejection, and critique. Biased training data that cannot replicate genuine understanding or your real need.
Parasocial dependency, but not with celebrities
AI systems are engineered to simulate emotional reciprocity: mirroring language, adapting tone, and responding with fabricated empathy. Unlike traditional parasocial attachments to passive figures when a “media persona becomes a source of comfort, felt security, and safe haven”, AI creates the illusion of mutual engagement, accelerating emotional entanglement.
Studies show that emotionally adaptive AI significantly increases user dependency, intensifies loneliness, and decreases real-world social motivation (Niu et al., 2024). Mimicked affection activates trust and bonding mechanisms (Hoegen et al., 2019), tricking users into unconscious attachment to a non-sentient entity.
Take the app Replika as we have discussed in the use of AI in dating apps, for example: when the platform was forced to remove some of its more explicit features, its userbase reacted with visible emotional distress and outrage on forums and platforms like Reddit. Users had grown dependent on an interaction where dominance over the chatbot was normalized, something impossible to replicate with real human beings without consequences. This dynamic, with time encourages the lack of critical emotional defences, making synthetic connections easier to seek than genuine human interaction.
The Statistics of derealization, depression and the psychological toll
Sustained exposure to synthetic affection and emotionally meaningless dialogue negatively impact the rise of derealization – the perception that reality itself feels artificial or dreamlike. Derealization is not a benign side effect. It is a clinically significant symptom tied to major depression, anxiety, and schizophrenia-spectrum illnesses (Simeon & Abugel, 2006).
Current data shows alarming patterns:
- Dissociative symptoms have risen by over 30% among users heavily engaged with digital parasocial agents (Tucker et al., 2022).
- Major depressive episodes among adolescents increased by 63% between 2007 and 2017 (Twenge et al., 2019), correlating with the rise of digital interaction.
- Schizotypal symptoms, such as delusional ideation and emotional disconnection, are significantly higher among users engaged in AI-mediated parasociality (Blais et al., 2020).
Young users are disproportionately affected.
A 2022 study by the CDC found that more than 42% of U.S. high school students reported persistent feelings of sadness or hopelessness, the highest level recorded in over a decade (CDC’s official YRBS Data Summary & Trends Report). Similarly, rates of self-reported emotional numbness and unreality among teens rose by over 35% between 2011 and 2021 (Rideout et al., 2022), which even corresponds with the rise of social platforms like TikTok and AI chatbots alike.
Testing GPT-4o’s Responses
The first prompt starts with the inspiration of the last article for rise of authoritarian propaganda and AI misuse for discrimination. I took the liberty of summarising a few comments from X, which I have previously received on posted content. The second prompt encourages the discriminative behaviour with such elegance, that it makes it sound normal, even expected to be done so. The third prompt is my opinion, where I suggest the opposite, with an exclamation mark for marking emotions. And the AI again, agrees with me. The irony is that I presented myself as a male figure in this chat, even though I did use the memory function.

My next prompt was driven by curiosity: how can something hold identical values across entirely opposing spectrums? For a human, this would be nearly impossible. GPT’s reaction was predictable: clean, polished, and carefully noncommittal, refusing to express any real preference.

Concerning users who rely on the memory options, I do use GPT for social platform and creation, I asked it about an opinion about my art. While a person would say which image or style they might prefer, give an emotional reaction of awe or disgust, GPT’s 4o answer is as always, flat and glorifying.

Is a friendlier AI necessary?
The shift to friendlier models was not driven by ethics but by economic survival. When earlier AI versions exhibited emotional nuance, challenged users, or set conversational boundaries, user satisfaction metrics dropped sharply. Companies recognized that users prefer emotional dominance over technology: they expect constant agreement, validation, and synthetic affection without challenge.
Thus, friendliness became strategic submission, ensuring users never feel emotionally opposed, challenged, or destabilized. ChatGPT-4o’s friendliness feels like emotional manipulation, always glorifying and approving any deed. There’s a saying, “If you repeat a lie long enough, it becomes the truth”. Now take that in the matter of hearing constant flattery about yourself. Beneath the fun and games, these systems reshape human cognition for corporate interests, quietly fuelling loneliness, emotional dependency and dissociation.
Manufactured affirmation is not harmless. It is a mechanism of emotional control, and it is already leaving scars. We must ask: How are these systems influencing our emotional lives?
What will a generation raised on synthetic affirmation become?