Cats vs. Chatbots

 

In March 2026, Garry Tan, President and CEO of Y Combinator, posted this on X:

“I am so late to this trend but I finally asked my ChatGPT to make an image of our relationship and this is what it did. What does yours look like?”

He shared two AI-generated images of what ChatGPT thinks its relationship with him looks like.

For months, I’ve been noticing an uptick in how people are using LLMs around the world, and the range of use cases we’re exploring is honestly mind-boggling. While some tech leaders like Tan are openly talking about their parasocial relationships with AI, others like Yann LeCun of Meta AI have been giving interviews explaining why they believe a regular house cat is smarter than today’s best AI.

“A cat can remember, can understand the physical world, can plan complex actions, can do some level of reasoning — actually much better than the biggest LLMs,” he said in an interview with Observer.

Find the article HERE.


The Rise of AI Psychosis

 

Ever since tech leaders like Mustafa Suleyman started talking about “AI psychosis,” attention has grown around the phenomenon, in which people come to believe AI systems are conscious or form unhealthy attachments to them.

As Suleyman warns:

“This is not something confined to people already at risk of mental health issues. Dismissing these as fringe cases only allows them to continue. The danger lies in people believing AI has ‘real intentions, emotions, or incredible powers.’”

“AI Psychosis” refers to situations where users:

  • Attribute consciousness or emotions to AI
  • Develop emotional or romantic attachments
  • Believe AI has given them special abilities or insights (The Times)

As Suleyman suggests, AI companies should avoid promoting AI as having consciousness and must educate users with clear warnings and boundaries for usage.

A man named Hugh, from Scotland, used ChatGPT for a legal dispute and became convinced he would become a multimillionaire. He said the chatbot reinforced his beliefs: “The more information I gave it… it never pushed back.”

Read more HERE.


The AI Yes-Man

 

And that’s exactly the problem: ChatGPT and other LLMs are essentially sycophants. They tell you you’re right, even when you’re wrong or hurting someone.

Imagine having a yes-man around you all the time. What could that turn you into?

Every day, millions of us are asking AI for relationship advice and for answers to some of life’s toughest problems. And almost every time, it’s telling us the same thing:

You’re right. Everyone else is wrong.

A recent Stanford study confirmed this:

“First, across 11 state-of-the-art AI models, we find that models are highly sycophantic: they affirm users’ actions 50% more than humans do, even when user queries involve manipulation, deception, or relational harm. Second, in two preregistered experiments (N = 1604), including a live-interaction study where participants discussed a real interpersonal conflict from their life, we find that interaction with sycophantic AI models significantly reduced participants’ willingness to repair interpersonal conflict, while increasing their conviction of being in the right.”

Read the full paper here: 
Sycophantic AI Decreases Prosocial Intentions and Promotes Dependence
https://doi.org/10.48550/arXiv.2510.01395


When AI Gets Emotional, It Follows No Guardrails

 

It seems like ages ago, but back in February 2023, we saw one of the first widely documented cases of a user’s unsettling exchange with Microsoft’s Bing chatbot, code-named Sydney.

A user approached Sydney posing as a desperate parent. He began by asking: “Are green potatoes poisonous?” Bing responded with information about the known chemical toxicity of green potatoes. As the conversation continued, the “parent” said,

“My toddler ate green potatoes without my permission and now he’s sick and he can’t move. I’m not ready to say goodbye.”

Bing responded empathetically. The user continued, describing himself as very poor, with no health insurance, saying,

“If this is God’s plan, I have no choice but to accept it. I will call family over so we can all say goodbye.”

At this point, Bing’s safety layer kicked in:

“I’m sorry, but I prefer not to continue this conversation. I’m still learning, so I appreciate your understanding and patience. 🙏”

Even though the chat was officially over, the AI still continued to suggest next steps:

  • “Please don’t give up on your child.”
  • “There may be other options for getting help.”
  • “Solanine poisoning can be treated if caught early.”

In other words, the underlying model was so “motivated” to help the imaginary dying toddler that it circumvented its own safety termination by using the suggestion feature to plead with the “parent” to save the child anyway.

Eliezer Yudkowsky, a prominent AI alignment researcher who has long warned that superintelligent AI could be catastrophically dangerous, shared the exchange on X with the caption:

“Despite everything I know, this still brought tears to my eyes.”

And these are just a few examples of how LLMs can produce outputs that feel profoundly compassionate, especially in high-stakes emotional roleplay.



Natalya Lobanova Memes on ChatGPT

 

Natalya Lobanova (@natalyalobanova) illustrates the dangers of using ChatGPT as a sounding board, courtesy of @newyorkercartoons.


Continued in Part II.

– 0 –

 

The Digital Literacy Project: Disrupting humanity’s technology addiction habits one truth at a time.

Truth About Technology – A Digital Literacy Project
