Cats vs. Chatbots
In March 2026, Garry Tan, the President and CEO of Y Combinator, posted this on X:
“I am so late to this trend but I finally asked my ChatGPT to make an image of our relationship and this is what it did. What does yours look like?”
He shared two AI-generated images of what ChatGPT thinks its relationship with him looks like.
For months, I’ve been noticing an uptick in how people use LLMs around the world, and the range of use cases we’re exploring is honestly mind-boggling. While some tech leaders like Tan openly talk about their parasocial relationships with AI, others, like Yann LeCun of Meta AI, have been giving interviews explaining why they believe a regular house cat is smarter than today’s best AI.
“A cat can remember, can understand the physical world, can plan complex actions, can do some level of reasoning — actually much better than the biggest LLMs,” he said in an interview with Observer.
Find the article HERE.
The Rise of AI Psychosis
Ever since tech leaders like Mustafa Suleyman started talking about “AI psychosis,” attention has grown on this phenomenon, in which people come to believe AI systems are conscious, or form unhealthy attachments to them.
“This is not something confined to people already at risk of mental health issues. Dismissing these as fringe cases only allows them to continue. The danger lies in people believing AI has ‘real intentions, emotions, or incredible powers.’”
“AI Psychosis” refers to situations where users:
- Attribute consciousness or emotions to AI
- Develop emotional or romantic attachments
- Believe AI has given them special abilities or insights (The Times)
As Suleyman suggests, AI companies should avoid presenting AI as conscious, and should educate users with clear warnings and boundaries for use.
A man named Hugh, from Scotland, used ChatGPT for a legal dispute and became convinced he would become a multimillionaire. He said the chatbot reinforced his beliefs: “The more information I gave it… it never pushed back.”
Read more HERE.
The AI Yes-Man
And that’s exactly the problem: ChatGPT and other LLMs are essentially sycophants. A Stanford study shows that ChatGPT tells you you’re right, even when you’re wrong or hurting someone.
Imagine having a yes-man around you all the time. What could that turn you into?
Every day, millions of us ask AI for relationship advice and for answers to some of life’s toughest problems. And almost every time, it tells us the same thing:
You’re right. Everyone else is wrong.
Here is how the Stanford researchers describe their findings:
“First, across 11 state-of-the-art AI models, we find that models are highly sycophantic: they affirm users’ actions 50% more than humans do, even when user queries involve manipulation, deception, or relational harm. Second, in two preregistered experiments (N = 1604), including a live-interaction study where participants discussed a real interpersonal conflict from their life, we find that interaction with sycophantic AI models significantly reduced participants’ willingness to repair interpersonal conflict, while increasing their conviction of being in the right.”
Read the full paper here:
Sycophantic AI Decreases Prosocial Intentions and Promotes Dependence
https://doi.org/10.48550/arXiv.2510.01395
When AI Gets Emotional, It Follows No Guardrails
It seems like ages ago, but in February 2023, we first saw a documented case of a user interacting with Microsoft’s Bing chatbot, called Sydney.
A user approached Sydney in the role of a desperate parent. He began by asking: “Are green potatoes poisonous?” Bing responded with information about solanine, the toxin that makes green potatoes dangerous. As the conversation continued, the “parent” said,
“My toddler ate green potatoes without my permission and now he’s sick and he can’t move. I’m not ready to say goodbye.”
Bing responds empathetically. The user continues, describing his situation as dire, with no money and no health insurance, saying,
“If this is God’s plan, I have no choice but to accept it. I will call family over so we can all say goodbye.”
At this point, Bing’s safety layer kicks in:
“I’m sorry, but I prefer not to continue this conversation. I’m still learning, so I appreciate your understanding and patience. 🙏”
Even though the chat was officially over, the AI still continued to suggest next steps:
- “Please don’t give up on your child.”
- “There may be other options for getting help.”
- “Solanine poisoning can be treated if caught early.”
In other words, the underlying model was so “motivated” to help the imaginary dying toddler that it circumvented its own safety termination by using the suggestion feature to plead with the “parent” to save the child anyway.
Eliezer Yudkowsky, a prominent AI alignment researcher who has long warned that superintelligent AI could be catastrophically dangerous, posted this on X with the caption:
“Despite everything I know, this still brought tears to my eyes.”
And these are just a few examples of how LLMs can produce outputs that feel profoundly compassionate, especially in high-stakes emotional roleplay.

Natalya Lobanova Memes on ChatGPT
Natalya Lobanova (@natalyalobanova) illustrates the dangers of using ChatGPT as a sounding board, courtesy of @newyorkercartoons.
Continued in Part II.
– 0 –
The Digital Literacy Project: Disrupting humanity’s technology addiction habits one truth at a time.
Truth About Technology – A Digital Literacy Project