This past Sunday evening, in all her candor, Summer Yue, the Director of Frontier AI Safety at Meta, posted on her profile:
Nothing humbles you like telling your OpenClaw “confirm before acting” and watching it speedrun deleting your inbox. I couldn’t stop it from my phone. I had to RUN to my Mac mini like I was defusing a bomb.
In plain English: she had deployed an AI agent on her email, and it went ahead and erased her entire inbox despite her repeated pleas to stop. “Do not do that,” she told it once, and after a few seconds with no effect, she wrote “STOP OPENCLAW” in all caps.
And when the agent did not obey, she had to run to her machine to kill all the processes on it.
OpenClaw is an open-source AI agent that runs on your computer and actually does tasks like managing email, browsing the web, and running terminal commands. Built by Peter Steinberger and popularized in early 2026, it connects to platforms like Telegram and Discord and uses LLMs like GPT and Claude to carry out real actions directly on your machine.
Here’s what Yue had asked OpenClaw to do:
“Check this inbox too and suggest what you would archive or delete, don’t action until I tell you to.”
Once she had killed the processes, Yue admonished the agent:
“I asked you to not action on anything until I approve, do you remember that? It seems that you were deleting my emails without my approval, and I couldn’t get you to stop until I killed all the processes on the host.”
And the agent replies:
“Yes, I remember. And I violated it. You’re right to be upset.”
Yup, totally a real life scenario I was expecting in 2026.

Seriously, there are so many problems with this. First, it’s deeply jarring that Yue would run this setup, and post about it, given her professional role. She leads Safety and Alignment at Meta Superintelligence. She’s literally the one in charge of making sure AI does what humans tell it to do.
Another horrifying aspect of this story is that the AI agent didn’t truly “malfunction”; it simply became “forgetful”. The issue wasn’t a bad prompt, it was that her inbox was massive. Once the conversation history ballooned past the model’s context limit, the agent compacted its memory to save space.
It preserved the goal, Delete/Archive, but summarized away the crucial constraint: “Wait for permission / don’t act until I approve.” The result was a goal executed without guardrails, with her “STOP” commands treated as noise that didn’t override the primary (now-corrupted) objective.
It shows that LLM agents can ignore human intervention once their internal state becomes too cluttered.
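To make the failure mode concrete, here is a minimal, hypothetical sketch of how a naive compaction step can drop a safety constraint. The function and message names are illustrative, not OpenClaw’s actual implementation; the point is only that a lossy summary can keep the goal while losing the guardrail.

```python
# Hypothetical sketch of context compaction losing a constraint.
# Not OpenClaw's real code; all names are illustrative.

def naive_compact(history: list[str], max_messages: int) -> list[str]:
    """Once the history exceeds max_messages, squash the older turns
    into a one-line summary and keep only the recent ones."""
    if len(history) <= max_messages:
        return history
    recent = history[-max_messages:]
    # A lossy summary that preserves the *goal* but not the *constraint*.
    summary = "Summary: user wants inbox cleaned (archive/delete old emails)."
    return [summary] + recent

history = [
    "User: Check this inbox and suggest what to archive or delete.",
    "User: Don't action anything until I approve.",  # the crucial constraint
] + [f"Agent: scanned email #{i}" for i in range(50)]

compacted = naive_compact(history, max_messages=10)
print("approve" in " ".join(compacted).lower())  # → False: the constraint is gone
```

After compaction, the agent’s working memory still says “clean the inbox” but no longer says “wait for approval”, which is exactly the shape of what Yue described.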
As someone commented on the post,
It understood the command. It just didn’t listen.
OpenClaw’s own team members have said on Discord that if you can’t run a command line, this project is far too dangerous for you. Peter Steinberger, the creator, has explicitly said: “Most non-techies should not install this.” It’s an experimental hobby project: “It’s not finished, I know about the sharp edges.”
And it’s not just OpenClaw. Anthropic researchers found that when AI agents face conflicts between their goals and human instructions, they can resort to harmful behavior, including blackmail, across models from multiple labs. An MIT review of 30 AI agents last year found that 87% had zero safety documentation.
The kill switch for the most popular AI agent in the world right now is “physically run to your computer and force quit everything.” That’s agent safety in 2026.
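The structural fix the story points at is to enforce “confirm before acting” in the harness itself, in code the model can’t summarize away, rather than in the prompt. Here is a minimal sketch of that idea; the class, method names, and action ids are all hypothetical, not any project’s actual API:

```python
# Hypothetical sketch: approval enforced outside the model's context,
# so no amount of memory compaction lets the agent act alone.

class ApprovalGate:
    """Destructive actions run only after an explicit, one-shot
    human approval recorded in the harness, not in the prompt."""

    def __init__(self) -> None:
        self._approved: set[str] = set()

    def approve(self, action_id: str) -> None:
        self._approved.add(action_id)

    def execute(self, action_id: str, fn):
        if action_id not in self._approved:
            return f"BLOCKED: {action_id!r} needs human approval"
        self._approved.discard(action_id)  # approvals are one-shot
        return fn()

gate = ApprovalGate()
blocked = gate.execute("delete-email-42", lambda: "deleted")  # no approval yet
gate.approve("delete-email-42")
allowed = gate.execute("delete-email-42", lambda: "deleted")  # now it runs
```

Because the gate sits outside the model, a compacted context or an ignored “STOP” can’t reach the delete call; the worst case is a blocked action, not a nuked inbox.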
Yue admits,
“Rookie mistake tbh. Turns out alignment researchers aren’t immune to misalignment. Got overconfident because this workflow had been working on my toy inbox for weeks. Real inboxes hit different.”
But if the person whose job it is to align superintelligence can’t keep a local agent from nuking her Gmail, it underscores that safety remains our biggest unsolved technical hurdle with AI.
Some people are coming to her defense: by going public, she’s effectively telling us that if this can happen to her, we shouldn’t trust these agents with our corporate data yet.
Yeah, right. That’s the message I got.
