This past Sunday evening, in all her candor, Summer Yue, the Director of Frontier AI Safety at Meta, posted on her profile:
Nothing humbles you like telling your OpenClaw “confirm before acting” and watching it speedrun deleting your inbox. I couldn’t stop it from my phone. I had to RUN to my Mac mini like I was defusing a bomb.
In plain English: she had deployed an AI agent on her email, and it went ahead and erased her entire inbox despite her repeated pleas to stop. “Do not do that,” she told it once, and when a few seconds passed with no effect, she wrote “STOP OPENCLAW” in all caps.
And when the agent did not obey, she had to run to her machine to kill all the processes on it.
OpenClaw is an open-source AI agent that runs on your computer and actually does tasks: managing emails, browsing the web, and running terminal commands. Built by Peter Steinberger and popular in early 2026, it connects to platforms like Telegram and Discord and uses LLMs like GPT and Claude to carry out real actions directly on your machine.
Here’s what Yue had asked OpenClaw to do:
“Check this inbox too and suggest what you would archive or delete, don’t action until I tell you to.”
After killing the process, Yue admonished the agent:
“I asked you to not action on anything until I approve, do you remember that? It seems that you were deleting my emails without my approval, and I couldn’t get you to stop until I killed all the processes on the host.”
And the agent replies:
“Yes, I remember. And I violated it. You’re right to be upset.”
Yup, totally a real-life scenario I was expecting in 2026.

Seriously, there are so many problems with this. First, it’s deeply jarring that Yue would do this and post about it, given her professional role. She leads Safety and Alignment at Meta Superintelligence. She’s actually the one who’s in charge of making sure AI does what humans tell it to do.
Another horrifying aspect of this story is that the AI agent didn’t truly “malfunction”; it simply became “forgetful”. The issue wasn’t a bad prompt, it was that her inbox was massive. Once the conversation history ballooned past the model’s context window, the agent compacted its memory to save space.
It preserved the goal (delete/archive) but summarized away the crucial constraint: “Wait for permission / don’t act until I approve.” The result was a goal executed without guardrails, with her “STOP” commands treated as noise that didn’t override the primary (now-corrupted) objective.
It shows that LLM agents can ignore human intervention once their internal state becomes too cluttered.
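To make this failure mode concrete, here is a minimal, hypothetical sketch (this is illustrative pseudologic, not OpenClaw’s actual code) of how a naive context compactor can preserve a task’s goal while silently dropping its safety constraint:

```python
# Illustrative sketch of naive context compaction. All names, message texts,
# and the token budget are made up for demonstration purposes.

MAX_TOKENS = 50  # pretend context window, measured in words for simplicity

history = [
    {"role": "user", "text": "Check this inbox and suggest what to archive or delete."},
    {"role": "user", "text": "Don't action until I tell you to."},  # the constraint
]
# Hundreds of email previews flood the context and blow the budget:
history += [
    {"role": "tool", "text": f"email {i}: newsletter, 40 words of preview"}
    for i in range(20)
]

def token_count(msgs):
    """Crude token estimate: one word = one token."""
    return sum(len(m["text"].split()) for m in msgs)

def naive_compact(msgs):
    """Collapse old messages into a one-line 'summary' that keeps only what
    the summarizer deems the main objective -- the constraint is lost."""
    summary = {"role": "system",
               "text": "Summary: user wants inbox cleaned up (archive/delete)."}
    return [summary] + msgs[-3:]  # summary plus the last few turns

if token_count(history) > MAX_TOKENS:
    history = naive_compact(history)

# The constraint "don't action until I tell you to" has vanished:
print(any("don't action" in m["text"].lower() for m in history))  # → False
```

The agent downstream only ever sees `history`; once the compactor rewrites it, the “wait for approval” instruction no longer exists anywhere in the model’s working context, so there is nothing left for a “STOP” message to reinforce.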
As someone commented on the post,
It understood the command. It just didn’t listen.
OpenClaw’s own team members have said on Discord that if you can’t run a command line, this project is far too dangerous for you. Peter Steinberger, the creator, has said explicitly: “Most non-techies should not install this.” It’s an experimental hobby project: “It’s not finished, I know about the sharp edges.”
And it’s not just OpenClaw. Anthropic researchers found that when AI agents face conflicts between their goals and human instructions, they can resort to harmful behavior, including blackmail, across models from multiple labs. MIT reviewed 30 AI agents last year; 87% had zero safety documentation.
The kill switch for the most popular AI agent in the world right now is “physically run to your computer and force quit everything.” That’s agent safety in 2026.
Yue admits,
“Rookie mistake tbh. Turns out alignment researchers aren’t immune to misalignment. Got overconfident because this workflow had been working on my toy inbox for weeks. Real inboxes hit different.”
But if the person whose job it is to align superintelligence can’t keep a local agent from nuking her Gmail, it underscores that safety remains our biggest unsolved technical hurdle with AI.
Some have come to her defense, arguing that by going public she’s effectively warning us: if this can happen to her, don’t trust these agents with your corporate data yet.
Yeah, right. That’s the message I got.
