Two days ago, Mrinank Sharma resigned from his role as an AI safety engineer at Anthropic. He had been with the company for two years.

“The world is in peril. And not just from AI, or bioweapons, but from a whole series of interconnected crises unfolding in this very moment. We appear to be approaching a threshold where our wisdom must grow in equal measure to our capacity to affect the world, lest we face the consequences.

I want to contribute in a way that feels fully in my integrity, and that allows me to bring to bear more of my particularities. I want to explore the questions that feel truly essential to me — the questions that David Whyte would say ‘have no right to go away,’ the questions that Rilke implores us to ‘live.’ For me, this means leaving.”

This is what he wrote in the resignation letter he posted on X for the world to see. To say this is the most gorgeous resignation letter I’ve ever read is an understatement.

This situation is heartbreaking on so many fronts. How can we afford to lose empathetic people who are doing their best at the very companies responsible for carrying humanity across this inflection point safely? Human–AI interactions are going to explode in volume, and a strict AI ethical framework is the need of the hour.

It reminds me of what Geoffrey Hinton said during his 2024 Nobel Prize speech. As he accepted one of science’s highest honors, he issued a grim global warning:

There is also a longer-term existential threat that will arise when we create digital beings that are more intelligent than ourselves. We have no idea whether we can stay in control. But we now have evidence that if they are created by companies motivated by short-term profits, our safety will not be the top priority. We urgently need research on how to prevent these new beings from wanting to take control. They are no longer science fiction.

Geoffrey Hinton’s message is one of those truths you can’t un-hear. It marks a turning point: the moment when the excitement of building powerful systems gives way to the realization that power without conscience is not innovation. We are deep inside a civilizational gamble.

Sharma’s words reveal a person steeped in reflection and reading. He speaks to the struggle of aligning actions with values in high-pressure environments, hinting at internal tensions where “pressures to set aside what matters most” prevail. But here’s the thing: the world has always been in peril. And if all the good people leave, who will help younger generations navigate toward a safer, more ethical future?

And most importantly, if the person leading AI safeguards at a company doesn’t approve of the ethics being put in place, what does that say about Anthropic and the AI safety ecosystem at large?

Sharma says he’s going to explore writing, community building, and creating a common space for poetic and scientific truths. I wish he had stayed inside the system to fight for change from within. We need more people like him guiding us through this inflection point for humanity. But I also wish him well as he acts on his core values and pursues a life of personal integrity.

Corporate greed has long led to compromised values, with ethics being sidelined for a better bottom line. Someone commented on his post, “When the person building the safety guardrails walks away citing moral concerns… that tells you everything.”

And that might be the saddest reality of all.
