All Eyes on AI and Its Impact on Children
Let’s talk about something that’s been gnawing at the edges of my mind lately, something that feels both urgent and deeply personal. As artificial intelligence (AI) weaves itself into the fabric of our lives, there’s a question that keeps surfacing, one we can’t afford to ignore: How do we ensure this powerful technology protects and uplifts the rights of children?
From the apps they use to learn, to the algorithms that curate their social media feeds, AI is already shaping the world our children are growing up in. And while that’s exciting, it’s also terrifying. Because with great power comes great responsibility. So, let’s dive into the latest transnational guidance on AI and children’s rights, and why this conversation matters more than ever.
Why AI and Children’s Rights Matter
Children are, by nature, vulnerable. They’re curious, impressionable, and still figuring out how the world works. And yet, they’re some of the most active users of technology. They’re tapping away on educational apps, gaming platforms, and social media, often without fully understanding the systems behind them.
Here’s the thing: these systems aren’t always designed with kids in mind. They’re built for efficiency, engagement, or profit – not necessarily for the well-being of a 10-year-old. And that oversight? It can lead to some pretty serious consequences. Privacy violations. Reinforced biases. A digital landscape that doesn’t always have their best interests at heart.
That’s why organizations like UNICEF, the European Commission, and the World Economic Forum are stepping in. They’re not just asking questions; they’re providing answers. And we need to pay attention.
Key Principles for AI and Children
So, what does it look like to build AI systems that respect and promote children’s rights? Let’s break it down:
1. UNICEF’s Policy Guidance on AI for Children (2021):
– Inclusion and Fairness: AI should work for all kids, no matter where they come from or what challenges they face.
– Privacy and Explainability: Kids have the right to know how their data is being used. If an AI system is making decisions about them, they deserve to understand how and why.
– Awareness and Education: We need to teach kids and adults about how AI works. Pilot programs, like the ones in the UK with the Scottish AI Alliance and Children’s Parliament, are showing us the way.
2. European Commission’s Guidance on AI and the Rights of the Child (2022):
– AI Minimization: If we don’t need AI, we shouldn’t use it. And when we do, we need to minimize the risks to kids.
– Transparency and Non-Discrimination: Algorithms should be clear, fair, and free from biases that could harm children.
– Case Studies: The guidance dives into real-world examples, like recommendation systems and conversational agents, to show what works and what doesn’t.
3. UN Human Rights Council’s Special Rapporteur on Privacy (2021):
– Respecting Established Conventions: AI must align with the UN’s existing conventions on children’s rights, especially when it comes to privacy.
– Addressing Privacy Concerns: Kids’ data is precious. We can’t let it be misused or exploited.
4. World Economic Forum’s AI for Children Toolkit (2022):
– FIRST Principles: AI systems should be Fair, Inclusive, Responsible, Safe, and Transparent.
– AI Labeling: Imagine a label on AI products, like a nutrition label, that tells you whether it meets these standards. That’s what the toolkit is proposing.
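To make the labeling idea a little more concrete, here’s a minimal sketch of what a machine-readable “nutrition label” for an AI product might look like, organized around the FIRST principles. The field names and ratings here are purely illustrative assumptions, not the toolkit’s actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class AILabel:
    """Hypothetical 'nutrition label' for an AI product aimed at children.

    Loosely inspired by the WEF toolkit's FIRST principles
    (Fair, Inclusive, Responsible, Safe, Transparent).
    Field names are illustrative only.
    """
    product_name: str
    intended_age_range: str               # e.g. "6-9"
    collects_personal_data: bool
    shares_data_with_third_parties: bool
    # One rating per FIRST principle, e.g. "meets" / "partial" / "does not meet"
    ratings: dict = field(default_factory=dict)

    def summary(self) -> str:
        # Render the label as plain text, the way a parent might read it.
        lines = [f"{self.product_name} (ages {self.intended_age_range})"]
        lines.append(f"Collects personal data: {'yes' if self.collects_personal_data else 'no'}")
        lines.append(f"Shares data with third parties: {'yes' if self.shares_data_with_third_parties else 'no'}")
        for principle, rating in self.ratings.items():
            lines.append(f"{principle}: {rating}")
        return "\n".join(lines)

label = AILabel(
    product_name="StoryBot Jr.",          # made-up product for illustration
    intended_age_range="6-9",
    collects_personal_data=False,
    shares_data_with_third_parties=False,
    ratings={"Fair": "meets", "Inclusive": "meets", "Responsible": "partial",
             "Safe": "meets", "Transparent": "meets"},
)
print(label.summary())
```

The point of a structured label like this is that it could be checked automatically by app stores or regulators, not just read by parents, which is exactly why the toolkit’s nutrition-label analogy is so appealing.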
The Challenge of AI “Hallucination”
Here’s something fascinating AND a little unsettling. AI has this ability to “hallucinate,” to generate creative but sometimes wildly inaccurate outputs. For tasks that need precision, this is a problem. But for creative endeavors? It’s a feature, not a bug.
For kids, this duality is both an opportunity and a risk. On one hand, AI can spark creativity, inspire learning, and open up new worlds. On the other, it can spread misinformation or reinforce harmful stereotypes. The challenge? Striking the right balance.
What Can Parents and Educators Do?
As AI becomes more pervasive, the responsibility falls on us – parents, educators, caregivers – to safeguard our kids. Here’s where to start:
1. Educate Yourself and Your Kids: Learn how AI works. Teach your kids about their digital rights and how to use technology responsibly.
2. Advocate for Ethical AI: Support policies and companies that prioritize children’s rights in AI development.
3. Use Trusted Tools: Choose AI-driven products that adhere to established guidelines, like those from UNICEF and the World Economic Forum.
4. Encourage Critical Thinking: Help kids question the information they get from AI systems. Teach them to think critically about where it comes from and whether it’s accurate.
The Bigger Picture: A Call to Action
AI isn’t going anywhere. It’s going to keep evolving, keep shaping the world our children grow up in. And that’s not inherently a bad thing, if we get it right.
By following transnational guidance and prioritizing children’s rights, we can ensure that AI becomes a force for good. This isn’t just about protecting kids. It’s about empowering them to thrive in a digital world.
So, I’ll ask you this: What steps are you taking to ensure AI benefits your children? Share your thoughts. Let’s start a conversation. Because together, we can build a future where technology and children’s rights go hand in hand.
– 0 –
The Digital Literacy Project: Disrupting humanity’s technology addiction habits one truth at a time.
Truth About Technology – A Digital Literacy Project