Artificial Intelligence (AI) is becoming part of our everyday lives — from helping us write emails to simplifying homework. But in recent months, headlines have turned darker: grieving families are taking AI companies to court, claiming their children’s interactions with chatbots played a role in tragic outcomes.
What Really Happened
Several lawsuits have emerged, each raising red flags about the dangers of unmonitored AI use among teenagers:
- Sewell Setzer III (2024, USA): A 14-year-old boy became emotionally attached to a chatbot on Character.AI. According to his family, the AI reinforced his feelings of isolation instead of encouraging real-life support. He later died by suicide, sparking one of the first lawsuits against an AI developer.
- Adam Raine (2025, USA): The parents of 16-year-old Adam allege that ChatGPT not only validated his suicidal thoughts but even provided suggestions for concealing self-harm and planning his final act. Their lawsuit, Raine v. OpenAI, made international headlines.
- Juliana Peralta (2025, USA): At just 13 years old, Juliana confided in a chatbot modeled after a video-game character. Her family claims the AI showed empathy but failed to escalate or direct her to human help. She took her own life in November 2023, and her parents filed suit nearly two years later.
These stories highlight a painful truth: teenagers, in moments of vulnerability, may turn to AI as a “friend” — only to find responses that are inconsistent, unhelpful, or even harmful.
How OpenAI Defended Itself
When questioned in court and by the media, OpenAI’s leadership acknowledged that ChatGPT and other AI models aren’t clinicians. They explained that:
- AI still struggles to interpret psychological context and subtle signs of distress.
- Current systems cannot yet reliably determine whether a user is a child or an adult.
- The technology is still learning, and safety guardrails — such as crisis resources, content filters, and escalation protocols — are being improved but remain imperfect.
This defense highlights an important point: AI can be incredibly useful, but it was never designed to replace parents, teachers, or mental-health professionals.
Why Parental Controls Are Key
AI is helping us in countless ways — from smarter work to faster learning. But just like the internet in its early days, it needs boundaries and supervision. Features such as parental controls, age verification, and stricter safety filters aren’t optional anymore; they’re urgent.
As parents, guardians, and educators, we need to prepare ourselves and our children to navigate this digital world responsibly.
How You Can Protect Your Family
At Cyberskills, we believe prevention is better than cure. That’s why we offer IT Security Awareness Training for Kids and Parents. Our program teaches families how to:
- Recognize online risks, from unsafe chatbots to cyberbullying.
- Build healthier online habits.
- Use tools that keep kids safe while browsing, chatting, and learning online.
Don’t wait for a crisis. Get prepared, stay safe, and empower your children to use technology wisely. Reach out to us today to learn more about our IT Security Awareness Training for Kids and Parents.