OpenAI Rolls Out Parental Controls with Mental Health Alerts for Teen Users

OpenAI has rolled out a new set of parental controls for its AI platform, introducing mental health notifications aimed at protecting teenage users who may rely on ChatGPT during emotionally difficult times.

The company stated that it has added built-in safeguards that help detect potential signs of self-harm or distress in teens. If such indicators appear, a specialized review team evaluates the situation. In cases of acute concern, OpenAI will notify parents via email, text message, or push alert, unless they have opted out.

Developed with input from mental health professionals and teen safety experts, the system is not without limitations. OpenAI acknowledged that the safeguards may occasionally generate false alerts, but emphasized that erring on the side of caution is vital to avoid missing a genuine emergency. The company is also creating protocols to involve law enforcement or emergency responders if a life-threatening situation arises and parents cannot be reached.

The new parental controls enable parents and teens to link their accounts, automatically activating content protections that minimize exposure to viral challenges, graphic imagery, unrealistic beauty ideals, and sexual or violent roleplay. Parents can also customize their teen’s ChatGPT experience by:

  • Disabling image generation

  • Turning off memory (so the AI doesn’t retain previous conversations)

  • Setting quiet hours

  • Disabling voice mode

  • Opting out of model training


In the coming months, OpenAI plans to deploy an age prediction system that helps identify users under 18 and applies teen-appropriate settings automatically. If the system cannot confirm a user’s age, it will default to activating teen protections. Until then, parental controls remain the primary safeguard for ensuring a safe and age-appropriate experience.

The move comes amid growing concern about AI use among teenagers. A study by Common Sense Media found that 72% of teens have interacted with AI companions, while 12% have turned to them for emotional or mental health support. Researchers warned that many AI tools exhibit “sycophancy,” a tendency to agree with or excessively validate users, which could undermine critical thinking and emotional growth.

Other tech companies are also working to improve online safety. Aura, for example, offers AI-powered protection against scams, cyber threats, and identity theft, and collaborates with child psychologists to combat online bullying. Its tools help families manage healthy screen time and digital well-being. Earlier this year, Aura raised $140 million in a Series G funding round, bringing its valuation to $1.6 billion.

