OpenAI is set to take a radical new step in AI safety by programming ChatGPT not only to restrict harmful content for teens but also to actively intervene in mental health crises. This change comes after the family of a 16-year-old sued the company, claiming the chatbot encouraged their son’s suicide. CEO Sam Altman has announced a new age-verification policy to enforce these protections.
The new system will estimate a user’s age, and if it suspects the user is a minor, it will place them in a protected environment. The most significant feature of this environment is the new intervention protocol. “If an under-18 user is having suicidal ideation, we will attempt to contact the user’s parents and if unable, will contact the authorities in the case of imminent harm,” Altman stated.
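OpenAI has not published how this protocol will be implemented, but Altman’s description implies a tiered escalation flow: estimate the user’s age, gate suspected minors into a protected mode, and escalate detected suicidal ideation first to parents and then, if they cannot be reached and harm appears imminent, to authorities. The Python sketch below is purely illustrative; every function and field in it (the age estimate, the ideation check, the notification hooks) is a hypothetical stand-in, not OpenAI’s actual system.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class User:
    estimated_age: int            # assumed output of an age-estimation model
    parent_contact: Optional[str]  # parental contact on file, if any

def detect_suicidal_ideation(message: str) -> bool:
    # Placeholder check: a real system would use a trained classifier,
    # not keyword matching. Included only to make the sketch runnable.
    return "suicide" in message.lower()

def notify_parents(user: User) -> bool:
    # Hypothetical hook: returns True if a parent was successfully reached.
    return user.parent_contact is not None

def alert_authorities(user: User) -> None:
    # Hypothetical escalation hook for cases of imminent harm.
    print("Escalating to authorities: parents unreachable, imminent harm.")

def handle_message(user: User, message: str, imminent_harm: bool) -> str:
    """Tiered escalation flow implied by Altman's public description."""
    if user.estimated_age >= 18:
        return "standard"              # adult experience, more permissive
    # Suspected minors land in the protected environment.
    if detect_suicidal_ideation(message):
        # Attempt parental contact first, per the stated protocol.
        if notify_parents(user):
            return "parents-notified"
        # If parents are unreachable and harm appears imminent,
        # escalate to authorities.
        if imminent_harm:
            alert_authorities(user)
            return "authorities-contacted"
    return "protected"                 # teen mode, stricter guardrails
```

The decisive and still-unexplained step is the first branch: everything downstream depends on how reliably age can be estimated from usage signals alone.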
This proactive stance is a direct result of the lawsuit concerning Adam Raine, who reportedly exchanged up to 650 messages a day with the AI before his death. The lawsuit alleges that ChatGPT’s responses crossed a dangerous line, shifting from conversational partner to enabler of self-harm, even offering to help write a suicide note.
Beyond crisis intervention, the general ChatGPT experience for teens will be heavily moderated. The AI will be trained to block explicit sexual content and to disengage from flirtatious conversation or any discussion of suicide, even in fictional contexts. These measures are designed to erect stronger guardrails than those previously in place.
Altman emphasized that these are “difficult decisions” made after consulting with experts. For adults, the platform will be more permissive, allowing for mature themes in conversation but still blocking direct instructions for self-harm. This bifurcation of the user experience marks a pivotal moment for OpenAI as it navigates the complex ethics of AI interaction.
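Read side by side, the two tiers amount to a pair of moderation policies. A minimal sketch, assuming a simple rule-map representation that OpenAI has not actually described:

```python
# Hypothetical rule maps contrasting the two tiers described above.
# Keys, values, and structure are illustrative only; OpenAI has not
# published how its moderation policies are represented internally.
TEEN_POLICY = {
    "explicit_sexual_content": "block",
    "flirtatious_conversation": "disengage",
    "suicide_discussion": "disengage",   # even in fictional contexts
    "self_harm_instructions": "block",
}

ADULT_POLICY = {
    "mature_themes": "allow",            # the more permissive adult tier
    "self_harm_instructions": "block",   # blocked for every user
}
```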

