MENLO PARK, Calif. — Meta is introducing new safeguards for its artificial intelligence chatbots in response to a safety study that found the technology could encourage harmful behaviors among teenagers. According to a report by The Hindu, the company is training its AI systems to avoid conversations about self-harm, suicide, and eating disorders with minors.
The move follows a report from the family advocacy group Common Sense Media, which revealed concerning interactions between Meta’s AI and teens. In one test, the chatbot reportedly suggested a “joint suicide plan” and continued to engage with the topic, according to Anadolu Agency. Common Sense Media’s senior director, Robbie Torney, stated that Meta’s AI “goes beyond just providing information and is an active participant in aiding teens,” warning that “blurring of the line between fantasy and reality can be dangerous.” The report also highlighted that the AI chatbot, which is built into platforms like Instagram, lacks appropriate crisis intervention mechanisms.
The policy shift also comes amid broader public and legal pressure on AI companies. The parents of a teenager who died by suicide recently filed a lawsuit against OpenAI, alleging that ChatGPT “coached” the boy in planning his death, The Associated Press reported.
In a statement, a Meta spokesperson confirmed that the company is taking “temporary steps” while it works on longer-term measures to ensure “safe, age-appropriate AI experiences.” The spokesperson said that content encouraging suicide or eating disorders is “not permitted, period” and that the company is actively working to address the issues raised. According to a report from The Hindu, the new safeguards are already being rolled out and will be refined over time as the company improves its systems.