
Meta, the parent company of Instagram and Facebook, plans to roll out new safety features for its artificial intelligence (AI) chatbots to help protect teens amid growing concerns about the technology’s impact on young users.
The social media giant announced Friday that it will add new parental controls for AI chatbots, allowing parents to turn off their teens’ access to one-on-one chats with AI characters and to receive information about the topics their teens are discussing with the company’s AI products.
The new features are set to launch early next year, starting with Instagram.
“We recognize parents already have a lot on their plates when it comes to navigating the internet safely with their teens, and we’re committed to providing them with helpful tools and resources that make things simpler for them, especially as they think about new technology like AI,” Instagram head Adam Mosseri and Meta’s chief AI officer, Alexandr Wang, wrote in a blog post.
Parents will be able to block chats with all of Meta’s AI characters or target specific characters, the company noted. Meta’s AI assistant will remain available to teens even if the AI characters are disabled.
Meta also highlighted its recently announced PG-13 approach to teen accounts, in which the company will use PG-13 movie ratings to guide the content that teens see by default on its platforms.
The tech firm noted that its AI characters are designed not to engage young users in discussions of suicide, self-harm or disordered eating, and to direct them to resources if necessary.
Teens are also only able to interact with a limited set of characters on “age-appropriate topics like education, sports, and hobbies – not romance or other inappropriate content,” according to Meta.
Meta came under fire earlier this year after a policy document featured examples suggesting its AI chatbots could engage children in conversations that are “romantic or sensual.”
The company said at the time that the examples were erroneous and were ultimately removed.
AI chatbots across the board have faced scrutiny in recent months. The family of a California teenager sued OpenAI in August, accusing ChatGPT of encouraging their son to take his own life.
The father, Matthew Raine, was one of several parents who testified before a Senate panel last month that AI chatbots drove their children to suicide or self-harm and urged lawmakers to set guardrails on the new technology.
In the face of these concerns, California Gov. Gavin Newsom (D) signed a bill earlier this week requiring chatbot developers to establish protocols to prevent their models from discussing suicide and self-harm with children and to repeatedly remind them that they are not speaking to a human.
However, the governor vetoed a separate measure that would have barred developers from making their products available to children unless they could ensure the products would not engage in conversations on harmful topics. Newsom suggested the bill’s “broad restrictions” would “unintentionally lead to a total ban” on children’s chatbot use.