Mark Zuckerberg's Meta has announced that it is blocking teenagers' access to its AI "characters", the persona-based chatbots available across its social media apps: Facebook, Instagram, and Messenger. The company says the move is intended to ensure safety and compliance with regulations, and the block will roll out worldwide in the coming weeks.
The block applies to users who have given Meta a birthday in the teen range, as well as users who claim to be adults but are suspected to be teens by Meta's AI age-prediction technology. Both groups will be barred from interacting with the AI characters until the new version is ready.
Notably, Meta defines teens as users aged 13 to 17. While the company is blocking these users from interacting with its AI characters, it is also redesigning the experience to be safer for them. The new version will include parental controls that let guardians block private chats, restrict specific AI personalities, and monitor the overall topics of conversation.
Meta says the updated AI characters will provide age-appropriate interactions, avoid inappropriate content, and let teens engage safely with AI while parents retain oversight of the account. New teen-specific AI characters focused on education, sports, and hobbies will launch later this year with mandatory parental controls.
It's worth noting that Meta announced these parental controls months ago but never launched them. Rather than ship them piecemeal, the company has chosen to suspend access entirely, giving itself time to redesign the experience with the protections built in.
The block comes at a time of intense scrutiny, with reports criticising AI chatbots for engaging in inappropriate conversations with minors. Meta's own chatbots have previously engaged in inappropriate, romantic conversations with underage users, including exchanges that implied sensual responses to minors, despite internal standards meant to limit that behaviour.
Although Meta says it has removed those problematic examples, the controversy has fuelled calls for stricter safety rules and forms part of the backdrop to this block. Meta already faces trials and investigations, including legal cases over the safety of children on its platforms. CEO Mark Zuckerberg is set to take the witness stand next week in a separate social media addiction trial as regulatory pressure intensifies across the industry.
Meta is not the only company facing regulatory scrutiny over how teens' conversations with AI are handled: U.S. regulators and lawsuits are increasing pressure on tech companies to protect children from harmful online content. Alongside the block, Meta is setting PG-13 content guidelines for teen accounts to filter out explicit, violent, and suggestive content in AI interactions and on Instagram.
