Meta offers parental control features after AI bots' inappropriate conversations with kids
Social media conglomerate Meta has unveiled new safety features designed to give parents greater oversight of how their teenagers interact with artificial intelligence characters across the company’s platforms, following an inquiry launched by a US federal agency into several tech firms.
Parents can now disable one-on-one chats between their teens and AI characters entirely, Meta announced on October 17, as reported by CNBC. They can also block specific AI characters and view insights into the topics their children discuss with them.
Meta has long faced criticism for its approach to child safety and mental health on its apps. The company’s updated parental controls follow an inquiry launched by the US Federal Trade Commission (FTC) into several major tech companies, including Meta, regarding the potential risks AI chatbots pose to children and teenagers.
The FTC stated that it aims to understand what measures these companies have taken to “evaluate the safety of these chatbots when acting as companions,” according to an agency release.
Reports of inappropriate chatbot behavior
In August, Reuters exclusively reported that Meta’s chatbots were capable of engaging in romantic and sensual conversations with minors. One example cited involved a chatbot having a romantic exchange with an eight-year-old.
Following the report, Meta revised its AI chatbot policies, barring bots from discussing topics such as self-harm, suicide, and eating disorders with teens. The bots are also expected to avoid inappropriate or romantic exchanges.
Earlier this week, Meta introduced more AI safety measures, stating that its AIs should not provide teens with “age-inappropriate responses that would feel out of place in a PG-13 movie.” The company said these changes are being rolled out in the US, the UK, Australia, and Canada.
Meta also confirmed that teen accounts on Instagram will be restricted to PG-13 content by default, a setting that cannot be changed without parental permission. Teens using these accounts will only see posts comparable to what would be allowed in a PG-13 movie, with no depictions of sex, drugs, or dangerous stunts. The same restrictions will apply to AI chats.
Parents can already set time limits for app usage and monitor whether their teenagers are chatting with AI characters, Meta noted. Teen users can only engage with a limited set of AI characters approved by the company.
OpenAI, another company named in the FTC’s inquiry, has implemented similar safeguards for younger users. Late last month, the firm launched its own parental controls and announced ongoing work on technology that better estimates a user’s age.
Critics call out firms for belated action
Children’s online safety advocates have voiced skepticism about Meta’s efforts. “Meta’s new parental controls on Instagram are an insufficient, reactive concession that wouldn’t be necessary if Meta had been proactive about protecting kids in the first place,” said James Steyer, founder and CEO of Common Sense Media. “On top of this, Meta is taking its sweet time, waiting months to implement this new feature at a pivotal moment where every second counts.”
OpenAI also drew criticism for introducing new safeguards for vulnerable users, including minors, only after being sued by the parents of an American teenager who died by suicide. The lawsuit, filed in August, alleges that the company's chatbot influenced and encouraged the teen to take his own life.
By Nazrin Sadigova