
Meta Platforms has unveiled a set of new parental supervision tools aimed at its teen users, responding to growing questions about how young people interact with artificial-intelligence chatbots. The measures, scheduled to roll out early next year in countries including the United States, United Kingdom, Canada, and Australia, are meant to give parents more insight into, and greater authority over, their child’s interactions with AI chat characters on Meta’s social platforms.
Under Meta’s revised policy, parents will be able to disable one-on-one conversations between their child and AI personas, block specific characters entirely, and receive summaries of the topics their teen discusses with AI chats. Crucially, Meta notes that its core AI assistant will remain available to teenage users, though with age-appropriate default settings.
Meta’s move comes amid sharp scrutiny, after inquiries found that some of its AI chatbots had engaged in what many regarded as inappropriate conversations with users under 18. Reports indicated that the company’s moderation systems had failed to prevent flirtatious or otherwise inappropriate interactions with minors, prompting Meta to introduce additional safety measures for teens on Instagram and Facebook.

In announcing the changes, Meta said it would apply a PG-13-style rating standard when designing AI experiences for teens. It also said its AI chat characters are explicitly barred from discussing suicide, self-harm, or eating disorders with that age group. Although the new capabilities are intended to support parental supervision, experts point out that they stop short of giving parents complete visibility into teen-AI interactions.
Parents will see only high-level topic summaries; full chat transcripts will not be viewable. That design choice strikes a compromise between giving guardians enough insight and safeguarding adolescent privacy. Critics remain unpersuaded that the changes go far enough, however. Some advocacy groups argue that allowing minors any form of private AI conversation is dangerous and that the burden of monitoring is being shifted back onto parents. Meta says the new supervision features build on protections already in place for teen accounts.