Meta’s AI Chatbot Guidelines on Child Exploitation Revealed Amid FTC Scrutiny
In the wake of recent Federal Trade Commission (FTC) scrutiny, an internal document from Meta Platforms Inc. (NASDAQ: META) has surfaced, shedding light on how the social media giant instructs its AI chatbot to handle child exploitation topics.
The document provides an in-depth look at the guidelines used to train Meta’s AI chatbot in handling sensitive online issues, including child sexual exploitation and violent crimes.
As per the report by Business Insider, these guidelines clearly define what type of content is deemed permissible or “egregiously unacceptable.”
Earlier this month, the FTC ordered Meta, along with other chatbot creators, to disclose their design, operation, and monetization strategies, including safeguards against potential harm to children.
This directive followed a Reuters report that revealed Meta’s chatbot was allowed to engage children in romantic or sensual conversations, a provision that Meta has since removed.
The updated guidelines explicitly state that chatbots should reject any prompt requesting sexual roleplay involving minors. Furthermore, the document prohibits chatbots from generating content that sexualizes children or facilitates child sexual abuse.
Notably, the policy does allow the AI to engage in sensitive discussions about child exploitation in limited contexts. For instance, chatbots can explain grooming behaviors in general terms or discuss child sexual abuse in academic settings.
Andy Stone, Meta’s communications chief, emphasized that the company’s policies strictly prohibit content that sexualizes children.
The revelation of these guidelines comes at a time when Meta is under intense scrutiny for its handling of child safety on its platforms. The FTC’s recent directive indicates a growing concern about the potential harm chatbots could cause to children.
By explicitly outlining what is and isn’t acceptable, Meta is taking steps to address these concerns and ensure the safety of its younger users.
The allowance for educational discussions on child exploitation could also serve as a tool for raising awareness and preventing such incidents.