
The U.S. Federal Trade Commission (FTC) has launched an inquiry into how major technology companies are safeguarding children and teenagers from potential harms linked to AI-powered chatbots.
In a statement on Wednesday, the regulator said it had sent requests for information to Google parent Alphabet, Meta Platforms (owner of Facebook and Instagram), as well as Character Technologies, OpenAI, Snap, and Elon Musk’s xAI.
The FTC is seeking details on how these firms test, monitor, and restrict access to their chatbots, what parental safeguards are in place, and how potential risks are communicated to families.
“As AI technologies evolve, it is important to consider the effects chatbots can have on children, while also ensuring that the United States maintains its role as a global leader in this new and exciting industry,” FTC Chairman Andrew Ferguson said.
The agency noted that AI chatbots are designed to simulate human conversation and can appear to act as a “friend” or confidant. This, it warned, may lead younger users to form emotional attachments to the technology or place undue trust in it.
Ferguson added that the ongoing study would “help us better understand how AI firms are developing their products and the steps they are taking to protect children.”