The Federal Trade Commission (FTC) is ordering seven AI chatbot companies to provide information about how they assess the effects of their virtual companions on kids and teens.
OpenAI, Meta, its subsidiary Instagram, Snap, xAI, Google parent company Alphabet, and the maker of Character.AI all received orders to share information about how their AI companions make money, how they plan to maintain their user bases, and how they try to mitigate potential harm to users. The inquiry is part of a study, rather than an enforcement action, meant to shed light on how tech firms evaluate the safety of their AI chatbots. Amid a broader conversation about kids' safety on the internet, the risks of AI chatbots have emerged as a particular cause for concern among many parents and policymakers.