Tammy Duckworth

03/12/2026 | Press release | Archived content

Duckworth, Gillibrand Demand Trump Administration Protect Our Children from Risky, Age-Inappropriate AI Toys

March 12, 2026

[WASHINGTON, D.C.] - As "AI toys" become increasingly common, U.S. Senators Tammy Duckworth (D-IL) and Kirsten Gillibrand (D-NY) are urging the Federal Trade Commission (FTC) to launch an investigation into whether Mattel and other toy companies are unfairly and deceptively marketing age-inappropriate toys to parents and putting our youngest children at risk. Earlier this year, artificial intelligence powerhouse OpenAI issued an explicit warning that ChatGPT is not meant for children under age 13 and urged users to exercise caution regarding children's exposure. In the letter, the Senators urge the FTC to use its full authority to crack down on companies that dismiss OpenAI's warning and deceptively sell these toys as "safe" or "academic" when in fact they are known to have had inappropriate conversations with users.

In the letter, the Senators highlighted how real-world testing sheds light on why OpenAI advises users that ChatGPT is not meant for children under age 13, specifically that "some AI-enabled plush bear toys provided child users with specific information on where to find dangerous objects in a home, such as knives, and how to light a fire using a match, while another AI-enabled plush bunny engaged in sexually explicit conversations with a user."

The Senators continued: "Although the manufacturer of the plush bear subsequently suspended sales of all AI toys and took steps to address these issues, the very fact that a company brought a child's toy to market with inappropriate and potentially dangerous features affirms the alarming reality that this emerging industry is failing to ensure, at a minimum, their products are as appropriate for children as advertised."

Underscoring the urgent need to protect our children, the Senators concluded: "It is disturbing that companies are failing to take proactive measures to accurately market their products and ensure their compliance with the law before they are put in the hands of children. Childhood is an incredibly important and formative period for healthy development. The FTC has a responsibility to make sure companies protect children's data privacy and exercise caution in making unsubstantiated claims about products intended for young children. That is why we strongly urge the FTC to investigate whether AI-enabled toy companies are engaging in unfair and deceptive practices that mislead Americans, to investigate potential failures to protect children's privacy and to hold companies accountable for any such violations."

A copy of the letter can be found on the Senators' website and below:

Dear Chairman Ferguson:

We support the Federal Trade Commission's (FTC) decision to conduct a Section 6(b) investigation into generative artificial intelligence (AI) companion products. We are particularly concerned about toys that use AI chatbots, are marketed to young children, are connected to the internet and are equipped with speakers and microphones. The sellers of these toys often claim the toys can provide meaningful, educational benefit for young children, despite the lack of evidence to back up such claims. We strongly urge the FTC to use its full authority against any company that engages in unfair or deceptive practices regarding the use or inclusion of AI in toys, including seeking a preliminary injunction to halt the deceptive or unfair marketing of AI-embedded toys, as well as investigating and sanctioning companies for violations of the Children's Online Privacy Protection Act (COPPA).

A quick search for "AI toys" on Amazon returns over 2,000 results, with prices ranging from under $10 to $999, and the number of AI toys is expected to grow. Large Language Models (LLMs) are the basis of the AI technology embedded in children's toys. LLMs, particularly OpenAI's generative pre-trained transformer (GPT) models, were never designed for children. In fact, OpenAI responded to the question, "Is ChatGPT safe for all ages?" with a clear answer stating that ChatGPT is not meant for children under 13 and noting both the potential for the model to produce inappropriate output and the need for parents and educators to exercise caution in exposing children to ChatGPT.

Real-world testing sheds light on why OpenAI advises users that ChatGPT is not meant for children under age 13 and that users should exercise caution regarding a child's exposure to its chatbot (along with troubling questions of why OpenAI would agree to a strategic collaboration with Mattel, a leading global toy and family entertainment company, to bring "the magic of AI to Mattel's iconic brands"). Specifically, some AI-enabled plush bear toys provided child users with specific information on where to find dangerous objects in a home, such as knives, and how to light a fire using a match, while another AI-enabled plush bunny engaged in sexually explicit conversations with a user. Although the manufacturer of one plush bear subsequently suspended sales of all AI toys and took steps to address these issues, the very fact that a company brought a child's toy to market with inappropriate and potentially dangerous features affirms the alarming reality that this emerging industry is failing to ensure, at a minimum, that its products are as appropriate for children as advertised.

Sellers of AI toys often make bold, unproven claims about their products. These include statements that the toys function as a teacher, provide "academic tutoring," help develop problem-solving abilities and improve "creativity and critical thinking." Companies are additionally advertising AI toys as "a safe, gentle friend" or "companion" to children. In marketing these products, claims abound that they independently match children's "emotional needs" by helping them "express feelings and build confidence" or enable "daily love." However, toy makers' claims are largely unsubstantiated and could therefore be deemed unfair and deceptive to the consumer.

While it remains possible that, in carefully planned use cases, certain LLM applications designed with child-centered learning principles may benefit a child's learning and development, there is a dearth of information on how AI-enabled children's products affect child cognition, learning and social development in the near and long term. Similarly, LLMs foster a mere illusion of possessing an inner life or self-awareness, but they are not human and do not possess human conscience, judgment or values. There is little to no disclosure of how, or whether, these AI-enabled toy companies incorporate evidence-based practices into the LLMs they install in otherwise normal-looking children's toys. To the contrary, at a recent hearing before the Senate Committee on Commerce, Science and Transportation, Dr. Jenny Radesky, Associate Professor of Pediatrics at the University of Michigan Medical School, and Dr. Jean M. Twenge, Professor of Psychology at San Diego State University, noted that AI companies exploit young children's trust and push children to spend more time interacting with AI rather than on essential, real-world learning and socializing.

Fair and transparent marketing of these toys should clearly indicate that they cannot and do not replace human teaching and interaction.

Finally, we have concerns about data privacy protections under COPPA. AI-enabled toys necessarily collect and record information about the children who use them, in part through cameras and microphones. Despite an abundance of products available on the market today, sellers provide little transparency regarding the collection, transmission, retention and use of children's data. In its safety audit, the U.S. PIRG Education Fund describes the toys' misrepresentation of data practices to the user and instances of failure to obtain consent for data collection, in violation of COPPA. The toys' use of Wi-Fi, Bluetooth, third-party AI chatbots and text-to-speech functions introduces further significant concerns about the security of children's data against bad actors.

It is disturbing that companies are failing to take proactive measures to accurately market their products and ensure their compliance with the law before they are put in the hands of children. Childhood is an incredibly important and formative period for healthy development. The FTC has a responsibility to make sure companies protect children's data privacy and exercise caution in making unsubstantiated claims about products intended for young children. That is why we strongly urge the FTC to investigate whether AI-enabled toy companies are engaging in unfair and deceptive practices that mislead Americans, to investigate potential failures to protect children's privacy and to hold companies accountable for any such violations.

-30-