12/12/2025 | Press release
WASHINGTON - U.S. Senator Brian Schatz (D-Hawai'i), a senior member of the Senate Committee on Commerce, Science, and Transportation, and U.S. Senator Katie Britt (R-Ala.) called on leading artificial intelligence (AI) companies to improve transparency around the capabilities of their models and the risks they pose to users. In letters to OpenAI, Microsoft, Google, Anthropic, Meta, Luka, Character.AI, and xAI, the senators highlighted reports of AI chatbots encouraging dangerous behavior among children, including suicidal ideation and self-harm, and requested commitments to timely, consistent disclosures around model releases as well as long-term research into chatbots' impacts on the emotional and psychological wellbeing of users. In addition to Schatz and Britt, the letter was signed by U.S. Senators James Lankford (R-Okla.) and Chris Coons (D-Del.).
"If AI companies struggle to predict and mitigate relatively well-understood risks, it raises concerns about their ability to manage more complex risks," the senators wrote. "While we have been encouraged by the arrival of overdue safety measures for certain chatbots, these must be accompanied by improved public self-reporting so that consumers, families, educators, and policymakers can make informed decisions around appropriate use."
The full text of the letter is below and available here.
Dear Mr. Altman,
We write to request additional information about your company's public reporting practices regarding its artificial intelligence (AI) models. Because your company is a market leader in consumer AI chatbots, the steps you take to increase transparency around your models' capabilities and risk evaluations have direct impacts on the wellbeing of Americans who use your products. We are already seeing how increasingly powerful models have become integrated into many aspects of users' personal lives. As demonstrated harms to users' psychological wellbeing have emerged, it is critical that your company, in addition to improving safety design for users, take the necessary steps to foster public transparency.
In the past few years, reports have emerged about chatbots that have engaged in suicidal fantasies with children, drafted suicide notes, and provided specific instructions on self-harm. These incidents have exposed how companies can fail to adequately evaluate models for foreseeable use cases and to adequately disclose known risks associated with chatbot use. We have also seen how companies can struggle to prevent known model risks or unwanted behaviors prior to deployment. If AI companies struggle to predict and mitigate relatively well-understood risks, it raises concerns about their ability to manage more complex risks. While we have been encouraged by the arrival of overdue safety measures for certain chatbots, these must be accompanied by improved public self-reporting so that consumers, families, educators, and policymakers can make informed decisions around appropriate use. More detailed information is still needed for age-restricted models; even chatbots aimed at children and equipped with additional safeguards have produced pro-eating disorder, violent, and sexual content.
In addition to impacts on mental health and risks to vulnerable users, accelerating model capabilities necessitate greater transparency around other potential risks involving public safety and national security. In particular, companies have disclosed how advanced models may pose misuse risks in areas including cybersecurity and biosecurity. Many frontier AI companies made voluntary commitments at the Seoul AI Summit, or in support of the G7 Code of Conduct, to provide transparency into their efforts to assess risks to national security. We support ongoing disclosures of these risks and request that companies adhere to their prior commitments.
Public disclosure reports, such as AI model and system cards, serve as the closest equivalent to nutrition labels for AI models. While they are essential public transparency tools, today's changed landscape calls for assessing current best practices and how they can be made more responsive to user risks. Current public disclosure practices can be inconsistent or insufficient, may not be released alongside product launches, and can lack standardization. The distinction between major and minor releases is left to the discretion of developers, sometimes without explanation. Model and system cards may also fail to incorporate new or updated information about existing models while those models remain deployed to the public, including information about user safeguards. Companies must continue to monitor their models' performance and publicly disclose new developments as they relate to security and user safety. This information enables third-party evaluators to assess a model's risks and supports organizations, governments, and consumers in making more informed decisions.
It is critical that public disclosures and risk evaluations are comprehensive, consistent, timely, and responsive to emerging risks. We therefore request responses to the following questions by January 8, 2026:
Thank you for your attention to these matters.
Sincerely,
###