02/26/2026 | Press release | Archived content
A version of the following public comment was submitted to both the Washington State Senate and the Washington State House regarding House Bill 2225 and Senate Bill 5984 on February 26, 2026.
We share the sponsors' goal of protecting minors from harmful interactions with artificial intelligence (AI) companion chatbots, but in its current form, House Bill 2225 [and Senate Bill 5984] includes vague definitions and requirements that would make compliance a guessing game.
[Both bills] push chatbot platform operators toward age verification by imposing heightened duties once an operator "knows" a user is a minor, but the legislation is vague about the steps a company should take to determine the age of its users. This ambiguity will likely incentivize companies either to over-collect personal information to reduce liability risk or to restrict all users from accessing certain lawful services.

Both bills also rely on subjective standards for what content triggers liability, such as "suggestive dialogue" and "excessive praise designed to foster emotional attachment." Because these terms are not defined, they are likely to be litigated as questions of tone, context, and intent. And because violations are enforced through the Consumer Protection Act, leaving these lines to be drawn by regulators and courts invites inconsistent outcomes and encourages overbroad restrictions that can sweep in even benign interactions.
To avoid these pitfalls, our team recommends that policymakers prioritize clearly defined, objective, and testable requirements focused on concrete product features and safeguards rather than subjective judgments about the nature of conversational content. For example, the bills' requirements that a chatbot disclose it is not human and that operators maintain a protocol for detecting and responding to suicidal ideation or self-harm are far more concrete than attempts to regulate tone and emotional affect.
As an additional safeguard for fair enforcement, we recommend adding a narrow affirmative defense for good-faith operators who can demonstrate that they provided the required AI disclosures, maintained and implemented the required crisis protocol, and made reasonable efforts to prevent the system from falsely presenting itself as human. This safe harbor would preserve strong deterrence against reckless or repeated misconduct while reducing litigation-driven overcompliance and encouraging companies to invest in the very safeguards the legislature is trying to promote.
In our view, reasonably attainable burdens of proof; clear, objective requirements focused on concrete product features; and a narrow affirmative defense for demonstrable good-faith compliance will foster the kind of environment that protects minors and all users from harmful or misleading AI companion chatbot interactions while preserving innovation and consumer choice.