11/15/2024 | Press release
Highlights
Proposed amendments to federal rules of evidence address authentication of evidence generated by artificial intelligence (AI)
If an AI output is offered as evidence, and that type of evidence would traditionally have required an expert witness's testimony, it would be subject to a Rule 702 analysis under the new rule
To authenticate evidence generated by AI, the proponent of the evidence would need to provide sufficient detail on the training data, the AI program used, and the reliability of the AI outputs
If the opponent of a piece of evidence can show that a jury reasonably could find it was altered or fabricated by AI, the evidence would be admissible under the new rule only if it is "more likely than not authentic"
The U.S. Courts Advisory Committee on the Federal Rules of Evidence has proposed amendments to the rules to address the use of artificial intelligence (AI) in litigation. The proposed amendments would expand upon Rule 901 (Authenticating or Identifying Evidence) and would create a new rule - Rule 707, "Machine-generated Evidence."
The proposed amendments are included in the Agenda Book for the Committee's November meeting at pages 269-271.
Changes to Rule 901
Rule 901(a) provides that "to satisfy the requirement of authenticating or identifying an item of evidence, the proponent must produce evidence sufficient to support a finding that the item is what the proponent claims it is." Subsection (b) then provides specific examples of the types of evidence that satisfy the requirements of section (a).
The proposed amendments would add language to the list of examples that describes what is needed to demonstrate the authenticity of evidence that is "generated by artificial intelligence." Under the amended rule, the proponent of such evidence would need to produce evidence that, among other things, "(i) describes the training data and software or program that was used; and (ii) shows that they produced reliable results in this instance."
Additionally, the proposed amendment adds a new section - subsection (c) - to directly address "deepfakes" and the burden for advancing or opposing evidence that is suspected of being "altered or fabricated, in whole or in part, by artificial intelligence." This section would include a two-step test, with a shifting burden, when the opponent of a piece of evidence alleges alteration or fabrication by artificial intelligence.
Initially, the opponent of the evidence must demonstrate "to the court that a jury reasonably could find" that the evidence has been altered or fabricated. Upon such a showing, the burden shifts to the proponent, and the evidence is "admissible only if the proponent demonstrates to the court that it is more likely than not authentic."
New Rule 707
The proposed amendments also seek to subject AI outputs to the same standard used to assess the admissibility of expert witness testimony, namely Rule 702. Under proposed Rule 707, "[w]here the output of a process or system would be subject to Rule 702 if testified to by a human witness, the court must find that the output satisfies the requirements of Rule 702(a)-(d)."
For example, a damages expert in a business dispute would normally look at factors relating to the business's performance and apply those figures through a formula. The expert would then testify to the reasonableness and reliability of the methodology. AI, however, cannot testify for itself as to how it arrives at its output.
Therefore, if AI is used to calculate the final damages amount, the proponent would need to demonstrate that: (a) the output would help the trier of fact; (b) sufficient facts or data were used as inputs to the AI program; (c) the AI program used reliable principles and methods; and (d) the output reflects a reliable application of those principles and methods to the inputs.
To determine whether these requirements are met, the committee noted, courts would consider what inputs were used, ensure that the opponent has sufficient access to the AI program to evaluate its functioning, and assess whether the process has been validated in sufficiently similar circumstances.
Subjecting AI outputs to the reliability standard applied to expert witnesses serves several purposes: guarding against function creep, analytical error, inaccuracy or bias, and lack of interpretability.
The proposed amendment specifically exempts "basic scientific instruments or routinely relied upon commercial software," which the committee noted would include outputs of non-AI tools such as "a mercury-based thermometer, battery-operated digital thermometer, or automated averaging of data in a spreadsheet."
Takeaways
As the use of AI tools in litigation expands, courts and rules committees are addressing how best to manage the use of AI-generated information as evidence. These proposed amendments, which are still under consideration, are designed to ensure the authenticity and reliability of evidence presented to the trier of fact. Legal practitioners should stay up to date on all evidentiary rules that may affect when and how they may use certain AI outputs to prove their cases.
For more information, please contact the Barnes & Thornburg attorney with whom you work or William Carlucci at 973-775-6107 or [email protected], Kaitlyn Stone at 973-775-6103 or [email protected], or Nicholas Sarokhanian, chair of the Artificial Intelligence practice, at 612-367-8795 or [email protected].
© 2024 Barnes & Thornburg LLP. All Rights Reserved. This page, and all information on it, is proprietary and the property of Barnes & Thornburg LLP. It may not be reproduced, in any form, without the express written consent of Barnes & Thornburg LLP.
This Barnes & Thornburg LLP publication should not be construed as legal advice or legal opinion on any specific facts or circumstances. The contents are intended for general informational purposes only, and you are urged to consult your own lawyer on any specific legal questions you may have concerning your situation.