10/14/2024 | News release

Balancing Act: States Seek to Manage AI’s Risks and Rewards

Artificial intelligence can be an electronic content Cuisinart.

"You can ask it to analyze the pattern, similarities or differences, and you can do this in a hundred different languages. You can go beyond basic analysis of the documents and ask for key points, exciting moments, key quotes," Evi Fuelle, global policy director at Credo AI, told a session on artificial intelligence at NCSL's 2024 Legislative Summit.

Credo, which helps companies in financial services, insurance, health care and other sectors to develop responsible AI, is a sponsor of NCSL's Foundation for State Legislatures.

"You don't need anything other than an internet browser and your natural ability to ask questions to use generative AI," Fuelle says. "The questions you ask of a Gen AI system are known as prompts, and the results are what you have created or used Gen AI to create."

"You don't need anything other than an internet browser and your natural ability to ask questions to use generative AI."

-Evi Fuelle, global policy director at Credo AI

Prompt it with a few words or sentences and here come paragraphs, stories, poems, functional code, synthetic and realistic images, original music and video, she says.
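In practice, a prompt is simply that question handed to a model, whether through a chat window or an API call. The short sketch below shows the idea in code, assuming the OpenAI Python client; the model name, the document placeholder and the question are illustrative choices, not anything Fuelle demonstrated.

```python
# Minimal sketch: asking a generative AI model for the key points of a document.
# Assumes the OpenAI Python client (pip install openai) and an API key in the
# OPENAI_API_KEY environment variable; other providers work much the same way.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

document = """..."""  # paste the text you want analyzed here

# The "prompt" is just a natural-language question about the document.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; any chat-capable model name works
    messages=[
        {
            "role": "user",
            "content": f"List the key points and notable quotes in this document:\n\n{document}",
        }
    ],
)

print(response.choices[0].message.content)  # the generated analysis
```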

Alas, for all the promise, there are concerns.

Generative AI, Fuelle says, is creating new and unknown risks that are difficult to predict and mitigate in advance.

"Things like impacts on financial markets; enabling students to plagiarize in school; recommending that you mix glue into your pizza recipes; exacerbating existing societal biases and discrimination, causing disruptions to the job market for both adopters and nonusers alike; enabling fake news, misinformation and voice and video scams or false and fake content on a massive scale," she says.

"There are a large variety and number of risks related to elections, including hyper-targeted voter suppression, language-based influence operations that make it easier to create content in any language and deep-faked public figures. AI risks can impact safety, privacy, human rights, the economy and more."

So, there's that.

Legislators and the federal government have been busily kicking AI's tires over the past two years.

"Eighteen states and Puerto Rico adopted resolutions or enacted AI legislation last year, and in 2024 more than a quarter of U.S. legislatures are considering bills regulating private sector use of AI," Fuelle says. (For details, see NCSL's Artificial Intelligence Legislation Database.)

Common legislative categories include government use of AI, algorithmic discrimination, automated employment decision-making, AI bill of rights or human rights protections, and deepfakes related to elections.

"Already, California has drafted regulations on automated decision-making technology, including the Safe and Secure Innovation for Frontier Artificial Intelligence Systems Act," known as SB 1047, Fuelle says. "It is one of the many AI-related bills currently active in state legislatures across the U.S. and introduces a number of requirements aimed at establishing safety standards for the development of large-scale AI systems, including third-party model testing, shutdown capability and annual compliance certification." (Update: California Gov. Gavin Newsom vetoed SB 1047 on Sept. 29.)

Fuelle also cites a recent Colorado law focused on high-risk AI systems used in employment, education enrollment and lending services. The law also requires that AI-related discrimination be disclosed to the attorney general within 90 days of discovery.

Regulation, she says, is important because it gives businesses the certainty they need to adopt AI at scale.

"It requires standards, requires guardrails, things that businesses can point to say, 'I have met the criteria for what good AI looks like and I am being accountable and transparent.' In our view, I think it's an enabler of innovation and it enables more companies, especially medium-size and smaller-size enterprises to know what the rules of play are."

Mark Wolf is a senior editor at NCSL.