
UC San Diego Alumna Evaluates Ethics in AI Algorithms

Published Date: October 24, 2024

This article originally appeared in the fall 2024 issue of UC San Diego Magazine as "Ethical Algorithms."

For all the wondrous achievements of artificial intelligence - speeding up the development of pharmaceuticals and the detection of extreme weather - fear of the new technology runs just as deep.

Questions about the nefarious use of AI abound: Could it be used to engineer pandemics, amplify misinformation or affect the outcome of a presidential election?

According to social scientist and AI ethicist Rumman Chowdhury, PhD '17, this duality of good and evil is not surprising when you consider a simple truth: "AI models are trained on the content of the internet." That content includes the texts, images, sounds and videos humans produce. And because humans are fallible, that content carries their implicit and explicit biases.

"Bias comes from the very basic data that the models are trained on," says Chowdhury. "The effects are just manifesting preexisting cultural and social biases that exist in society today."

For instance, consider a recent AI search for images of an "African doctor." Instead of offering an image of a doctor who is African, the results yielded images of a white doctor treating Black children. The example underscores that the quality of a model's output depends on its input - the training data.

"While AI companies may say we are 'trained on the world's data,' that's untrue because there are entire swaths of the world that just don't participate in the internet," she says. And when it comes to language, "other languages are not represented on the internet as readily as English."

Chowdhury, former head of AI research at what was then Twitter, was named to Time magazine's list of the most influential people in AI in 2023, and Wired called her one of seven "humans trying to keep us safe from AI."

She is also the U.S. State Department's envoy for AI, a member of the U.S. Homeland Security Department's Artificial Intelligence Safety and Security Board and a member of New York City's AI Steering Committee. At the federal level, she works with counterparts from other countries, sharing what the U.S. offers on AI and encouraging developing countries to become more active in the field. She also helps advise on ways the U.S. government could protect its critical infrastructure from AI attacks.

Chowdhury is also CEO and co-founder of Humane Intelligence, a tech nonprofit that builds a community of practice around algorithmic evaluation, assessing AI systems to make them effective, fair and unbiased.

According to Chowdhury, few people know how to effectively evaluate the algorithms that are the building blocks of AI. "We often forget that testing AI is one of the most important things we need to do," she says. Humane Intelligence also offers prizes - a "bias bounty" - to people who can uncover embedded prejudices in AI models and solve algorithmic bias problems.

Through her nonprofit and government-related work, Chowdhury aims to influence the development of AI so that it is ethical and inclusive.

Chowdhury says her time at UC San Diego, where she was advised by Steven Erie, now a professor emeritus of political science, taught her to methodically evaluate how institutions can create an imbalance of power by amassing technology, water or other resources. That experience helped draw her to AI.

"The ability to critically look at these big institutions, like social institutions, for-profit institutions, government institutions and the interplay between them is important, but at the end of the day, your concern should just be about the average person," she says.

While change often brings anxiety and uncertainty, these innovations can make a difference - and perhaps help people better understand themselves, society and their own biases.

"Bias comes from the very basic data that the models are trained on. The effects are just manifesting preexisting cultural and social biases that exist in society today." Rumman Chowdhury, PhD '17
