UCLA - University of California - Los Angeles


If AI isn’t ‘learning’ like us, can it think like us?

October 1, 2024

Researchers at UCLA are on a quest to understand the internal workings of giant artificial neural networks - like ChatGPT - that are kept under lock and key by companies that may have unleashed a new level of consciousness in their AI. Will the machines outsmart humankind?

The statement above is not a blurb for a science fiction novel but rather the essence of one area of study taking place in Keith Holyoak and Patricia Cheng's UCLA Reasoning Lab, where researchers use behavioral studies of human cognition to better understand computational and neuropsychological processing.

Holyoak, a distinguished professor of psychology and a poet, recently co-authored a study that found AI performed as well as college students in solving certain logic problems like those that appear on standardized tests, such as the SAT. The study poses the question: Is artificial intelligence mimicking human reasoning as a byproduct of its massive language training dataset, or is it using a fundamentally new kind of cognitive process?

If it sounds a bit dystopian, Holyoak wouldn't disagree. The psychologist, who is a leading researcher in the field of human thinking, acknowledges that AI is not going away and that it poses potential risks that must be mitigated. But Holyoak sees benefit in AI's existence, too.

In advance of delivering the 137th Faculty Research Lecture on Friday, Oct. 11, Holyoak spoke with us about large language models like ChatGPT, AI poetry and whether he thinks machines will achieve or surpass human intelligence, or even achieve "authentic" creativity and consciousness.

What do we know right now about how large language models, or LLMs, are computing? How is it different from human thinking?

Large language models are extremely large artificial neural networks that pass information from an input layer through many intermediate layers on to a final output layer used to generate a response. LLMs take in enormous amounts of digitized text and are trained to predict the next word that will be presented. After training is complete, a user can provide a text prompt as input - ask a question - and the LLM will produce its own text as output - its answer.
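To make that description concrete, here is a minimal sketch in Python (PyTorch) of next-word prediction with a tiny toy vocabulary and network. It is illustrative only - the text, layer sizes and training settings are assumptions, and nothing here resembles the scale or architecture of a commercial LLM.

```python
# Toy illustration of the idea above: tokens pass from an input layer through an
# intermediate layer to an output layer, and the network is trained to predict
# the next word in the text. All data and sizes are made up for illustration.
import torch
import torch.nn as nn

text = "the cat sat on the mat the cat sat on the rug"
words = text.split()
vocab = sorted(set(words))
stoi = {w: i for i, w in enumerate(vocab)}

# Training pairs: each word is used to predict the word that follows it.
xs = torch.tensor([stoi[w] for w in words[:-1]])
ys = torch.tensor([stoi[w] for w in words[1:]])

model = nn.Sequential(
    nn.Embedding(len(vocab), 16),   # input layer: token -> vector
    nn.Linear(16, 32), nn.ReLU(),   # intermediate layer
    nn.Linear(32, len(vocab)),      # output layer: scores for each possible next word
)
opt = torch.optim.Adam(model.parameters(), lr=0.1)

for _ in range(200):                # "training" = next-word prediction
    loss = nn.functional.cross_entropy(model(xs), ys)
    opt.zero_grad(); loss.backward(); opt.step()

# "Prompting": give a word as input and read off the most probable next word.
prompt = "cat"
pred = model(torch.tensor([stoi[prompt]])).argmax(dim=-1).item()
print(prompt, "->", vocab[pred])
```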

Computer scientists and others are still struggling to understand exactly how LLMs are "thinking," and how the technology's operation resembles or differs from human thinking. Understanding LLMs' internal workings is especially difficult because the largest and best-known commercial models are proprietary - we don't know exactly what data was used to train them, and don't have access to their network structure. However, in the past year or so, some progress has been made using a growing number of open-source LLMs. One thing that's clear is that the way LLMs are trained - text prediction - is vastly different from how human children learn about the world from interacting with parents, other children and the physical environment.

What do we still need to find out about AI to get the whole picture?

Once scientists have access to the internal network structure of a trained LLM, they can start doing experiments like altering or deleting parts of the network and then observing how its performance changes. There is an emerging new field of "artificial computational neuroscience," in which LLMs are the objects being studied.
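As a rough illustration of that kind of "lesion" experiment, the sketch below zeroes out part of a small toy network and measures how its outputs change. The model, inputs and choice of layer are assumptions for illustration, not drawn from the interview or from any actual open-source LLM study.

```python
# Hedged sketch of an ablation ("artificial computational neuroscience") experiment:
# alter part of a trained network, then observe how its behavior changes.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in for a trained model; in practice this would be a downloaded open-source LLM.
model = nn.Sequential(
    nn.Linear(8, 32), nn.ReLU(),
    nn.Linear(32, 32), nn.ReLU(),   # the "part of the network" we will lesion
    nn.Linear(32, 4),
)

x = torch.randn(100, 8)             # stand-in evaluation inputs
baseline = model(x).detach()

# Lesion: zero out half the units in the middle layer.
with torch.no_grad():
    model[2].weight[:16, :] = 0.0
    model[2].bias[:16] = 0.0

lesioned = model(x).detach()

# Compare behavior before and after the lesion.
print("mean output change:", (lesioned - baseline).abs().mean().item())
```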

What contributions to society do you see machines making if or when they come into a new way of "thinking"?

Whether or not AI can come up with truly new ways of thinking remains to be seen. Machines are not truly autonomous. Unprompted, an LLM does nothing at all. So far, what we see is AI-assisted human thinking; a human poses the problem to the AI, and then curates the responses to select "good" solutions. Other new tools, such as the telescope or equipment for brain imaging, have expanded human capabilities. Similarly, AI can allow people to do things that would otherwise be impossible. Just to take one example from artistic creativity: The artist Refik Anadol, who received a master of fine arts degree from UCLA's design media arts program, is using machine learning to create mesmerizing "living paintings." The works turn data from brain images into constantly evolving images shown on enormous LED screens.

You're a published poet as well as a cognitive scientist. Have you read any poetry written by AI? If so, what were your thoughts?

Ever since ChatGPT debuted at the end of 2022, it's been commonly claimed that LLMs "write poetry." More accurately, the models write verse, typically using simple rhyme schemes. I've never read an AI-generated poem with any serious literary merit. Given that LLMs are trained to generate the most probable text completion, it's not surprising that the output is dominated by clichés and often reads like parody. Of course, there's also a lot of bad poetry written by humans!

What do you hope people take away from the lecture?

My main message is that AI is here to stay and brings both benefits and risks. The challenge for people will be to imagine how it can be used for our collective benefit, and how its risks can be mitigated.

What was your reaction to being chosen to deliver the lecture?

I was surprised and humbled, given the enormous pool of faculty talent we have at UCLA. I did also think that my topic - the connection between AI and human thinking - is on the minds of many people these days.