Brown University

04/24/2026 | Press release | Distributed by Public on 04/24/2026 11:24

In visit to Brown, Jaron Lanier says people are thinking about AI all wrong

PROVIDENCE, R.I. [Brown University] - Celebrated scientist, futurist and writer Jaron Lanier thinks most people have the wrong idea about what artificial intelligence is - and especially the people in Silicon Valley who are developing new AI systems.

"Normally we talk about AI as a thing," he said during a standing-room-only lecture at Brown University. "It's this object that's out there. The AI did this; the AI did that. But there's another way [to think about it], which is to say, no, it's a collaboration of humans."

AI language models are trained on vast amounts of text created by scientists, writers, thinkers, entertainers and more. Those human contributions, Lanier says, should not be erased from the output of language models. That erasure is what gives people a perception of AI as "this new super alien angel who's going to come and save us or kill us" rather than what it really is or could be: "a new, very high-level, very large-scale form of cooperation [that] is even more glorious than having a big alien angel. I think it's really cool."

Lanier's comments came as part of a Thursday, April 23, talk and musical performance in Brown's Engineering Research Center. The event was the second annual Leon Cooper Lecture, sponsored by the Brown Center for Theoretical Physics and Innovation, and the Office of the Provost. The series honors Cooper, the late Nobel Laureate and polymath professor of physics at Brown, by bringing speakers to campus with unique perspectives that cross traditional academic boundaries.

Lanier is considered one of the founding scientists in the field of virtual reality. In addition to his own ventures, he has worked at tech giants like Atari and currently holds the title of Prime Unifying Scientist at Microsoft Research. He is also an outspoken critic of Silicon Valley who has argued that people should be paid for their contributions to software platforms like Google, and that social media is a malignant and manipulative force that people should immediately stop using.

In his lecture at Brown, Lanier stressed that his criticism of AI is not about the technology itself, which he considers to be useful and potentially important.

"This is a criticism of all the surrounding stuff - the cultural, psychological, spiritual, economic and political conundra around it sucks," he said. "That's what needs to be changed. The actual tool, the actual thing at the core, I'm kind of down with it. I think we're doing something useful there."

Lanier says that the way in which AI aggregates and sifts through vast corpuses of human knowledge does have the potential to lead to new understanding and new scientific theories about the world. But it's critical to remember that those potential insights stand on the shoulders of real people who did real work.

"Part of the tradition of science, which is really crucial, is citations and references," he said. "We don't erase each other. We need that chain - not just because it's fair, not just because it keeps our enterprise going, not just because it's decent and ethical - but because without that chain of thought, we can't have core reality."

Lanier said there's an interesting scientific question surrounding AI that has been largely unaddressed, despite sitting right under everyone's noses: Why does the manipulation of language enable the creative potential of AI?

"It's essentially doing statistics on word order and distance in a big amount of data, and then using that to create a new thing that's informed by it," Lanier said. "Isn't it surprising that natural language can support this rather simple thing and get this result? Shouldn't we be thinking to ourselves, 'Wow, there's kind of more going on with natural language than we realize?'"
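The "statistics on word order" Lanier describes can be illustrated with a toy bigram model: count which word follows which in a corpus, then use those counts to continue text. This is a deliberately minimal sketch with an invented example corpus; real AI language models use neural networks trained on vastly larger data, but the underlying idea of learning from word-order statistics is the same.

```python
from collections import Counter, defaultdict

# A toy corpus standing in for the "big amount of data" Lanier describes.
corpus = "the cat sat on the mat and the cat saw the dog".split()

# Count which word follows which: simple word-order statistics.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word):
    """Return the most frequent continuation of `word` in the toy corpus."""
    return follows[word].most_common(1)[0][0]

print(predict("the"))  # "cat" follows "the" most often here
```

Even this crude counting scheme produces plausible continuations, which hints at Lanier's point: natural language carries enough structure that fairly simple statistical machinery can exploit it.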

As for students who might be tempted to let AI write papers and assignments for them, Lanier's advice was simple: "Don't do it. It's bad for you."

True to his reputation for varied talents and interests, Lanier's speech was followed by a musical performance. Stephon Alexander, a physics professor at Brown and an accomplished tenor saxophonist, joined Lanier along with Providence-based Latin percussionist Jesús Andujar and bassist Donnie Aikins from Newport.

During the eclectic jazz performance, Lanier played a khaen, an ancient bamboo mouth organ from Southeast Asia. He described a chain of innovation, starting with the khaen, that led to more modern instruments like the piano and eventually the automated player piano. The punch-card-driven automation of the player piano, Lanier said, was an inspiration to Charles Babbage, who is credited with developing the concept of the digital computer.

"So this was the origin and the thing to blame for it all," Lanier said with a laugh as he held up the khaen. "This is it. One more song?"
