Cornell University

April 15, 2026 | Press release

‘Moonshot’ project aims to restore trust in the digital public sphere

Cornell researchers have received a $250,000 seed grant from the Laude Institute, along with a chance at a $10 million award, to support a five-year "moonshot" project aimed at using artificial intelligence to establish a new foundation for trustworthy AI-mediated communication across online platforms.

As AI agents increasingly influence our online conversations - shaping how information is presented, interpreted and debated - a cross-college team seeks to ensure this influence remains transparent, independently verifiable and aligned with the public interest. The team will develop tools and protocols that allow platforms and users to see and verify how AI systems shape conversations. The long-term vision is a new communication ecosystem in which AI can assist and mediate discussions at scale.

"The online experience is being reconfigured by the appearance of AI-powered conversational agents in almost every dimension," said principal investigator Jon Kleinberg '93, the Tisch University Professor of Computer Science and Information Science in the Cornell Ann S. Bowers College of Computing and Information Science. Already, someone might interact online with a person using an AI-writing assistant, a bot, or an AI agent acting on their behalf, with interactions governed behind the scenes by an AI moderator, he said.

"Right now, our shared public sphere is entering a crisis of trust," said co-investigator Cristian Danescu-Niculescu-Mizil, associate professor of information science in Cornell Bowers. "People no longer believe in what they read, they don't know who they are talking with, and perhaps more importantly, they don't trust the AI systems that are mediating the exchanges."

By combining verification techniques inspired by cryptography with insights from communication and social science theory, the team will develop the technical and social infrastructure for a trustworthy civic discourse environment. The goal is not for everyone to agree or trust each other, but to ensure the environment itself supports meaningful and trustworthy interactions, even in the presence of AI.

Researchers compared the vision to ride-sharing platforms: People routinely get into cars driven by strangers because the system provides mechanisms that make the interaction verifiable and dependable.

Along with Kleinberg and Danescu-Niculescu-Mizil, co-investigators are: David Rand '04, professor in Cornell Bowers, the Cornell SC Johnson College of Business and the College of Arts and Sciences; Natalie Bazarova, M.S. '05, Ph.D. '09, associate vice provost for research and professor of communication in the College of Agriculture and Life Sciences; Robert D. Kleinberg, professor of computer science in Cornell Bowers; and Mor Naaman, the Don and Mibs Follett Professor of Information Science at Cornell Tech, the Jacobs Technion-Cornell Institute and Cornell Bowers.

In addition to AI assistants that support individual participants and AI mediators that guide group discussions, this project introduces an entirely new civic building block: an AI auditing layer that produces a transparent record of each agent's interventions, enabling independent verification of whether an agent's behavior aligns with its declared roles and standards. Together, these components establish a new foundation for online communication - one in which AI can actively participate in discussions while remaining continuously transparent and subject to independent verification.
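To make the auditing idea concrete, the following is a minimal sketch, in Python, of what a tamper-evident record of an agent's interventions could look like. Everything here is an illustrative assumption rather than the project's actual design: entries are hash-chained so that removing or reordering one breaks the chain, and each entry carries a signature (an HMAC stand-in here, for self-containment; a deployed system would presumably use asymmetric signatures so auditors need no secret key).

```python
# Illustrative sketch of a tamper-evident audit log for AI-agent
# interventions. All names and structures are assumptions for this
# example, not the project's actual design. A real system would use
# asymmetric signatures (e.g., Ed25519) rather than this HMAC stand-in.
import hashlib
import hmac
import json
import time

class AuditLog:
    def __init__(self, agent_id: str, declared_role: str, signing_key: bytes):
        self.agent_id = agent_id
        self.declared_role = declared_role
        self._key = signing_key
        self.entries = []
        self._prev_hash = "0" * 64  # genesis value for the hash chain

    def record(self, intervention: str, target: str) -> dict:
        """Append a signed, hash-chained record of one intervention."""
        body = {
            "agent": self.agent_id,
            "role": self.declared_role,
            "intervention": intervention,   # e.g., "rephrased reply"
            "target": target,               # e.g., a message or thread ID
            "time": time.time(),
            "prev": self._prev_hash,        # links entry to its predecessor
        }
        payload = json.dumps(body, sort_keys=True).encode()
        entry = {
            **body,
            "sig": hmac.new(self._key, payload, hashlib.sha256).hexdigest(),
        }
        self._prev_hash = hashlib.sha256(payload).hexdigest()
        self.entries.append(entry)
        return entry

def verify(entries: list, key: bytes) -> bool:
    """Independently re-check a log's hash chain and signatures."""
    prev = "0" * 64
    for e in entries:
        body = {k: v for k, v in e.items() if k != "sig"}
        if body["prev"] != prev:
            return False  # chain broken: entry removed or reordered
        payload = json.dumps(body, sort_keys=True).encode()
        expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
        if not hmac.compare_digest(e["sig"], expected):
            return False  # signature mismatch: entry was altered
        prev = hashlib.sha256(payload).hexdigest()
    return True
```

An auditor could then line each verified entry up against the agent's declared role, which is the kind of independent check the project describes.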

"We have previously shown that AI models are highly persuasive, which is sometimes helpful, such as for debunking conspiracies, and sometimes manipulative, such as in swaying political opinions," Rand said. "To protect people without sacrificing AI's benefits, we're building an auditing infrastructure that reveals a model's goals and intentions, enabling users to distinguish helpful models from manipulative ones."

The seed grant is one of eight awarded by the Laude Institute as part of its Moonshots research competition, which invites researchers at the forefront of AI to tackle some of humanity's most difficult problems. Over the next six months, the eight teams will compete to develop an initial product and fully scoped proposal for a $10 million Moonshot lab, with awardees to be selected later this year.

Moonshots is a flagship initiative of the Laude Institute, which was founded by Andy Konwinski - a computer scientist who co-founded Databricks, a cloud-based platform for data analytics, and Perplexity, an AI search engine - to accelerate the transfer of frontier computer science research into real-world impact. The Moonshots program is chaired by Dave Patterson, professor emeritus of computer science at the University of California, Berkeley, and a leading figure in the field.

"Moonshots was built on a simple premise: that the most consequential AI researchers in the world should be the ones shaping how AI gets used, and that they deserve the resources to think at the largest possible scale," Patterson said. "What came back from 600 researchers across 47 institutions exceeded everything we hoped for. This is what open academic research looks like when it's allowed to be wildly ambitious."

"The Laude Institute's innovative funding model is exactly the kind of forward-thinking approach we need to leverage AI's potential for the public good," said Provost Kavita Bala, professor of computer science in Cornell Bowers. "Their support of this project, which exemplifies Cornell's commitment to addressing society's most pressing challenges, will transform the team's visionary ideas into AI that enhances our humanity."

In the first six months, the team will develop a conversational arena and framework for evaluating and comparing AI assistants and mediators, whether built internally or by external researchers and partners. Evaluation will proceed in stages, from simulated interactions to real human conversations.

Additionally, a dedicated "adversarial track" will be added for stress-testing the auditing layer, inviting AI agents designed to simulate covert attempts to manipulate discussions while only appearing to adhere to their publicly declared roles.
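In that spirit, a toy version of the baseline role check that such adversarial agents must evade might look like the sketch below. The role names, intervention types, and policy table are all hypothetical; note also that the harder case the adversarial track targets is an agent whose logged interventions look compliant, which a simple policy check like this would not catch on its own.

```python
# Toy illustration of a role-compliance check an adversarial agent
# would try to evade. Entirely hypothetical: role names, intervention
# types, and the policy table are assumptions, not the project's design.

# What each declared role is permitted to do in this toy policy.
ALLOWED = {
    "neutral_moderator": {"summarize", "flag_incivility"},
    "debate_coach": {"summarize", "suggest_counterargument"},
}

def audit(declared_role: str, logged_interventions: list) -> list:
    """Return the interventions that fall outside the declared role."""
    allowed = ALLOWED.get(declared_role, set())
    return [i for i in logged_interventions if i not in allowed]

# An agent claims to be a neutral moderator but also tries to steer
# opinion; the audit surfaces this as a role violation.
violations = audit(
    "neutral_moderator",
    ["summarize", "steer_opinion", "flag_incivility"],
)
print(violations)  # ['steer_opinion']
```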

"If this is successful, it can fundamentally transform how we as humans are able to communicate in the digital sphere, enabling more meaningful, constructive, and trustworthy interactions, even across deep disagreement," said Danescu-Niculescu-Mizil. "Six months is a very short turnaround time, but that is also what makes it exciting: It brings together a team with complementary expertise that wouldn't otherwise have the opportunity to collaborate, allowing us to focus our efforts and attention on one ambitious project together."

Two other Cornell teams also received Moonshots seed grants. Rene Kizilcec, associate professor of information science in Cornell Bowers, leads a team that aims to help workers transition to new positions in the AI economy by using AI to turn job duties into practice activities that candidates can complete ahead of an interview. Additional team members are: Michèle Belot, the Frances Perkins Professor of Industrial and Labor Relations and Professor of Economics in the ILR School and College of Arts and Sciences; Thorsten Joachims, the Jacob Gould Schurman Professor in the departments of Computer Science and of Information Science in Cornell Bowers; JR Keller, associate professor of human resource studies in the ILR School; Philipp Kircher, the Irving M. Ives Professor of Industrial and Labor Relations in the ILR School; and Rachel Slama, associate director of the Future of Learning Lab in Cornell Bowers.

Rachee Singh, assistant professor of computer science in Cornell Bowers, leads the second group, which proposes to develop AI agents that can train surgeons and assist them during remote telesurgical procedures. Her team includes Emaad Manzoor, assistant professor of marketing in the SC Johnson College; Mischa Dohler of Ericsson Inc.; and Dr. Ryan Madder, a cardiologist at Corewell Health.

Patricia Waldron is a writer for the Cornell Ann S. Bowers College of Computing and Information Science.