04/20/2026 | News release | Distributed by Public on 04/20/2026 14:28
On April 16, the Marymount Institute hosted "Democracy Dies in Misinformation," the last event in "Being Human in an Age of AI," a series that brought together scholars from Bellarmine College of Liberal Arts and College of Communication and Fine Arts. This event and the series highlight how faculty across disciplines are grappling with the many dimensions of artificial intelligence at a moment when AI is reshaping public life, the workplace, and higher education. Their conversations mirror a broader debate unfolding in news coverage and on social media about the growing value of distinctly human skills as AI becomes more pervasive.
Kate Pickert, associate professor of journalism, opened the panel by emphasizing the value of journalists: they continue to investigate, verify, and report on information that keeps democracy thriving, while doing the difficult work of debunking deepfakes. While trust in the news is at an all-time low (Gallup), and negative sentiment surrounding AI is rising among Generation Z (Gallup), Pickert relayed a positive spin on the relationship between the two. She suggested there might be a "boomerang effect" in play: juxtaposed with "AI slop," the value of journalism becomes easier for audiences to recognize. Journalists can also benefit from AI when it is used appropriately. Newsrooms are already using AI to aid with spelling, grammar, and transcription; to help develop story summaries, news announcements, and briefs; to generate audio from text; and to analyze large data sets. Pickert suggested journalists can use AI without sacrificing integrity if they provide proper disclosure. "For a long time, journalists just asked audiences to blindly trust us," she said. Most people don't understand the process of accountability and verification that journalists undergo to ensure they are relaying accurate information. "The advent of AI gives us a new opportunity to establish trust," she said.
John Parrish, chief of staff to the president, VP for institutional strategy, and professor of political science, followed Pickert by reminding the audience that misinformation isn't new in politics; this is just the newest iteration of a tired story. What's new is the growth in AI's scale and scope, which has overwhelmed our existing defensive capabilities. Parrish noted that AI technologies bear a resemblance to two other, older forms of "artificial intelligence" - the state and the corporation - to which we have also offloaded more and more of our moral agency. Parrish concluded, "No one will be in control of AI." Rather than try to block these new technologies, he suggested the best alternative is to build a new seawall for the tsunami. "Capitalism has a plan with AI that's at odds with the university's plan," Parrish said; higher education must therefore ethically discern which aspects of AI we normalize.
Kai Prins, assistant professor of communication studies, raised the stakes, directing concern toward people who use AI to get answers about health and wellness. In their article, "Uncertain and Anxiously Searching for Answers: The Roles of Negative Health Care Experiences and Medical Mistrust in Intentions to Seek Information from Online Spaces," Prins points to uncertainty anxiety as a contributing factor in such use. Prins, who has a strict no-tech rule in the classroom, is motivated to keep students away from AI. They shared the popular phrase, "Everything sounds like a conspiracy when you don't know how anything works," and pointed to the rhetoric companies use to sell their products as justification for this stance. Showing a recent Grammarly commercial, Prins argued that companies present people as "hapless and hopeless dummies who need to outsource intelligence to make decisions." Further, Prins addressed OpenAI CEO Sam Altman's recent comment that AI will be sold as a utility in the future. The companies' explicit goal, Prins pointed out, is to get you hooked on AI and then monetize it. With a 10% error rate, Google's AI Overviews produce inaccuracies that could be devastating in the health and wellness context. PR manipulates search, and content from bad or misinformed actors shows up in results. For everyday consumers looking for answers about their health, AI poses a major threat.
Dan Speak, graduate director and professor of philosophy, concluded the panel with a message on civic virtue, claiming that normal AI usage undermines citizens' political and intellectual autonomy. "I suspect that getting the right answers can oversimplify what we value about truth in democracy," Speak said. He argued that democracy needs citizens with good intellectual character, and AI tools endanger citizens' ability to arrive at truth on their own. Speak pointed to the philosopher C. Thi Nguyen's idea of "value capture," a phenomenon in which a simplified metric is used to measure a complex subject. He explained that, in value capture, we take a central component of our autonomy and outsource it. He offered grade point average as an example of such a metric: students can cheat to get the right answers and earn a good grade, but then they never actually learn to solve the problem. For Speak, these truths require first-person evaluation; knowing via an AI output does not equip people for the sincere public discussion that is core to democracy.
The discussion was facilitated by Elizabeth Drummond, associate professor of history and the director of the institute. Drummond emphasized the role of higher education: "A lot of talk on university campuses around AI has focused on questions of 'AI literacy' as necessary preparation for students to engage in the world of the future. But given what we've heard about AI and 'cognitive surrender' and 'virtue capture,' what does 'AI literacy' mean at a university that is committed to the formation of thoughtful citizens? When we encourage students to outsource so much of their thinking to AI, are we not forming them to be obedient subjects rather than free-thinking - intellectually autonomous - citizens?" Drummond suggested that LMU's Jesuit and Marymount traditions offer us an opportunity to redefine AI literacy in terms of students' acquisition of knowledge, skills, and ethics. Returning to the theme of the series, she said: "LMU's mission and its commitment to a humanities-based liberal arts education demand that we think not just about AI as technology but, even more so, consider what AI means for who we are as humans and for human intelligence and creativity."