02/23/2026 | Press release
BOZEMAN - As libraries grapple with how to use artificial intelligence tools in responsible and ethical ways, Montana State University Library researchers are offering a new, free resource to help those institutions make decisions about AI that align with their values.
"Viewfinder: A toolkit for values-driven AI in libraries and archives" is designed to support librarians, researchers and others as they consider whether and how to implement AI tools in their libraries and archives. The toolkit is available to use and download at https://www.lib.montana.edu/responsible-ai/viewfinder/.
"I'm so excited about this tool our team has developed," said Sara Mannheimer, associate professor with the MSU Library and the project leader. "It's been really great to see the data and research come together."
Artificial intelligence can help libraries provide better services, Mannheimer said, including making materials more accessible, but using AI can also raise ethical questions. So, more than three years ago, Mannheimer and her colleagues began working on a project to address those ethical considerations.
In addition to Mannheimer, researchers include Jason Clark, Doralyn Rossmann and Scott Young with the MSU Library; Bonnie Sheehey with the MSU Department of History and Philosophy; Hannah Scates Kettler with Iowa State University; Yasmeen Shorish with James Madison University; and Natalie Bond with the University of Montana. The team's efforts have been backed by a $250,000, three-year grant from the Institute of Museum and Library Services. The grant is affiliated with the MSU Center for Science, Technology, Ethics and Society.
"We see AI as this potentially transformative technology for libraries," Mannheimer said in a 2022 news story about the team's work. "AI can provide services that users love - services that can make the library better - but we want to use AI in a way that is careful and with an eye toward potential harms so that those harms can be minimized."
In a recent interview, Mannheimer said the motivation for the work came about when she and her colleagues were beginning to use machine learning to summarize articles, recognize images and create metadata - data that defines and describes the characteristics of other data. Machine learning is a subset of AI in which computer systems learn and adapt without following explicit instructions, instead relying on algorithms and statistical models.
At the time, Mannheimer said, that was a common use of machine learning, but it led to some important questions.
"In libraries we have a very specific set of values and practices that have been longstanding; we really value patron privacy and intellectual freedom," Mannheimer said. "These are different values than corporations have, and someone developing AI technology would have different values than someone in a library. So, we were looking to use machine learning algorithms but from a librarian's perspective."
Mannheimer and her colleagues began their work with a review of the existing research findings, looking at case studies of librarians who had used machine learning in related contexts. They also did a review of different tools that were already available to help librarians and researchers make informed, ethical decisions.
"We saw that there was a need for a tool to support ethical decision-making as you're working through an AI project," Mannheimer said.
From there, the team hosted a series of workshops with library administrators, librarians and students and asked participants to identify what values would be at play for them when faced with various AI-related scenarios. They also asked workshop participants to prioritize the identified values.
Then, using information gathered in the workshops, the team created a toolkit featuring scenarios and prompts to help facilitate ethical reflection about AI in libraries and archives from different stakeholder perspectives. It includes three sections: scenarios, stakeholders and values. Those using the tool can draw on sample scenarios to generate thought and discussion, or they can create a customized scenario.
The toolkit invites users to identify stakeholders and consider which values are of concern to stakeholders. There are also prompts for reflection.
For example, one scenario invites users to consider using a third-party service that uses AI to generate brief summaries of research articles for students and faculty. It then invites users to identify three values that would be important in their consideration; examples include privacy and consent; fiscal responsibility; education; human-centeredness; trust; social responsibility; and service quality.
"People can use this tool individually or as part of a team working on implementing an AI project," Mannheimer said. "It helps users think through potential stakeholder values and accommodate them if possible. It's more about balancing and encouraging conversations than having a right or wrong answer."
Rossmann, one of Mannheimer's collaborators and dean of the MSU Library, said she is excited about the possibilities that come with the new tool.
"As a next step, we will be working with libraries and archives to use Viewfinder in their everyday work," Rossmann said. "It's important to consider challenging questions at the beginning of AI projects, rather than as an afterthought, and this tool will help with that."
Mannheimer noted that libraries have a "unique responsibility" to maintain trust with their users.
"Libraries are often the first to practically implement technologies, and libraries are reliant on technologies for a lot of the services we provide," Mannheimer said. "As new technologies come through, it's our responsibility to understand them and integrate them so we can have the best services possible and keep up to date. Because of our longstanding values and position as a trusted institution, it's important to think very carefully about how these technologies affect our information landscape and our users.
"We have some power to decide how we use technologies in our lives," Mannheimer added. "Viewfinder is one way to actualize that power."