New America Foundation


Open-Source AI Models Are Not Inherently Security Risks, But They Are Integral to Democracy, States New OTI Report

Nov. 20, 2024

WASHINGTON – As the International Network of AI Safety Institutes convenes for the first time today in San Francisco, the Open Technology Institute has published a new report urging the United States to encourage greater openness in the artificial intelligence (AI) ecosystem. The report states that bolstering openness is essential to shaping an AI ecosystem that serves democratic values and the public interest by strengthening the security of AI models and systems, sparking innovation, furthering transparency and accountability, and democratizing technical education and research.

"Mainstream commentary about AI security often equates greater model openness with greater risk, but these claims are imprecise and, in some cases, false," said Prem Trivedi, policy director of New America's Open Technology Institute (OTI) and co-author of the report. "In fact, a healthy amount of openness in the AI ecosystem can lead to better AI security at both the national level in the U.S. and globally. And, more broadly, openness in AI models is vital to making sure the technology serves democratic values."

The report recommends that policymakers, researchers, AI companies, developers, and civil society organizations take the following steps to ensure open AI models thrive in ways that serve democratic institutions:

Policymakers should

  • continue to build governmental capacity to monitor and mitigate the marginal risks posed by open models,
  • craft legislative and policy requirements that promote transparency about model design and governance,
  • encourage and incentivize developers and companies to build model interoperability that promotes standard communication protocols among models,
  • and avoid placing broad restrictions on open models.

AI companies should

  • embrace openness along multiple axes when developing models,
  • and participate in and/or invest in the maintenance of open-source AI projects, so that popular model projects have the resources they need to find and address vulnerabilities in a timely fashion.

Researchers should

  • engage in comparative studies of the organizational structures and practices of teams developing open-source models,
  • identify areas of research that the private sector is unlikely to undertake,
  • and articulate use cases that private companies or AI labs are unlikely to develop because of a lack of commercial interest.

Developers should

  • use best practices for software development, particularly security practices, that promote both secure code and better transparency and insight into a model's structure and training,
  • and study and experiment with designing open protocols and standards for moving data between models, primarily so that models can interoperate more easily.

Civil society should

  • creatively explore the ways in which openness can further democratic accountability and other public-interest objectives,
  • and invest in in-house AI expertise to enable critical oversight of models, open or closed, that is based on better hands-on understanding of how these technologies work.

OTI's report identifies five key attributes of openness for AI models and introduces a tool to help stakeholders across the AI ecosystem understand how open different AI models are. The report also explains how promoting the key attributes of AI openness can lead to benefits in security, innovation and competition, public transparency and democratic accountability, and education and research.

"To achieve AI development that better aligns with democratic values, we must shape an AI ecosystem that allows open models to thrive alongside proprietary ones," said Nat Meysenburg, a technologist at OTI and co-author of the report.

Read the report.
