International IDEA - International Institute for Democracy and Electoral Assistance

05/07/2026 | News release | Distributed by Public on 05/07/2026 08:24

From pilot to policy: how electoral bodies are responsibly adopting AI

Though artificial intelligence is still a maturing technology, it is already finding purchase across many national electoral authorities. While most applications remain low-risk, a growing number of electoral management bodies (EMBs) have begun exploring how to integrate more advanced forms of AI into election administration throughout the electoral cycle, and how to adapt their operations for the AI era. These novel uses are not simply tacked onto existing processes. Rather, they reimagine core parts of electoral architecture and communication in ways that could significantly improve voter services and solve complex administrative challenges. At the same time, they can produce new risks, especially when integrated into core election processes or public-facing applications, such as voter roll management, vote counting and tabulation, or voter outreach. Concerns range from those tied to the design of the AI itself-that it may be biased, discriminatory, or misrepresent facts about the electoral process in ways that risk disenfranchising voters-to fears around path dependency and uneven institutional adoption, often due to limited AI literacy and capacity at the organizational level.

As EMBs expand their AI integration, risks are no longer hypothetical. In response, EMBs are rethinking their strategies and updating policies to ensure that AI enhances electoral administration without jeopardizing democratic principles. However, since electoral AI is still largely uncharted ground, precedents and best practices are few and far between, meaning that nearly every EMB has a unique approach to AI adoption and regulation. To capture these diverse institutional shifts, International IDEA has surveyed EMBs across the world, documenting their different approaches to AI as an election technology.

International IDEA, in collaboration with Microsoft, Arizona State University (ASU) and The Elections Group (TEG), has launched the AI + Elections Clinic Skills Hub, part of the Mechanics of Democracy Lab hosted by ASU. It is envisaged as a central knowledge repository of best practices and use cases, empowering electoral actors to responsibly approach previously unexplored AI-driven election technologies.

The Skills Hub features case studies authored by International IDEA and based on interviews with electoral authorities, covering the distinct AI use cases and governance structures in Estonia, Mexico, the Philippines, Pakistan, South Africa, Albania, Nigeria, the UK, and Australia. These studies, alongside the Clinic's other materials, serve as capacity-building peer-exchange resources for EMBs seeking to harness AI innovations to reduce administrative burden, improve electoral processes, and redefine how elections are executed by EMBs and experienced by voters, all while reinforcing democratic principles. The case studies impart key lessons on how to strengthen AI-readiness while avoiding pitfalls, helping EMBs learn not only how to leverage AI, but also how internal procedures, staff programs, organizational structure, and AI policy can be developed to support the integration of democratic AI tools.

The Skills Hub expands on International IDEA's extensive work on AI in elections governance. This includes its Artificial Intelligence for Electoral Management report-which provides a knowledge base on the potential opportunities and challenges of AI use for electoral management-and its AI for Electoral Actors Workshop series, conducted across five countries and aimed at enhancing institutional capacity for the measured use of AI in elections. This website takes the next step, exploring EMBs' real-world experiences in AI integration.

While EMBs may share similarities in terms of their current AI usage and governance structures, each case is distinct and highlights core ideas relating to the practical realities of AI implementation. Albania and Australia offer two interesting examples that cast light on the complex and often unpredictable nature of AI deployment, while also highlighting how a robust governance architecture and real-world testing are vital for responsible AI integration.

Albania

Albania's Central Electoral Commission (CEC) takes an empirical approach to integrating any new technology into its operations. In line with this perspective, the CEC follows a set testing protocol for all new AI technologies that mandates rigorous prototyping and evaluation under strict human oversight before solutions are deployed at scale. During its 2025 partial mayoral elections, the CEC tested a new AI-based image analysis tool for vote tabulation. As the tool's first trial in a real election environment, the pilot offers an instructive example of why practical testing matters.

The image-analysis tool scanned ballots handled by polling station staff to accelerate vote counting and results transmission. Since this was its first legally mandated pilot test, the tool was deployed in only 3 per cent of polling locations, and the manual counting process was still used in parallel. Although the system initially performed well with consistent accuracy, oversight staff ran into two unexpected issues caused by the physical conditions in the polling booths. First, the system tended to count votes twice if ballots were held up too briefly. This problem became more apparent later into election night, as tired poll workers held up the ballots too quickly for the system to function correctly. Second, the AI had difficulty recognizing ballots with even slight inconsistencies, meaning valid ballots often went uncounted. This issue became especially salient after the staff's repeated handling of ballots caused their markings to grow faint, leading to a deterioration in the system's reliability over time.

This example illustrates an important lesson: many of the potential threats posed by novel technologies are not fully knowable before practical implementation. While pre-evaluations and risk assessments are a crucial step in developing AI tools, there are bound to be conditions discovered only in the field that significantly affect system validity and reliability. For the CEC, this experience reinforced its view that careful AI implementation requires proper prototyping and real-world testing, so that risks can be identified and addressed before they have a chance to inflict serious harm on the integrity of elections.

Australia

The Australian Electoral Commission's (AEC) approach to AI integration is deliberately conservative, with core electoral administration and voting strictly conducted using analog methods. While the AEC maintains clear lines to protect the integrity of core processes, it is exploring AI tools to improve administrative efficiency, internal productivity, communications, and voter services. These initiatives involve not only the technical work of developing applications, but also careful consideration of how institutional design and regulation must evolve in tandem to provide support and safety guardrails for new AI tools.

To facilitate this measured approach, the AEC is drafting internal AI guidelines and strategy while aligning with the Australian government's growing governance stack for AI in the public sector. This multilayered system, which the AEC must navigate when considering a new AI use case, spans governance frameworks, assurance mechanisms, ethical principles, and technical standards. These governance instruments not only set rules and boundaries for AI use but also provide resources to help public agencies build their capacity to leverage AI-including training programs and a government-owned sandbox environment for testing AI tools without relying on third-party services. Such resources address the fact that many public agencies not only lack policy frameworks governing AI use, but also the specialized technical standards, resources, tools, and expertise necessary to realize AI projects.

In parallel with government-wide AI policy, the AEC has set up internal support structures to navigate this multilayered governance framework and facilitate the rule-bound development of AI tools. First, the Commission has appointed two government-mandated supervisory roles: a Chief AI Officer, who leads the AEC's efforts in AI development, and a Chief AI Accountable Officer, who oversees responsible AI usage across the AEC. Second, the AEC has established an AI working group, a dedicated forum to discuss, develop, and test ideas for new AI applications. It is open to all staff, making it an important mechanism for ensuring that system development is internally transparent. Its broad mandate includes providing guidance to staff on AI procurement and improving AI awareness and literacy, thereby securing a participatory and even adoption process.

Transparency is the core accountability mechanism throughout this system. Commonwealth agencies are obligated to publish regularly updated transparency statements explicitly detailing which AI tools they use, for what purpose, and under what conditions. By being open to public scrutiny by design rather than by choice, the AEC's AI footprint remains responsive to the public and its democratic interests, reducing the risk of misaligned governance within the Commission and adding a further layer of safeguards against problematic AI adoption. Australia's AI governance stack, including its transparency requirements, highlights how a layered policy infrastructure-rather than a single regulation-can strengthen AI accountability and implementation by ensuring that all relevant stakeholders, including the public, are considered during the development of new applications.

Why this work continues to be important

The experiences of the nine EMBs highlighted in the AI + Elections Clinic Skills Hub are just the tip of the iceberg. Over the span of just a few years, many of the AI use cases featured in the case studies have gone from inconceivable to a practical reality, and as the sophistication of AI technologies continues to grow, AI uptake is likely to follow the same trajectory. However, these steps are not taken in the dark. EMBs are acutely aware of the inherent risks that accompany AI use in the high-risk context of elections-ranging from systemic errors to discrimination and disenfranchisement. This awareness is clearly reflected in the universal shift away from discrete and sporadic AI adoption to more developed AI strategies that not only establish norms delineating how, when, and why AI is used in electoral management, but also imagine the substantive ways in which AI can transform different aspects of EMB operations.

If risks are not adequately addressed through institutional adaptation, the integration of any advanced AI functions into core election activities may not only cede direct control over certain processes but could make the decision-making processes behind generated outcomes highly opaque. This raises concerns about public trust and legitimacy, undermining the integrity of elections and democracy writ large. Therefore, while EMBs should continue to explore new avenues for AI implementation, they should maintain a clear focus on public transparency and create clear pathways for public accountability and responsibility.

The AI + Elections Clinic Skills Hub serves as a tool to encourage this transparency-first approach. It is a venue for the sharing of AI-driven innovations, and just as importantly, it promotes institutional accountability by making EMBs' experiences with AI visible to peer institutions and to the broader public. This is an important step through which EMBs can garner democratic legitimacy for their decision-making.

Far from prescribing a universalist approach to AI adoption, these case studies emphasize how each country can devise its own AI stack, one that respects local norms and its unique national context while still upholding universal principles for ethical AI aligned with expert assessments, human rights, and democratic values. As EMBs continue to come up with imaginative AI-based solutions to electoral issues, it is important that these ideas be collected and shared openly so that these potentially high-impact developments can be understood and learned from. This website will continue to collect and platform these unique experiences.

If you're curious about EMBs' experiences with AI and how AI policies are being developed as guardrails, read the case study series in the AI + Elections Clinic Skills Hub.

About the authors

Cecilia Hammar - Programme Assistant, Digitalization and Democracy
Cecilia is a Programme Assistant in the Digitalization and Democracy team, Global Programmes, in Stockholm. She joined International IDEA in January 2024 and contributes to the Digitalization…
Oscar Brown - Intern, Digitalization and Democracy programme.
Oscar Brown is an intern at the Digitalization and Democracy programme.
International IDEA - International Institute for Democracy and Electoral Assistance published this content on May 07, 2026, and is solely responsible for the information contained herein. Distributed via Public Technologies (PUBT), unedited and unaltered, on May 07, 2026 at 14:24 UTC. If you believe the information included in the content is inaccurate or outdated and requires editing or removal, please contact us at [email protected]