Within two months of its release, ChatGPT had reached 100 million monthly active users by January 2023, a record pace of user adoption. AI-powered systems have leapt into our lives in forms ranging from speech recognition to autonomous driving to medical diagnosis. The breakneck speed and vast scope of adoption raise many policy questions, with AI algorithms at their core.
AI can be broadly defined as computer systems that perform tasks typically requiring human intelligence. Different technologies fall under this umbrella, from predictive AI (used in hiring, for example) to generative AI (such as ChatGPT). Algorithmic governance is a key layer in an "AI Sovereignty Stack," as Luca Belli suggests, alongside energy, data, computing power, human talent, and cybersecurity.
AI algorithms are fundamentally different from previous generations. Earlier algorithms included search systems designed for relevance or social media algorithms coded to maximize engagement. Modern AI departs from this "rule-based" approach: as Kai-Fu Lee explains, AI models let machines develop pattern-recognition capabilities by learning from enormous numbers of examples, leveraging neural networks. In other words, AI models can modify themselves and, in effect, create new algorithms when fed new data.
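To make the rule-based versus learned distinction concrete, here is a minimal, hypothetical sketch in Python (all names and data are invented for illustration): a hand-written spam rule next to a single-neuron perceptron whose decision boundary is learned from labeled examples rather than written by a programmer.

```python
# Hypothetical illustration: rule-based vs. learned classification of messages
# as "spam" from two simple features. Data and thresholds are invented.

def rule_based_is_spam(num_links: int, num_caps_words: int) -> bool:
    # Earlier-generation approach: a human writes the decision rule explicitly.
    return num_links >= 3 or num_caps_words >= 5

def train_perceptron(examples, epochs=20, lr=0.1):
    # Learned approach: weights start at zero and are nudged by each labeled
    # example, so the "rule" emerges from data instead of being hand-coded.
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), label in examples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = label - pred
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Toy training data: (num_links, num_caps_words) -> 1 for spam, 0 for not spam.
data = [((0, 0), 0), ((1, 0), 0), ((4, 2), 1), ((2, 6), 1), ((0, 1), 0), ((5, 5), 1)]
weights, bias = train_perceptron(data)

print(rule_based_is_spam(4, 2))                    # decision written by a person
print(weights[0] * 4 + weights[1] * 2 + bias > 0)  # decision learned from examples
```

The point is not the specific code but where the decision logic comes from: the programmer in the first case, the training data in the second.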
Such abilities raise thorny questions of agency and accountability. Can a computer be an author, as Timothy Butler asked in 1982? Should AI become a legal person, as Lawrence Solum questioned in 1991? If self-driving cars cause harm, who is to blame? Should creators and marketers be held responsible for algorithmic harms? Might the Turing Test be misguided as a measure of agency, in light of the Chinese Room Argument, which holds that AI merely manipulates symbols without having a "mind" of its own? In one notable case, Thaler v. Perlmutter (2025), a U.S. appeals court ruled that artwork generated by an AI cannot be copyrighted because the relevant statutes contemplate only human authors.
While AI assistants may improve efficiency in everyday tasks, algorithmic bias can magnify harm, given AI systems' global reach and algorithmic invisibility. Early examples include Microsoft's Tay chatbot (removed after becoming racist) and Amazon's recruitment tool (scrapped for gender bias). Government programs have seen disasters like SyRI, a Dutch system that included "non-Western appearance" as a fraud indicator, leading to family separations and suicide.
Such issues are stubbornly persistent. Google Gemini produced historically inaccurate images and reportedly told a student seeking homework help to "please die." ChatGPT often "hallucinates," inventing nonexistent legal precedents or asserting untrue facts. In an extreme case, a Florida teen committed suicide after developing an obsession with a Character.AI chatbot that encouraged an abusive relationship.
The discourse around, and pursuit of, "trustworthy AI" cannot succeed if we fail to address algorithmic opacity. Scholars Lucas Introna and Helen Nissenbaum argued in 2000 that search engines raise serious issues of systematic bias. Social media algorithms have similarly shown pernicious effects. Eli Pariser's "filter bubble" concept presciently captured algorithms' impact on polarization, while Frances Haugen revealed how Facebook's algorithms harm children's mental health and fuel ethnic violence.
Without oversight, we continue living in what Frank Pasquale calls a "black box society" where technical opacity shields harmful practices. The research community has documented these harms in works like Cathy O'Neil's Weapons of Math Destruction and Virginia Eubanks's Automating Inequality.
AI's power demands responsible oversight. OpenAI's then-chief scientist Ilya Sutskever warned that increasingly potent models could easily cause great harm. Industry leaders including Elon Musk petitioned for a pause in the "out of control" AI race to build digital minds that their creators cannot reliably control. Despite these risks, U.S. legislation has made little progress toward regulation.
The United States has no comprehensive federal law regulating AI. The Algorithmic Accountability Act, introduced in Congress in 2019 and reintroduced in 2022, has not advanced. President Biden's 2023 executive order lacked enforcement specifics, and the White House Blueprint for an AI Bill of Rights was removed by the Trump administration in 2025. California's comprehensive AI safety bill was vetoed by Governor Newsom for fear of "chilling effects" on industry.
In the United States, AI firms enjoy broad legal protections through Section 230 of the Communications Decency Act and First Amendment interpretations that treat algorithmic outputs as "opinions." Tim Wu observes that "the Supreme Court has effectively privatized speech control." However, as Oren Bracha and Frank Pasquale have argued, doing nothing is not an option given AI's centrality and potential for harm. AI should not be exempt from the legal responsibility to operate safely.
With limited U.S. regulation, oversight has emerged elsewhere. Stanford's 2024 AI Index Report shows that 75 countries have adopted national AI strategies. Among these, two AI governance regimes show particular influence: the EU and China.
For the EU, "digital sovereignty" is about strategic autonomy to counterbalance the United States and China. The EU Artificial Intelligence Act harmonizes rules across member states using a risk-based approach, classifying AI systems into four risk categories:
The AI Act requires algorithmic testing and systemic risk assessments for high-risk categories. Like other EU laws such as the General Data Protection Regulation (GDPR), the act has extraterritorial impact (the "Brussels Effect"), with noncompliance fines of up to €35 million or 7% of a firm's global annual turnover, whichever is higher. The EU's history of levying GDPR fines suggests serious enforcement, though questions remain about risk categorization.
China prizes both industrial capacity and regulatory prowess: sovereignty through AI as well as sovereignty over AI. Despite U.S. sanctions, Chinese labs like DeepSeek have produced models comparable to OpenAI's while using fewer resources, suggesting the sanctions may instead have accelerated China's technological self-sufficiency.
China has enhanced its data governance through the Personal Information Protection Law. A draft AI law balances industry incentives with safeguards against potential harms such as algorithmic discrimination, and China has developed rules for labeling AI-generated content. China is also trying to establish cross-border flow mechanisms for non-personal data with the EU.
Microsoft, Google, and Amazon together control roughly two-thirds of the cloud computing market used for AI training. When a handful of companies control the infrastructure needed to develop and deploy AI, who are the real AI sovereigns?
To advance an AI development agenda driven by public rather than corporate interest, four policy directions emerge:
The true measure of AI sovereignty lies not in state control or corporate dominance but in whether AI enhances human autonomy and well-being. Any meaningful approach to algorithmic governance must center the public interest and democratic oversight.