04/02/2026 | News release | Distributed by Public on 04/02/2026 12:27
The environmental impact of AI is becoming harder to ignore, from soaring energy use and water consumption to the rapid expansion of data centres and microchip production. What is being built in the name of innovation is also concentrating power, intensifying surveillance and deepening democratic risk.
The AI boom is being sold as inevitable progress, but the real question is not whether artificial intelligence can do useful things in theory. It is who owns it, who profits from it, what it is mostly being used for, and who pays the environmental and political bill when the hype turns into microchip manufacturing plants, data centres, rising power demand, water stress, surveillance and attacks on democratic life.
A Greenpeace Germany report released in 2025 warned that AI's electricity demand, emissions, water use and raw material needs are all rising fast, and that AI data centre electricity demand could be 11 times higher in 2030 than in 2023 unless governments intervene. A February 2026 report backed by Beyond Fossil Fuels made the greenwashing problem even clearer, finding that 74% of industry claims about AI's climate benefits were unproven and that it could not identify a single case where consumer generative AI systems such as ChatGPT, Gemini or Copilot were delivering material, verifiable and substantial emissions cuts.
This matters because it punctures one of the sector's favourite talking points, namely that energy-hungry generative AI can be excused by vague future climate benefits. In reality, the buildout itself is locking in more extraction, more infrastructure and more corporate power, while the largest firms try to present that expansion as climate leadership.
That is why the debate cannot be reduced to whether AI might do good one day, because the system being built right now is already redistributing power upwards while pushing environmental costs and information risks outwards.
Across different countries, people are fighting data centres not because they are anti-technology, but because they recognise the pattern: land grabbing, noise pollution, pressure on water systems, strain on local grids and the steady erosion of community control over land and infrastructure. In New Brunswick, New Jersey, city leaders removed data centres from a redevelopment plan after public backlash and restored a park requirement, while residents and campaigners explicitly raised concerns about environmental harm, energy consumption, water use and noise pollution. In San Marcos, Texas, the city council voted 5-2 to block a proposed data centre after an hours-long meeting and more than 100 public comments.
March 2012: An aerial view of the Facebook Data Center in Forest City. This 150-acre facility was the second Facebook-built data center in the United States.
In September 2025, South Dublin County Council in Ireland passed a motion calling for a nationwide ban or moratorium on new data centres, or strict conditions including 100% renewables, amid concern that communities are being forced to absorb the economic and ecological costs of someone else's digital expansion. In the UK, campaigners won permission for a legal challenge against a 90MW hyperscale data centre in Buckinghamshire after the government admitted it had made a "serious error" in approving the scheme.
South Africa shows the growing disconnect between the push for AI infrastructure and the ecological realities of water stress and climate disruption. Australia, meanwhile, shows how rapidly this model is being scaled up globally, with the world's second-biggest data centre buildout after the United States.
These are not fringe skirmishes. They are early signs of a broader democratic backlash against a model of digital expansion that expects local communities to absorb the costs while distant corporations and billionaires bank the gains.
Resistance is also becoming cultural, not just local. The QuitGPT boycott has gained traction as a symbolic rejection of the idea that ChatGPT should become the default interface for work, knowledge and everyday life. The movement is explicitly a reaction to OpenAI's deal with the US Department of Defense, and it took on added urgency as the US and Israel began bombing Iran almost immediately afterwards. Dutch historian and author Rutger Bregman has helped amplify it by urging people to cancel their subscriptions, first pointing to more than 700,000 supporters, then more than one million. More than 2.5 million users are now boycotting ChatGPT.
The opposition to OpenAI and ChatGPT is no longer confined to specialists but is reaching writers, organisers, educators and mainstream audiences who are starting to question what exactly they are being asked to normalise.
If you want to understand why campaigners are increasingly focusing on chips as well as chatbots, start with Nvidia, the American chipmaking giant, and its CEO, Jensen Huang. Nvidia announced a staggering annual revenue of US$ 215.9 billion, underscoring just how central the company has become to the global AI boom. Recent earnings show Nvidia's business is now dominated by data centres and AI chips, not gaming, with roughly 80% to 90% of revenue coming from data centres while gaming has fallen below 10%.
Huang has framed AI as "the largest infrastructure build-out in human history" and as foundational infrastructure for the modern world, which is precisely why Nvidia cannot be treated as a passive supplier standing outside the social and ecological consequences of the boom. Without Nvidia's chips, much of the present generative AI race simply would not happen at its current scale.
March 2026: On the opening day of Nvidia's GTC (GPU Technology Conference), Greenpeace USA drove a triple-billboard truck to deliver a direct message to CEO Jensen Huang: 'Hey Jensen, your graphics processors that are fuelling the AI boom are overheating. So is the planet.'

Greenpeace East Asia's October 2025 findings rank Nvidia last on AI supply-chain decarbonisation and argue that the company's record revenues are being built on a "decarbonisation deficit" outsourced to suppliers in Taiwan and South Korea that still depend heavily on fossil power.
Greenpeace East Asia's reporting also highlighted a 4.5-fold increase in emissions from AI chip manufacturing in a single year, showing how quickly the environmental cost of this infrastructure race is escalating. This is not a side effect of the boom. It is part of the industrial model that underpins OpenAI, Anthropic, Amazon and the wider rush to scale generative AI as fast as possible.
Amazon tells a similar story. Jeff Bezos's Amazon made more than US$ 77 billion in profits in 2025 while cutting around 30,000 workers as it ramped up AI spending. This is what "innovation" looks like when it is steered by monopoly power: record profits, job cuts, rising capital expenditure and a false promise that more automation will somehow trickle down into public good.
The political economy of the AI boom should worry anyone who cares about democracy and civil liberties. Tech leaders and companies spent heavily to curry favour with Donald Trump after his reelection, including OpenAI chief executive Sam Altman's US$ 1 million donation to Trump's inauguration fund, while reporting also tied OpenAI co-founder Greg Brockman to a US$ 102 million Trump war chest drive.
Palantir and Alex Karp have gone further into the architecture of state power. ICE agreed to pay Palantir US$ 30 million to build its "ImmigrationOS" surveillance platform, while Karp defended the company's work with ICE and later said critics of ICE should be protesting for "more Palantir", not less. That tells you a great deal about what counts as "progress" when AI, border violence, data extraction and executive power converge.
June 2014: A coalition of grassroots groups from across the political spectrum joined forces to fly an airship over the NSA's data center in Bluffdale, Utah to protest the government's illegal mass surveillance program. Greenpeace flew its 135′ long thermal airship over the data center carrying the message "NSA Illegal Spying Below".
The debate over AI and war has become sharper too. Anthropic reportedly sought explicit contractual prohibitions on mass domestic surveillance and fully autonomous weapons, and has been in conflict with the Pentagon over refusing to broaden those terms, while OpenAI struck a Pentagon deal for classified systems and revised it only after backlash, adding stronger restrictions against domestic surveillance and autonomous weapons without human oversight. That does not make Anthropic harmless, but it does show that even inside this industry there are real fault lines over how far companies are willing to go in militarisation and state surveillance.
Amnesty International has called for bans on AI-based practices including public facial recognition, predictive policing, biometric categorisation, emotion recognition and migrant profiling, while Forbidden Stories has investigated firms pitching AI-enabled surveillance tools that can target journalists, dissidents and activists.
Culture and information are being reshaped at speed as well. Deezer says it is now receiving more than 60,000 fully AI-generated tracks a day, roughly 39% of all music delivered to the platform daily. Six of Spotify's top 50 trending songs in the US in late January were fully AI-generated. Suno is reported to be generating 7 million songs a day. Suno chief executive Mikey Shulman gave the game away when he said: "It's not really enjoyable to make music now. It takes a lot of time, it takes a lot of practice", reducing musical craft to a friction problem for software to remove. Sam Altman's remark that it takes "20 years of life and all of the food you eat" to "train a human" landed for the same reason, because it exposed a worldview in which human creativity and ecological limits are treated less as values than as inefficiencies.
The biggest AI companies have not just disrupted creative industries, they have been repeatedly accused in court of building their products on unlicensed human work, with lawsuits from authors and visual artists, from major news organisations including The New York Times, and from Hollywood studios such as Disney and Universal alleging large-scale copyright infringement. Whether every case succeeds or not, the pattern is clear: companies that present themselves as engines of innovation have been credibly accused of treating books, journalism, music and art as raw material to be scraped, absorbed and monetised without consent, compensation or democratic accountability.
The same systems are also corroding the information environment. Research from Proof News found that leading AI tools gave inaccurate, harmful or incomplete answers to basic election questions more than half the time, while a separate GroundTruthAI analysis reported by NBC found that popular chatbots answered election queries incorrectly 27% of the time.
January 2021: Pro Trump rally in Washington DC.

Grok on X has already shown how this can play out in practice. Election officials traced false claims about ballot deadlines and candidate eligibility back to Grok during the 2024 US race, and later warned that such errors could mislead or confuse voters at scale. With more high-stakes elections approaching, that is not a marginal bug. It is a democratic risk amplified by billionaire-owned platforms, automated recommendation systems and synthetic content designed for maximum engagement rather than truth.
A different future is possible.
Technology for the common good would mean a society where digital tools are built first to meet real social and ecological needs, not to deepen billionaire control or chase speculative profit, and where AI is not treated as an automatic solution but used only when it is appropriate, justified and not more resource-intensive than simpler alternatives.
It would run on 100% additional renewable energy, disclose its full energy, water and supply-chain footprint, and be designed so communities are not left paying the price through higher bills, water stress or pollution.
April 2025: At nightfall in California, Greenpeace USA projected a powerful message of purpose and defiance onto the Marin Headlands, facing the Golden Gate Bridge. The action marked 100 days into the Trump administration's second term.

Ownership and governance would be far more democratic, with strong public rules, limits on monopoly power, meaningful community consent, and institutions able to steer technology towards climate resilience, public services, biodiversity protection and other shared needs. It would also mean building forms of sovereign AI, where data and models are not simply extracted into distant corporate clouds but remain subject to local democratic control, clear auditability, strict privacy safeguards and public-interest rules. Access would be broad, affordable and accessible by design, and the freedoms it protects would include privacy, freedom of expression, the right to dissent, and protection from surveillance, manipulation and exclusion, so that technology expands people's power instead of shrinking it.
Take action to fight the billionaire takeover and corporate intimidation.