Universität Paderborn

03/24/2026 | Press release | Distributed by Public on 03/24/2026 08:37

How to keep artificial intelligence from becoming a climate driver: International lighthouse project demonstrates solutions

Paderborn University leads the development of smart chips for greater energy efficiency

Artificial intelligence (AI) is already firmly integrated into the everyday lives of many people. It navigates, calculates, explains and translates - and consumes an enormous amount of energy in the process. Its CO2 emissions are correspondingly high, because AI models have to process huge amounts of data, which requires high computing power from powerful graphics processing units (GPUs) or central processing units (CPUs). In the AI lighthouse project "eki"[1], a research team led by Paderborn University has been working on improving the energy efficiency of AI systems and has developed methods that can reduce the energy consumption of AI by up to 90 per cent. Special computer chips are used instead of GPUs and CPUs. The Federal Ministry for the Environment, Climate Action, Nature Conservation and Nuclear Safety funded the project with around 1.5 million euros over a period of three years.

Optimising the energy efficiency of AI through FPGAs

Deep neural networks (DNNs) are an elementary component of AI and are trained in a complex process with very large amounts of data. They are therefore responsible for a growing share of the computing load, and thus of the energy consumption and CO2 emissions, in data centres. Prof. Dr. Marco Platzner from the Institute of Computer Science at Paderborn University led the "eki" project and explains: "Deep neural networks are a type of AI that works on the principle of the human brain. The 'deep' part refers to the fact that the networks have many layers that process data to recognise patterns, analyse images and understand language." After the DNNs have been trained with huge amounts of data, the resulting models are put to use. As a rule, GPUs or CPUs are used for this, but their energy efficiency is low. The project team has therefore developed a solution: with the help of freely programmable chips, so-called "field-programmable gate arrays" (FPGAs), the energy efficiency of AI systems for DNN computation can be optimised.
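The layered structure Prof. Platzner describes can be sketched in a few lines; this is a minimal, purely illustrative example (layer sizes and weights are hypothetical, not taken from the project):

```python
import numpy as np

# Minimal sketch of a "deep" network: several layers, each transforming
# its input before passing it on. All sizes are illustrative only.
rng = np.random.default_rng(0)
layer_sizes = [16, 32, 32, 8]  # input -> two hidden layers -> output

weights = [rng.standard_normal((m, n))
           for m, n in zip(layer_sizes, layer_sizes[1:])]

def forward(x):
    """Pass data through every layer; ReLU supplies the non-linearity."""
    for w in weights[:-1]:
        x = np.maximum(x @ w, 0.0)
    return x @ weights[-1]

y = forward(rng.standard_normal(16))
print(y.shape)  # (8,)
```

Each matrix multiplication here is exactly the kind of arithmetic that dominates the computing load - and hence the energy consumption - once such networks are deployed at scale.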

Measure - Automate - Optimise: Hurdles and successes of the project

For comparison: conventional processors execute fixed instruction sets, while the circuitry of FPGAs can be customised, creating a kind of tailor-made hardware. The advantage: depending on the application, the chips consume less energy and compute faster than graphics processors. The disadvantage: they are more complex to program. But even this hurdle has been overcome, because researchers from the Department of Computer Science have long been working on energy-efficient computing with FPGAs together with the Paderborn Centre for Parallel Computing (PC2) at Paderborn University. "The company AMD/Xilinx had already developed the open-source framework FINN for neural networks on FPGAs. In close collaboration, we were able to contribute our experience to make FINN even better and focus on energy efficiency," explains Prof. Platzner.

In order to reduce energy requirements, the scientists simplified the AI models by removing unnecessary connections within the networks and ensuring that complex functions run efficiently. DNNs were also distributed across several FPGAs. Another focus was on developing reliable methods for predicting the energy requirements of individual components, which the researchers achieved by extending FINN. They were also able to measure the consumption of complete inference runs - the moments when a trained AI model applies its knowledge to new data - and compare it with other technologies. "It is particularly pleasing that we were able to achieve an increase in energy efficiency of up to ten times compared to the use of graphics processors. This not only reduces power consumption, but also - depending on the electricity mix - CO2 emissions. As the use of AI continues to grow, the energy requirements of DNNs will become an important environmental factor in the future," summarises Prof. Platzner.
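A common way to "remove unnecessary connections" is magnitude pruning: weights close to zero contribute little to the output and can be dropped. The following is a hedged sketch of that general idea, not the project's actual method (matrix size and sparsity level are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
w = rng.standard_normal((64, 64))  # one illustrative weight matrix

def prune_by_magnitude(weights, sparsity=0.9):
    """Zero out the smallest-magnitude weights, keeping the largest ones."""
    k = int(weights.size * sparsity)          # number of weights to drop
    threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    mask = np.abs(weights) > threshold        # True = connection kept
    return weights * mask, mask

pruned, mask = prune_by_magnitude(w, sparsity=0.9)
print(f"connections kept: {mask.mean():.0%}")  # roughly 10%
```

On customised hardware such as an FPGA, the zeroed connections need not be wired up at all, which is one reason simplified models can translate directly into lower energy consumption.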

The code that the scientists have developed is openly available in FINN. Paderborn University's PC2 also offers workshops that introduce interested parties to the methods for mapping DNNs onto FPGA systems and for analysing their energy consumption.

In addition to Paderborn University, the Hamm-Lippstadt University of Applied Sciences, the South Westphalia University of Applied Sciences, the HPC company MEGWARE (Chemnitz) and the AMD Research Labs in Ireland were also involved in the project. Further information can be found on the project website.

This text was translated automatically.

[1] Full project name: "Energy-efficient AI in the data center by approximating DNNs for FPGAs"

Universität Paderborn published this content on March 24, 2026, and is solely responsible for the information contained herein.