Brandenburgische Technische Universität Cottbus-Senftenberg

04/22/2026 | News release | Distributed by Public on 04/22/2026 01:16

A cartoon vision became research

22.04.2026

As a child, he was fascinated by a flying brain in a science fiction series. Today, at the BTU, he is researching how machines can recognise human mental states and adapt to them. Between that early fascination and cutting-edge international research, a new chapter in artificial intelligence is unfolding.

When he was four years old, Prof. Thorsten Zander watched a cartoon series that would change his life. In *Captain Future*, one character in particular fascinated him: the 'flying brain' of a deceased scientist, which had been integrated into a robot's body. 'The idea that a brain and a machine could become one has never left me,' he says today.

From the series to science

What seemed like science fiction back then is now at the heart of his scientific work: the connection between human brain signals and artificial intelligence. The professor holds the chair for Neuroadaptive Human-Technology Interaction at the Brandenburg University of Technology Cottbus-Senftenberg. His research goal is as visionary as it is concrete: machines should understand not only our words, but also our mental states - in other words, whether we are surprised, irritated, delighted or overwhelmed. Systems should be able to adapt to this - neuroadaptively.

When the brain becomes the interface

At the heart of his work are so-called passive brain-computer interfaces (BCIs). Unlike invasive procedures, in which electrodes are implanted directly into the brain, these systems rely on non-invasive measurement methods - sensors that detect brain activity from the outside.

But his aim is not to read minds. "We cannot tell whether someone wants to bake a pizza tonight," says Prof. Zander. "But we can determine with a high degree of accuracy whether someone understands something, is surprised or is reacting emotionally."

These mental states are enormously valuable for interacting with machines. Today's AI systems - however powerful they may be - do not experience the world. They process text, but they feel no uncertainty, no joy, no hesitation.

This is where his research comes in: brain signals provide AI with additional information about what a person means.

An example: if someone says "I'm feeding my cat", that is linguistically neutral. But if the measured brain signals show that joy is present, a new level of meaning emerges. The AI understands context, emotion and personal connection - and can react accordingly.
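This pairing of an utterance with a decoded state can be illustrated with a minimal sketch. All names here are hypothetical, and `classify_state` is only a stub standing in for a real passive-BCI classifier:

```python
# Hypothetical sketch: enriching a linguistically neutral utterance with a
# mental state decoded from brain signals. Not a real BCI API.

def classify_state(eeg_features):
    """Stub for a classifier mapping EEG features to a mental-state label."""
    # A real system would run a trained model on the measured signals.
    return "joy" if sum(eeg_features) > 0 else "neutral"

def enrich_utterance(text, eeg_features):
    """Attach the decoded state to the utterance so an AI can use the context."""
    state = classify_state(eeg_features)
    return {"text": text, "mental_state": state}

message = enrich_utterance("I'm feeding my cat", [0.4, 0.7, 0.2])
print(message)  # {'text': "I'm feeding my cat", 'mental_state': 'joy'}
```

The utterance itself is unchanged; the added label is what lets the system react to emotion rather than to words alone.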

From scepticism to international momentum

When he first presented these ideas around 20 years ago, he was "smiled at kindly", as he puts it.

Today, hundreds of millions of dollars are being poured into similar approaches worldwide. Chinese and US companies are investing heavily in the brain-machine interface - in some cases using invasive methods. Europe, on the other hand, is focusing more on non-invasive technologies - an approach he considers more socially responsible.

A milestone in his own research was the development of so-called universal classifiers - systems that no longer need to be trained individually for each person, but work immediately with new users. To develop the technology to market readiness, his company Zander Labs received €30 million in funding from the Federal Agency for Cybersecurity.
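The "universal" idea can be read as: fit a model once on data pooled from many people, then apply it to an unseen user with no calibration session. A schematic toy version (emphatically not Zander Labs' actual method) using a nearest-centroid rule:

```python
# Schematic sketch, not the actual Zander Labs method: a "universal" classifier
# is fitted once on data pooled across many users, then applied to a brand-new
# user without any individual training.

def fit_universal(pooled):
    """Compute one centroid per state label from data pooled across users."""
    by_label = {}
    for label, features in pooled:
        by_label.setdefault(label, []).append(features)
    return {label: [sum(col) / len(col) for col in zip(*rows)]
            for label, rows in by_label.items()}

def predict(centroids, features):
    """Assign the nearest-centroid label; no per-user calibration needed."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda lbl: dist(centroids[lbl], features))

# Toy feature vectors pooled from several training users:
pooled = [("surprise", [1.0, 0.1]), ("surprise", [0.9, 0.2]),
          ("neutral", [0.1, 0.9]), ("neutral", [0.2, 1.0])]
model = fit_universal(pooled)
# A new user's features are classified immediately:
print(predict(model, [0.95, 0.15]))  # surprise
```

Real universal classifiers must cope with large between-person variability in brain signals, which is what makes the result described above notable; the toy model only shows the workflow.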

Data protection in mind

Perhaps the most important question, however, is: who controls this data?

"I wouldn't send my own brain signals to an unknown company," he says frankly. That is why his team follows a clear principle: the processing of signals should take place locally - on the user's own device. Only interpreted states such as "surprise" or "joy" are transmitted, not the raw data. Transparency and decision-making power remain with the user.

"The technology can only work if people themselves determine what happens to their data."

First European conference with a clear focus on neuroadaptive artificial intelligence at the BTU

The international conference NAT 2026 - Neuroadaptive Technologies will take place from 22 to 24 April 2026. Under the theme "Brain meets AI", researchers, policy-makers and representatives from industry and start-ups will discuss the future of neuroadaptive artificial intelligence. The focus is on technologies that use non-invasively measured brain signals to recognise mental states such as attention, overload or surprise, and adapt AI systems accordingly. The aim is to make human-machine interaction more precise, context-sensitive and human-centred. In addition to scientific questions regarding applications and technological limitations, the conference addresses social and political aspects such as the protection of sensitive brain data and regulatory frameworks. The NAT series was launched in 2017 and will take place for the fifth time in 2026.

Contact

Prof. Dr. rer. nat. Thorsten O. Zander
T +49 (0) 355 5818-613
thorsten.zander(at)b-tu.de

Kristin Ebert
Kommunikation und Marketing
T +49 (0) 355 69-2115
kristin.ebert(at)b-tu.de
Inspired by Captain Future: What was once science fiction is now shaping research into neuroadaptive artificial intelligence. (Image: WrongWay - stock.adobe.com)
From the 'flying brain' in *Captain Future* to cutting-edge research: Prof. Dr Thorsten O. Zander is developing systems at the BTU that recognise human states and adapt AI accordingly. (Photo: Ralf Schuster / BTU)