Universität Paderborn

09/15/2025 | Press release

Understanding as a goal: scientists develop concept for explainable artificial intelligence

We now encounter artificial intelligence (AI) almost everywhere: from voice assistants to complex decision-making systems, it makes everyday life easier in many areas. However, it is often difficult to understand the decisions an AI makes. In the Collaborative Research Centre/Transregio (TRR) 318 "Constructing Explainability" at Paderborn University and Bielefeld University, researchers are therefore working to make explanatory processes more understandable for users. But do all users always need the same information? And how can an AI recognise what information users actually need in order to understand? The "Understanding" synthesis group of TRR 318 is addressing these questions; it is one of six overarching groups that bring together key aspects of the research project across disciplines.

"The aim of explanations is - as a rule - for people to understand something in the end," says Prof Dr Heike M. Buhl, Professor of Educational Psychology and Developmental Psychology at Paderborn University. The focus of her research in the synthesis group is on the distinction between two main aspects of understanding as a result of the explanatory process: conceptual knowledge ("knowing that...") and the ability to act ("knowing how..."). As part of its interdisciplinary research, which combines computer science, linguistics, sociology and psychology, the group also differentiates between superficial and deep understanding and examines their dynamics in everyday life.

Sometimes it is enough to know how to use something (e.g. "If you press the switch, the light comes on"). In other cases, a deeper understanding of why something happens is required (e.g. "Pressing the switch closes the circuit and the light comes on"). Precisely these differences also play a major role in the use of AI systems. "If an artificial intelligence is used in an everyday situation without any particular relevance, most users only need a superficial understanding," says Prof. Buhl, who is a member of the synthesis group's organisational team. To gauge which kind of understanding is needed, the AI should not simply output text; ideally, it should also be able to interpret users' verbal and non-verbal feedback.
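To make this distinction concrete, the following minimal Python sketch is purely illustrative and not part of the TRR 318 software: the Explanation class, the light-switch example and the needs_deep_understanding flag are assumptions standing in for whatever signal an explainable AI would derive from a user's feedback.

from dataclasses import dataclass

@dataclass
class Explanation:
    how: str  # superficial, action-oriented ("knowing how")
    why: str  # deep, causal ("knowing why")

LIGHT_SWITCH = Explanation(
    how="If you press the switch, the light comes on.",
    why="Pressing the switch closes the circuit, so current flows and the light comes on.",
)

def explain(explanation: Explanation, needs_deep_understanding: bool) -> str:
    """Return only as much explanation as the user currently needs."""
    return explanation.why if needs_deep_understanding else explanation.how

print(explain(LIGHT_SWITCH, needs_deep_understanding=False))  # everyday use
print(explain(LIGHT_SWITCH, needs_deep_understanding=True))   # deeper interest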

The researchers at TRR 318 therefore see explainability as an interactive process: in communication between people, understanding emerges gradually, through questions and follow-up explanations. AI systems, too, must be able to react flexibly and adapt their explanations. "Our results from the synthesis group show that not only can the goals of AI users in explanatory situations differ widely, but so can the paths to understanding. These depend, for example, on users' prior knowledge or specific interests," explains Prof. Buhl.
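As a rough illustration of this interactive view, the following Python sketch is again hypothetical: the function name, the feedback predicate and the simulated user are assumptions, not TRR 318 components. It offers increasingly detailed explanations until the user's feedback signals understanding.

from typing import Callable, Sequence

def explain_interactively(explanations: Sequence[str],
                          understood: Callable[[str], bool]) -> str:
    """Offer increasingly detailed explanations until the user signals understanding."""
    for attempt in explanations:
        print(f"Explainer: {attempt}")
        if understood(attempt):
            return attempt
    return explanations[-1]

# Simulated user who only signals understanding once the circuit is mentioned.
explain_interactively(
    explanations=[
        "Press the switch and the light comes on.",
        "Pressing the switch closes the circuit, so current flows to the lamp.",
    ],
    understood=lambda text: "circuit" in text,
)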

By focussing on the different needs and prior knowledge of users, TRR 318 paves the way for an AI that is not only powerful but also user-friendly - an AI that understands that explainability is always a question of perspective.

This text was translated automatically.
