Universität Bielefeld

09/09/2025 | News release | Archived content

Developing explanations together

After four years of intensive research, the Transregional Collaborative Research Centre 318 'Constructing Explainability', funded by the German Research Foundation (DFG), is taking stock at the end of its first funding phase. In this interview, the two spokespersons, Professor Dr Katharina Rohlfing and Professor Dr Philipp Cimiano, share key insights: What have they learned about how artificial intelligence (AI) explains things? What were the challenges of bringing together researchers from different fields? And how has technological progress, such as large language models like ChatGPT, changed their work?

What were the most important new findings about explainability in AI?

Katharina Rohlfing: Central to our approach is the assumption that current explainable AI systems have a fundamental flaw: they treat explanations as a one-way street, that is, the machine explains, and the person listens. Our research made the explainee visible as the addressee of the explanation, because in real life, understanding is a bidirectional matter: people talk to each other, ask questions, nod, look confused, or use gestures to show whether they understand something.
That's why we developed a new framework that sees explaining as a two-way process, like a conversation that unfolds over time and is shaped by the participants acting together. We call systems designed according to our framework Social Explainable AI, or sXAI. They adapt their explanations to each individual in real time based on how the person responds and what they consider relevant.

© TRR 318/ Stefan Sättele

Professor Dr Katharina Rohlfing and Professor Dr Philipp Cimiano form the spokesperson team for the Collaborative Research Centre TRR 318 and can look back on four years of intensive research.

How did you test whether this framework actually works?

Katharina Rohlfing: We closely studied real conversations, for example, how people explain things in everyday situations. What we saw is that even though most explanations start with a monologue, the explainee (the person receiving the explanation) is usually actively involved: they ask follow-up questions, look puzzled, or signal their progress in understanding. That means explanation is often a dialogue, not a monologue.
Analyzing the explaining process more closely, we also looked at how people use language and gestures to signal what they understand and what needs to be pursued further. We identified certain patterns that show how people build understanding together. In addition, we analyzed how they "scaffold" each other, offering support step by step, like building a temporary structure to help someone climb. For example, one can first instruct someone how to do something and then move on to explaining what not to do. Such negative instructions can be a helpful scaffold.

Philipp Cimiano: Our computational work was concerned with implementing our framework in AI systems. These systems respond to the person they are explaining something to. They consider three key aspects: cooperation (how well the interaction works), social appropriateness (how suitably the system behaves), and understanding.
The SNAPE system, developed in project A01, is a good example. It is sensitive to how a person reacts and adapts its explanation accordingly. So it doesn't give the same explanation to everyone, but customizes it depending on the situation.
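To illustrate the idea of an explanation that adapts to listener feedback, here is a minimal sketch. It is purely illustrative and not the actual SNAPE implementation; all names and the simple yes/no feedback signal are assumptions standing in for the multimodal cues a real system would monitor.

```python
# Illustrative sketch of an adaptive explanation loop (hypothetical, not SNAPE itself).
# The "feedback" here is a console prompt; a real system would draw on speech,
# gaze, or gestures to estimate the listener's understanding.

from dataclasses import dataclass

@dataclass
class ExplanationStep:
    topic: str
    brief: str      # short version of the explanation
    detailed: str   # elaborated version, e.g. with an example

def observe_feedback() -> str:
    """Stand-in for a partner model: ask the listener directly."""
    answer = input("Did that make sense? [y/n] ").strip().lower()
    return "understood" if answer.startswith("y") else "confused"

def explain(steps: list[ExplanationStep]) -> None:
    for step in steps:
        print(f"\n{step.topic}: {step.brief}")
        if observe_feedback() == "confused":
            # Adapt: re-explain with more detail instead of moving on.
            print(f"Let me put it differently: {step.detailed}")

if __name__ == "__main__":
    explain([
        ExplanationStep(
            topic="Board game rule",
            brief="You may only move diagonally.",
            detailed="Think of it like checkers: each move goes one square "
                     "to the upper-left or upper-right, never straight ahead.",
        ),
    ])
```

The point of the sketch is only the control flow: the explanation is not delivered as a fixed monologue, but each step waits for a signal from the explainee and branches accordingly.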

© TRR 318/ Stefan Sättele

Professor Philipp Cimiano at the third TRR conference 'Conceptualising Explanations', which took place at the beginning of the year at the Bielefeld Centre for Interdisciplinary Research (ZiF). Cimiano heads projects B01, C05 and INF in Transregio 318.

Did you develop new methods to study explanation more effectively?

Philipp Cimiano: Yes, we found new ways to better examine how explanations work. For instance, we developed new instruments to measure whether someone understood something from an explanation or was left confused.
And we didn't limit our investigations to the lab environment. It was important to us to see how explainability plays out in everyday life, with different people in different situations. For example, we asked people what kinds of AI systems they use in their daily lives and whether they would like an explanation of how these systems work.

Katharina Rohlfing: Our goal was to investigate how understanding develops - not just whether it happens. That's why we introduced a method where participants look back and describe their "aha moments" - those key turning points in their understanding. We then put these moments in the spotlight of our analysis.
Another method was to design special workshops where people and AI work together to create explanations. These new methods are giving us deeper insights not only into the process of explaining and understanding, but also into how to foster it and when an explanation is helpful.

© TRR 318/ Stefan Sättele

Professor Dr Katharina Rohlfing is spokesperson for TRR 318 and heads projects A01, A05 and Z.

What was especially challenging during the first funding phase?

Philipp Cimiano: The biggest challenge was bringing together people from very different disciplines like computer science, linguistics or psychology. Everyone has their own way of thinking and speaking. So first, we had to develop a shared language.
Another major challenge came with the release of ChatGPT. Large language models changed a great deal of research concerned with technology development and opened up new possibilities for every user. That's why we quickly formed a working group to focus on these developments, which led to new research projects.

How well did interdisciplinary collaboration work?

Katharina Rohlfing: I am proud to say that as a team we are strong in interdisciplinarity on many levels. Within individual projects, people from different fields work together, so the projects have an interdisciplinary architecture. But we also work across projects, as was the case for our first book on social XAI, which will be published this year. On top of this, we work in groups pursuing hot topics that we consider relevant to our TRR, such as the group on LLMs.
Likewise, regular meetings, like our TRR conferences, writing retreats, and the so-called "Activity Afternoons", strengthened collaboration. Of course, it's not always easy to integrate new members into this established culture, but we've created formats that ease that process.

What are the major challenges you see for the future?

Philipp Cimiano: Large language models, like ChatGPT, are powerful, but they also have limitations: they often don't take the specific situation into account. They explain things, yes - but they don't really understand who is asking or why. In the future, we'll need systems that can adapt flexibly to the situation, systems that understand what's relevant right now.

Katharina Rohlfing: We need to fundamentally shift how we think about explainability. It's not enough for an output to be understandable. The systems need to create a context in which the interaction can be shaped by the users, so that people don't just passively receive information but actively engage with it to arrive at a relevant understanding. This strengthens the collaboration between humans and AI and ensures that technology is not only understandable but also useful.

Collaborative Research Centre/Transregio 318

In the Collaborative Research Centre/Transregio 318, scientists from the Universities of Bielefeld and Paderborn are researching how artificial intelligence can be made understandable and explainable and how users can be actively involved in the explanation process. Under the title 'Constructing Explainability', researchers from various disciplines are working together in 20 projects and six synthesis groups. The first funding phase will run until the end of the year.
