Computer Science News

Developing explanations together: New Collaborative Research Centre on Artificial Intelligence at the Universities of Paderborn and Bielefeld


The German Research Foundation (DFG) today announced the establishment of a new Collaborative Research Centre/Transregio (TRR) on the topic of "Explainability of Artificial Intelligence (AI)" at the Universities of Paderborn and Bielefeld, providing around 14 million euros in funding over the next four years. The strongly interdisciplinary research programme, entitled "Constructing Explainability", goes beyond the question of the explainability of AI as the basis of algorithmic decision-making: its approach promotes the active participation of people in socio-technical systems. The aim is to improve human-machine interaction by focusing on how algorithms are understood and by investigating that understanding as the product of a multimodal explanatory process. The four-year funding period begins on 1 July.

Artificial intelligence is now an integral part of modern life - it screens job applications, evaluates X-ray images and suggests new playlists. Algorithmic decision-making is the basis for such processes. "Citizens have a right to have algorithmic decisions made transparent. The goal of making algorithms accessible is at the core of so-called eXplainable Artificial Intelligence (XAI), which treats transparency, interpretability and explainability as desired outcomes," says Prof. Dr. Katharina Rohlfing, spokesperson of the new Collaborative Research Centre. "The problem is that in our digital society, algorithmic approaches such as machine learning are rapidly increasing in complexity. Opacity is a serious problem in all contexts, but especially when people have to make decisions on this opaque basis," adds Prof. Dr. Philipp Cimiano, deputy spokesperson. Especially where predictions are made in fields such as medicine or law, it is essential to understand machine-driven decision-making, Cimiano continues. Although there are already approaches that focus on the explainability of such systems, the explanations they produce presuppose knowledge of how AI works. What is missing, according to Cimiano and Rohlfing, are concepts for the co-construction of explanations, in which the addressees - that is, the humans - are more strongly involved in the AI-driven explanation process.

Rohlfing explains: "In our approach, we assume that explanations are comprehensible to users only if they are created not just for them, but with them. In explanations between people, this is ensured by the exchange between the participants, who can ask questions and signal when they do not understand." In an interdisciplinary team, linguists, psychologists, media researchers, sociologists, economists and computer scientists work closely together to investigate the principles and mechanisms of explaining and understanding as social practices, and how these can be implemented in AI systems. The team also explores how the co-construction of explanations in the interplay between humans and machines establishes new social practices, and how these affect society.

The approach is intended to provide new answers to social challenges related to artificial intelligence. At its core, it is about human participation in socio-technical systems, which also promotes users' information sovereignty. "Our goal is to create new forms of communication with AI systems that can really be explained and understood, thus enabling new forms of assistance," Rohlfing summarises.

(Photo: Paderborn University) Prof. Dr. Katharina Rohlfing of Paderborn University is the spokesperson of the new Collaborative Research Centre.
(Photo: Bielefeld University, Mike-Dennis Müller) Prof. Dr. Philipp Cimiano of Bielefeld University is the deputy spokesperson of the new Collaborative Research Centre.
(Photo: Paderborn University) A new Collaborative Research Centre of the Universities of Paderborn and Bielefeld aims to explain artificial intelligence.

Contact