
How much trust does artificial intelligence deserve? New study provides answers

© tippapatt/adobe.stock.com
A Bochum-Dortmund research team defines six criteria for trustworthy AI - a milestone for the Research Center Trust.

How can you tell whether you can trust an artificial intelligence system? Researchers from TU Dortmund University and Ruhr-Universität Bochum have jointly investigated this question and developed a set of criteria for systematically describing the trustworthiness of AI systems. The concept was published in the international journal Topoi.

The interdisciplinary team led by Prof. Dr. Emmanuel Müller (TU Dortmund University), Dr. Carina Newen (TU Dortmund University) and Prof. Dr. Albert Newen (Ruhr-Universität Bochum) proposes six dimensions along which the trustworthiness of an AI system can be evaluated: objective functionality, transparency, uncertainty of underlying data and models, embodiment, immediacy, and commitment to the trusting party.
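
The paper presents these dimensions conceptually rather than as a formal metric. Purely as an illustration of how such a checklist might be recorded in practice, here is a minimal Python sketch; the class name, field names, numeric scores and threshold are hypothetical and not part of the study:

```python
from dataclasses import dataclass, fields

@dataclass
class TrustworthinessProfile:
    """Hypothetical checklist over the six dimensions named in the study.
    Scores are illustrative ratings in [0, 1]; the paper describes the
    dimensions qualitatively, not as a numeric scale."""
    objective_functionality: float
    transparency: float
    data_and_model_uncertainty: float
    embodiment: float
    immediacy: float
    commitment_to_trusting_party: float

    def weakest_dimensions(self, threshold: float = 0.5) -> list[str]:
        """Return the dimensions rated below the given threshold."""
        return [f.name for f in fields(self)
                if getattr(self, f.name) < threshold]

# Example: a chatbot rated high on functionality but low on transparency
# (all numbers invented for illustration).
chatbot = TrustworthinessProfile(
    objective_functionality=0.8,
    transparency=0.3,
    data_and_model_uncertainty=0.4,
    embodiment=0.1,
    immediacy=0.7,
    commitment_to_trusting_party=0.2,
)
print(chatbot.weakest_dimensions())
# ['transparency', 'data_and_model_uncertainty', 'embodiment',
#  'commitment_to_trusting_party']
```

A real assessment along the paper's dimensions would be qualitative and argumentative rather than numeric; the sketch only shows how the six criteria could structure a systematic review of a given system.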

"Our criteria reveal where current AI systems such as ChatGPT or autonomous vehicles still have some catching up to do," explains the research team. Transparency and data security in particular pose major challenges: Deep learning methods deliver impressive results - but often remain difficult to understand. At the same time, biases in training data can distort decisions.

From a philosophical perspective, the researchers also emphasize the need for a critical approach to AI systems: they can be valuable information tools, but trust in them should always remain reflective.

The study was conducted as part of the Ruhr Innovation Lab, a cooperation between Ruhr-Universität Bochum and TU Dortmund University through which the two universities are applying for funding under the German government's Excellence Strategy. The aim is to lay the foundations for a sustainable digital society in which technical performance is just as important as ethical responsibility.