Explainable Artificial Intelligence: Designing human-centric assessment system interfaces to increase explainability and trustworthiness of artificial intelligence in medical applications

Type of thesis

Status of the thesis

Supervisor

Background information on the thesis

With artificial intelligence pervading current technological advances, the medical domain is no exception. Performant but complex AI models, such as deep neural networks, can be used in decision support systems for medical professionals. However, such AI models are commonly referred to as "black boxes" that humans struggle to understand. This lack of understanding leads to trust and compliance issues, especially in the medical context, where the consequences can be severe. Combining the human-computer interaction (HCI) and explainable AI (XAI) domains allows for designing and developing a human-centric AI assessment system that facilitates the AI model's understandability and trustworthiness for the user. As part of this thesis, a prototype was conceptualized based on user-centered research and the XAI literature, implemented as a flexible browser-based application, and evaluated with medical students. The results show connections between interactive explanations and the understandability and trustworthiness of AI models. A summative evaluation of the prototype showed that the users' subjective understanding of the AI model increased through interaction with the system, while their perceived trustworthiness of the AI model decreased. From this finding, we conclude that the presented interactive explanations are suitable for moderating the user's subjective understanding and perceived trustworthiness of the AI model. Additionally, guidance in HCI was observed to reduce explanation satisfaction for the surveyed users, while having no significant effect on the perceived understandability and trustworthiness of the AI model.

Candidate
Philipp Dominik Bzdok

Start date

July 2021

Completed

Jan. 2022