Color for Characters - Effects of Visual Explanations of AI on Trust and Observability

General

Publication type: Conference Paper

Published in: Artificial Intelligence in HCI. HCII 2020. Lecture Notes in Computer Science

Year: 2020

Volume: 12217

Pages: 121-135

Publisher: Springer

DOI: https://doi.org/10.1007/978-3-030-50334-5_8

Authors

Tim Schrills

Thomas Franke

Abstract

The present study investigates the effects of prototypical visualization approaches aimed at increasing the explainability of machine learning systems with regard to perceived trustworthiness and observability. As the number of processes automated by artificial intelligence (AI) increases, so does the need to investigate users’ perception. Previous research on explainable AI (XAI) tends to focus on technological optimization. The limited amount of empirical user research leaves key questions unanswered, such as which XAI designs actually improve perceived trustworthiness and observability. We assessed three different visual explanation approaches: either a table of the classification scores used for classification alone, or that table combined with one of two different backtraced visual explanations. In a within-subjects online experiment with N = 83, we examined the effects on trust and observability. While observability benefited from visual explanations, information-rich explanations also led to decreased trust. Explanations can support human-AI interaction, but differentiated effects on trust and observability have to be expected. The suitability of different explanatory approaches for individual AI applications should be examined further to ensure a high level of trust and observability in applications such as automated image processing.