Questioning trust in AI research: exploring the influence of trust assessment on dependence in AI-assisted decision-making

General

Publication type: Journal Article

Published in: Behaviour & Information Technology

Year: 2025

Publisher: Taylor & Francis

DOI: https://doi.org/10.1080/0144929X.2025.2553153

Authors

Tim Schrills

Thomas Franke

Steffen Hoesterey

Eileen Roesler

Abstract

Trust is considered crucial for effective interaction between humans and artificial intelligence (AI), necessitating valid trust assessment methods. The ‘question-behaviour effect’, however, suggests that administering a questionnaire can influence subsequent behaviour, for example participants’ dependence on AI. The objective of the present research was to examine the effect of trust assessment on reliance in the context of an AI-supported decision-making task. We designed an AI-supported task requiring participants to decide on patterns in so-called Kandinsky Figures. In a scripted experiment with a 2 × 2 between-subjects design, N = 149 participants’ trust was assessed at different times (before block 1 or block 2) and with differing assessment extent (i.e. scale length). Participants’ agreement with AI recommendations and task completion time served as behavioural trust indicators. We found no effect of trust assessment on behaviour, and correlations between trust and dependence were notably low. Participants’ dependence matched the instructed reliability level of the AI system, and our findings did not suggest the presence of a question-behaviour effect of trust assessment. Overall, while conducting a trust assessment did not influence dependence, our results question the conceptualisation of trust as a general predictor of dependence, especially in comparison to instructed reliability.
