Keep talking and nobody decides - How can AI augment users’ ability to detect misinformation while balancing engagement and workload?
General
Publication type: Conference Paper
Published in: Joint Proceedings of the ACM IUI Workshops 2025
Year: 2025
Authors
Abstract
To detect misinformation, users of social networks may rely on AI-based decision support systems (DSS).
However, a DSS’s ability to augment user behavior depends on how it modifies users’ decision-making
and interaction experience. We examined how users’ performance and experience are affected by the level of
automation of a DSS in misinformation detection. In a preregistered within-subjects experiment, N = 99
participants interacted with two AI-based DSS in a simulated environment. The first provided distinct recommendations
(higher level of automation), while the second provided solely evaluative support (lower level of automation).
We compared their effects on user behavior (here: accuracy, interaction frequency) and experience (here: trust,
traceability). Participants showed higher accuracy when receiving recommendations but also interacted less
frequently. Trust and perceived traceability did not differ between the systems. We discuss whether more intensive
processing of the evaluated information could be responsible for the higher number of errors in the evaluative
system.
Downloads
Download publication