To trust or not to trust a human(-like) AI—A scoping review and conjoint analyses on factors influencing anthropomorphism and trust
General
Type of publication: Journal Article
Published in: Zeitschrift für Arbeitswissenschaft
Year: 2025
Volume: 79
Pages: 402–432
Publisher: Springer
DOI: https://doi.org/10.1007/s41449-025-00481-6
Authors
Reuter Muriel
Kirchhoff Britta Marleen
Radüntz Thea
Peifer Corinna
Abstract
AI systems are becoming increasingly complex and human-like, and we interact with them more and more frequently. How does perceived human-likeness affect trust in AI systems? And what makes AI systems appear human in the first place? In a scoping review, we first examined the relationship between anthropomorphism and trust and found that the operationalisation of anthropomorphism was highly inconsistent across studies. To address this gap, two online conjoint analyses were conducted, focusing on four anthropomorphic characteristics identified in the review: name, appearance, voice, and communication style. The studies found that voice and communication style significantly influenced perceptions of human-likeness, with voice having a slightly stronger effect on trustworthiness. Overall, more human-like systems were perceived as more trustworthy across all attributes.
Practical Relevance: The findings highlight the need for a comprehensive, integrated approach to AI design that considers how design elements shape user perceptions and trust. Importantly, the context in which AI is used, particularly in the workplace, must always be considered.