Understanding successful human–AI teaming: The role of goal alignment and AI autonomy for social perception of LLM-based chatbots

General

Publication type: Journal Article

Published in: Computers in Human Behavior: Artificial Humans

Year: 2026

Volume: 7

Publisher: Elsevier

DOI: https://doi.org/10.1016/j.chbah.2025.100246

Authors

Christiane Attig

Luisa Winzer

Tim Schrills

Mourad Zoubir

Maged Mortaga

Patricia Wollstadt

Christiane Wiebel-Herboth

Thomas Franke

Abstract

LLM-based chatbots such as ChatGPT support collaborative, complex tasks by leveraging natural language processing to provide skills, knowledge, or resources beyond the user’s immediate capabilities. Joint activity theory suggests, however, that effective human–AI collaboration requires more than responding to verbatim prompts – it depends on aligning with the user’s underlying goal. Since prompts may not always explicitly state the goal, an effective LLM should analyze the input to approximate the intended objective before autonomously tailoring its response to align with the user’s goal. To test these assumptions, we examined the effects of LLM-based chatbots’ autonomy and goal alignment on multiple social perception metrics as key criteria for successful human–AI teaming (i.e., perceived cooperation, warmth, competence, traceability, usefulness, and trustworthiness). We conducted a scenario-based online experiment in which participants (N = 182, within-subjects design) were instructed to collaborate with four different versions of an LLM-based chatbot. The overall goal of the study scenario was to detect and correct erroneous information in short encyclopedic articles, representing a prototypical knowledge work task. Four custom-instructed chatbots were presented in random order: three chatbots varying in goal alignment and AI autonomy and one chatbot serving as a control condition that did not fulfill user prompts. Repeated-measures ANOVAs demonstrate that a chatbot that excels in goal alignment by autonomously going beyond verbatim user prompts is perceived as superior to a chatbot that adheres rigidly to user prompts without adapting to implicit objectives, as well as to chatbots that fail to meet the explicit or implicit user goal. These results support the notion that AI autonomy is perceived as beneficial only as long as the chatbot does not undermine user goals, emphasizing the importance of balancing user and AI autonomy in the human-centered design of AI systems.
