IMPACT OF LANGUAGE MANIPULATION ON TRUST IN ARTIFICIAL INTELLIGENCE-SUPPORTED DECISION-MAKING IN HUMAN MACHINE TEAMS
Authors
Dubois, Jacob W.
Raymond, Trevor O.
Advisors
Canan, Mustafa
Humr, Scott, CDD
Subjects
human-machine teams
trust
artificial intelligence
decision delegation
Date of Issue
2024-06
Publisher
Monterey, CA; Naval Postgraduate School
Abstract
Decision support for leaders and teams is central to defense efforts to prevent and respond to armed conflict. As decision-support technologies advance, understanding the dynamics between humans and artificial intelligence (AI) becomes even more critical. Existing research on decision support within human-AI teams identifies trust as a central factor affecting performance and interactions among team members. Prior studies also indicate that humans often hesitate to delegate decision-making authority to subordinates, and this hesitancy extends to decision support systems and AI-based solutions, largely due to a lack of trust in these technologies. A previous experiment investigated how time and interim assessments influence trust when an AI system is the sole source of decision support. This thesis expands on that experiment by manipulating the language used, examining how explicit trust language in intermediate decisions about AI-provided advice influences trust.
Type
Thesis
Distribution Statement
Distribution Statement A. Approved for public release: Distribution is unlimited.
Rights
This publication is a work of the U.S. Government as defined in Title 17, United States Code, Section 101. Copyright protection is not available for this work in the United States.
