The Effect of Perceived Benevolence on Trust in Automation
Abstract
As autonomy becomes more ubiquitous, human-machine teams are becoming more common and may very well be a staple of future military and civilian operations. In many ways, human-machine teaming will look much like human-human teaming, while in some ways it will look very different. One vital aspect of teaming that exhibits both of these similarities and differences is trust, and a person's reliance on teammates, whether human or autonomous machine, based on that level of trust.
Inter-human trust on teams is typically damaged through actions that fall into one of two broad categories: a competency-based error, where someone breaches trust due to lack of knowledge or miscalculation; or an integrity-based violation, where someone purposefully chooses to breach trust in order to gain some other objective. Although human-automation trust has been studied with regard to competency-based errors, the literature offers little trust research in which the automation commits integrity-based violations. Until recently, the assumption that automation was not capable of integrity-based violations seemed obvious and sound. Yet with progressing developments in artificial intelligence and the ever-increasing skillsets of cyber hackers, that assumption may need a second look.
If one accepts that automation of the future may be capable of purposefully breaching trust (or that cyber hackers will be capable of creating the same effect), the trust response of humans in receipt of such a breach should be studied. Using reliance measures as an indicator of trust, this research attempts to map the time-response of human trust in automation after experiencing a competency-based error versus an integrity-based violation on the part of the automation.
Description
CRUSER TechCon 2018 Research at NPS. Wednesday 2: Teaming
Rights
This publication is a work of the U.S. Government as defined in Title 17, United States Code, Section 101. Copyright protection is not available for this work in the United States.
Related items
Showing items related by title, author, creator and subject.
- The Effect of Perceived Benevolence on Trust in Automation [video]
  Clark, Tiffany (2018-04-18): As autonomy becomes more ubiquitous, human-machine teams are becoming more common and may very well be a staple of future military and civilian operations. In many ways, human-machine teaming will look much like human-human ...
- INTEGRITY-BASED TRUST VIOLATIONS WITHIN HUMAN-MACHINE TEAMING
  Clark, Tiffany (Monterey, CA; Naval Postgraduate School, 2018-06): Successful human-machine teaming requires humans to trust machines. While many claim to welcome automation, there is also mistrust of machines, which may stem from more than competence concerns. Human-automation trust ...
- THE EFFECT OF INTERMEDIATE TRUST RATINGS ON AUTOMATION RELIANCE
  Torner, Linus (Monterey, CA; Naval Postgraduate School, 2023-12): As automated systems are increasingly capable of augmenting human decision-makers, appropriate reliance on automation has the potential to increase safety and efficiency in several high-stake domains. To that end, a solid ...