The Effect of Perceived Benevolence on Trust in Automation [video]
As autonomy becomes more ubiquitous, human-machine teams are becoming more common and may very well be a staple of future military and civilian operations. In many ways, human-machine teaming will look much like human-human teaming, while in some ways it will look very different. One vital aspect of teaming that includes both similarities and differences as mentioned is trust, and a person's reliance on teammates, whether human or autonomous machine, based on that level of trust. Inter-human trust on teams is typically damaged through actions that fall into one of two broad categories: a competency-based error, where someone breaches trust due to lack of knowledge or miscalculation; or an integrity-based violation, where someone purposefully chooses to breach trust in order to gain some other objective. Although human-automation trust with regard to competency-based errors has been studied, searches turn up little in the way of trust research with automation committing integrity-based violations. Until recently, the assumption that automation was not capable of integrity-based violations seemed obvious and sound. Yet with progressing developments in artificial intelligence and the ever-increasing skillsets of cyber hackers, that assumption may need a second look. If one accepts that automation of the future may be capable of purposefully breaching trust (or that cyber hackers will be capable of creating the same effect), the trust response of humans in receipt of such a breach should be studied. Using reliance measures as an indicator of trust, this research attempts to map the time-response of human trust in automation after experiencing a competency-based error versus an integrity-based violation on the part of the automation.
CRUSER TechCon 2018 Research at NPS. Wednesday 2: Teaming
Rights: This publication is a work of the U.S. Government as defined in Title 17, United States Code, Section 101. Copyright protection is not available for this work in the United States.
Showing items related by title, author, creator and subject.
Clark, Tiffany (2018-04-18); As autonomy becomes more ubiquitous, human-machine teams are becoming more common and may very well be a staple of future military and civilian operations. In many ways, human-machine teaming will look much like human-human ...
Clark, Tiffany (Monterey, CA; Naval Postgraduate School, 2018-06); Successful human-machine teaming requires humans to trust machines. While many claim to welcome automation, there is also mistrust of machines, which may stem from more than competence concerns. Human-automation trust ...
Irvine, Cynthia; Levin, Timothy (Monterey, California. Naval Postgraduate School, 2000-12); NPS-CS-00-008; We discuss a class of computer/network architectures that supports multilevel security and commercial applications, while utilizing primarily commercial-off-the-shelf (COTS) workstations, operating systems and hardware ...