Two Gaps That Need to be Filled in Order to Trust AI in Complex Battle Scenarios

Authors
Nagy, Bruce
Date of Issue
2022-05-02
Publisher
Monterey, California. Naval Postgraduate School
Abstract
In human terms, trust is earned. This paper presents an approach by which an AI-based Course of Action (COA) recommendation algorithm (CRA) can earn human trust. It introduces a nine-stage process (NSP) divided into three phases: the first two phases close two critical logic gaps that must be filled to build a trustworthy CRA, and the final phase covers deployment of the trusted CRA. Historical examples are presented to argue why trust must be earned, not merely supported by explainable recommendations, especially when battle complexity and an opponent's surprise actions are involved. The paper discusses the effects that surprise actions had on past battles and how AI might have made a difference, but only if the degree of trust in it had been high. To achieve this goal, the NSP introduces modeling constructs called EVEs, which allow knowledge from varying sources and in varying forms to be collected, integrated, and refined during all three phases. Using EVEs, the CRA can integrate knowledge from wargamers conducting tabletop discussions as well as from operational test engineers working with actual technology during product testing. EVEs thus allow a CRA to be trained on a combination of theory and practice, yielding more practical and accurate recommendations.
Type
Conference Paper
Description
Excerpt from the Proceedings of the Nineteenth Annual Acquisition Research Symposium
NPS Report Number
SYM-AM-22-066
Distribution Statement
Approved for public release; distribution is unlimited.
Rights
This publication is a work of the U.S. Government as defined in Title 17, United States Code, Section 101. Copyright protection is not available for this work in the United States.