Interdependence Analysis for Artificial Intelligence System Safety

Authors
Nagy, Bruce
Miller, Scot
Date of Issue
2021-05-10
Publisher
Monterey, California. Naval Postgraduate School
Abstract
Engineers responsible for evaluating tactical and weapons systems for system safety will need a new approach for evaluating emerging artificial intelligence (AI)-enabled systems, since these systems leverage machine learning (ML) techniques. For many reasons, ML algorithms are often difficult to diagnose for safety purposes. For instance, they do not lend themselves easily to codebase inspection, which may necessitate reducing the "autonomy" of the ML-enabled component. By modifying Interdependence Analysis (IA) techniques, a more rigorous approach to evaluating AI/ML-enabled weapons can be developed. The IA process produces a rigorous exploration based on observability, predictability, and directability, highlighting the key requirements that encapsulate all interactions between human and machine. This paper explores using IA to define the interaction requirements for human–machine teaming, employs those results to identify key critical functions, and leverages those findings to reveal how "autonomy" reduction might be employed.
Type
Presentation
Identifiers
NPS Report Number
SYM-AM-21-058
Sponsors
Prepared for the Naval Postgraduate School, Monterey, CA 93943.
Naval Postgraduate School
Distribution Statement
Approved for public release; distribution is unlimited.
Rights
This publication is a work of the U.S. Government as defined in Title 17, United States Code, Section 101. Copyright protection is not available for this work in the United States.