Update on Machine-Learned Correctness Properties
Authors
Michael, James
Drusinsky, Doron
Litton, Matthew
Subjects
assurance
machine learning
runtime verification
formal methods
cross entropy
autonomous systems
Date of Issue
2023-01
Date
January 2023
Publisher
Monterey, California: Naval Postgraduate School
Abstract
This report details a novel method with the potential to improve the U.S. Navy’s ability to perform continuous assurance of autonomous and
other cyberphysical systems. Specifically, it presents a technique that uses simulation-driven data generation to learn explainable
correctness properties, called ML-assertions, for subsequent runtime verification. The method moves the task of providing formal
guarantees about the dependability of autonomous systems from the realm of doctoral-level experts into the domain of system developers and
engineers. Preliminary experimentation demonstrates that ML-assertions can predict behavior in complex multi-agent systems,
serving as a state-of-the-art method for verification and validation of autonomous cyberphysical systems.
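For readers unfamiliar with the idea, the following minimal Python sketch illustrates the general concept of a machine-learned correctness property used as a runtime monitor. It is not the report's method: the training data, state features, and monitor function are all hypothetical, and a decision tree merely stands in for an explainable learned property.

```python
# Illustrative sketch only; all names and data below are hypothetical.
from sklearn.tree import DecisionTreeClassifier

# Simulation-generated training data: each row is a system-state snapshot,
# e.g., [distance_to_obstacle, speed], labeled 1 = correct, 0 = violating.
states = [[12.0, 3.0], [2.0, 9.0], [15.0, 1.5], [1.0, 8.0], [9.0, 4.0], [0.5, 7.5]]
labels = [1, 0, 1, 0, 1, 0]

# Learn the correctness property as a small, explainable model
# (the "ML-assertion" stand-in).
ml_assertion = DecisionTreeClassifier(max_depth=2).fit(states, labels)

def monitor(state):
    """Runtime verification step: flag states the learned property rejects."""
    if ml_assertion.predict([state])[0] == 0:
        print(f"ML-assertion violated at state {state}")

# Check each state of an observed execution trace at runtime.
for observed in [[10.0, 2.0], [1.2, 8.8]]:
    monitor(observed)
```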
Type
Technical Report
Description
Prepared for: Naval Information Warfare Systems Command (NAVWAR)
Department
Computer Science (CS)
Organization
Naval Postgraduate School
Identifiers
NPS Report Number
NPS-CS-23-001
Sponsors
Naval Information Warfare Systems Command (NAVWAR), 4301 Pacific Hwy., Bldg. OT7, San Diego, CA 92110-3127
Funder
Naval Information Warfare Systems Command (NAVWAR)
Format
77 p.
Distribution Statement
Approved for public release; distribution is unlimited.
Rights
This publication is a work of the U.S. Government as defined in Title 17, United States Code, Section 101. Copyright protection is not available for this work in the United States.