DEVELOPING COMBAT BEHAVIOR THROUGH REINFORCEMENT LEARNING

Authors
Boron, Jonathan A.
Subjects
artificial intelligence
autonomous planning
neural networks
reinforcement learning
machine learning
combat model
wargaming
modeling and simulation
Advisors
Darken, Christian J.
Date of Issue
2020-06
Publisher
Monterey, CA: Naval Postgraduate School
Abstract
The application of reinforcement learning in recent academic and commercial research projects has produced robust systems that perform at or above human levels. The objective of this thesis was to determine whether agents trained through reinforcement learning could achieve optimal performance in small combat scenarios. In a set of computational experiments, training was conducted in a simple aggregate-level, constructive simulation capable of implementing both deterministic and stochastic combat models, and neural network performance was validated against the tactical principles of mass and economy of force. Overall, neural networks were able to learn the ideal behaviors, with the combat model and reinforcement learning algorithm having the most significant impact on performance. Moreover, in scenarios where massing was the best tactic, the training duration and learning rate were determined to be the most important training hyper-parameters. However, when economy of force was ideal, the discount factor was the only hyper-parameter that had a significant effect. In summary, this thesis concluded that reinforcement learning offers a promising means to develop intelligent behavior in combat simulations, which could be applied to training or analytical domains. It is recommended that future research examine larger and more complex training scenarios in order to fully understand the capabilities and limitations of reinforcement learning.
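To make the hyper-parameters named in the abstract concrete, the sketch below (not taken from the thesis; a minimal illustrative example on an invented one-dimensional "advance to objective" task) shows where the learning rate (alpha) and discount factor (gamma) enter a standard tabular Q-learning update. The environment, state space, and reward structure here are assumptions for illustration only.

```python
import random

def q_learning(episodes=500, alpha=0.1, gamma=0.9, epsilon=0.1, seed=0):
    """Tabular Q-learning on a toy 'advance to objective' task.

    States 0..4; action 0 = hold position, action 1 = advance one step.
    Reaching state 4 (the objective) yields reward +1 and ends the episode.
    alpha (learning rate) and gamma (discount factor) are the training
    hyper-parameters discussed in the abstract; this scenario is invented
    purely to show where they appear in the update rule.
    """
    rng = random.Random(seed)
    n_states, n_actions, goal = 5, 2, 4
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        for _ in range(200):  # cap steps per episode
            # epsilon-greedy action selection
            if rng.random() < epsilon:
                a = rng.randrange(n_actions)
            else:
                a = 0 if Q[s][0] >= Q[s][1] else 1
            s2 = s if a == 0 else min(s + 1, goal)
            r = 1.0 if s2 == goal else 0.0
            # one-step temporal-difference update:
            # alpha scales the step size, gamma discounts future value
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
            if s == goal:
                break
    return Q
```

With a discount factor near 1, delayed reward from the objective propagates back to early states, so the greedy policy learns to advance everywhere; shrinking gamma flattens the value of distant reward, which loosely mirrors the abstract's finding that the discount factor governs behavior when long-horizon trade-offs matter.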
Type
Thesis
Department
Computer Science (CS)
Distribution Statement
Approved for public release; distribution is unlimited.
Rights
This publication is a work of the U.S. Government as defined in Title 17, United States Code, Section 101. Copyright protection is not available for this work in the United States.