DETECTING AND DEFENDING AGAINST DIFFERENT FAMILIES OF ADVERSARIAL EXAMPLE ATTACKS
Authors
Kallis, Shaun
Subjects
adversarial examples
adversarial attacks
machine learning image recognition
convolutional neural network
CNN
deep neural network
DNN
SaliencyMix
3-Mix
N-Mix
PadNet
NOTA
Adversarial Robustness Toolbox
ART
mixup
adversarial mixup
data augmentation
Advisors
Barton, Armon C.
Date of Issue
2023-06
Publisher
Monterey, CA; Naval Postgraduate School
Abstract
Adversarial example attacks alter an image so that it appears largely unchanged to human eyes, yet image-recognition models misclassify it. This is a common type of attack, and there is currently no good general defense against it. Most state-of-the-art detection methods consistently succeed in recognizing only a few known attacks. These defenses do not generalize to other attacks, so an adversary only needs to switch attack types to leave us without a robust ability to detect them. Military intelligence increasingly relies on machine learning image recognition for analyzing satellite images, and finding defenses against adversarial example attacks is important for ensuring our intelligence-gathering capabilities are not compromised. This thesis seeks to contribute models that push the state of the art toward recognizing adversarial attacks regardless of which type of attack was used. Models we named 3-Mix were trained using combinations of differently attacked images; other models were trained using SaliencyMix. These defenses were evaluated against ten attacks: PGD, Auto-PGD, AutoAttack, Square, Carlini-Wagner L2 and L-infinity, DeepFool, ElasticNet, JSMA, and Boundary. On average, the attack success rate against the best defense model was 0.12 for 3-Mix, 0.31 for SaliencyMix, and 0.77 for the comparison model, Mixup.
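The abstract describes training on combinations of differently attacked images, with attacks drawn from the Adversarial Robustness Toolbox (ART). The snippet below is a minimal, hypothetical sketch of that idea, not the thesis's actual 3-Mix implementation: it assumes a mixup-style convex blend of two ART-attacked versions of each training image (the helper name adversarial_mix and the attack parameters are illustrative choices, not taken from the thesis).

```python
# Hypothetical sketch: mixup-style augmentation over adversarially attacked images,
# using attacks generated with the Adversarial Robustness Toolbox (ART).
import numpy as np
import torch.nn as nn
from art.estimators.classification import PyTorchClassifier
from art.attacks.evasion import ProjectedGradientDescent, DeepFool


def adversarial_mix(model, x, y, num_classes, alpha=1.0):
    """Blend two differently attacked versions of each image into one batch.

    x: float32 array of shape (N, C, H, W) with values in [0, 1]
    y: integer class labels of shape (N,)
    Returns mixed images and one-hot labels.
    """
    classifier = PyTorchClassifier(
        model=model,
        loss=nn.CrossEntropyLoss(),
        input_shape=x.shape[1:],
        nb_classes=num_classes,
        clip_values=(0.0, 1.0),
    )

    # Two example attack families; the thesis evaluates ten.
    x_pgd = ProjectedGradientDescent(
        classifier, eps=8 / 255, eps_step=2 / 255, max_iter=10
    ).generate(x=x)
    x_df = DeepFool(classifier, max_iter=20).generate(x=x)

    # Mixup-style convex combination of the two attacked copies of each image.
    lam = np.random.beta(alpha, alpha)
    x_mixed = lam * x_pgd + (1.0 - lam) * x_df

    # Both attacked copies derive from the same clean image,
    # so the label is unchanged (one-hot here for soft-label training loops).
    y_onehot = np.eye(num_classes)[y]
    return x_mixed.astype(np.float32), y_onehot
```

Blending copies of the same image keeps the label fixed; combining attacked versions of different images would instead mix the labels with the same lam weight, as in standard mixup.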
Type
Thesis
Department
Computer Science (CS)
Distribution Statement
Copyright is reserved by the copyright owner.
Rights
This publication is a work of the U.S. Government as defined in Title 17, United States Code, Section 101. Copyright protection is not available for this work in the United States.