DEFENDING AGAINST DEEP LEARNING-BASED VIDEO FINGERPRINTING ATTACKS WITH ADVERSARIAL EXAMPLES

dc.contributor.advisor: Barton, Armon C.
dc.contributor.author: Hayden, Blake A.
dc.contributor.department: Computer Science (CS)
dc.contributor.secondreader: Kroll, Joshua A.
dc.date.accessioned: 2022-09-20T20:51:14Z
dc.date.available: 2022-09-20T20:51:14Z
dc.date.issued: 2022-06
dc.description.abstract: In an increasingly digital world, online anonymity and privacy are paramount concerns for internet users. Tools like The Onion Router (Tor) offer users anonymous internet browsing. Recently, however, Tor's anonymity has been compromised through fingerprinting, where machine learning models analyze Tor traffic and predict user viewing habits, with some models achieving an accuracy of over 99%. Existing defenses for Tor attempt to prevent fingerprinting, but many are defeated by newer techniques that use Deep Neural Networks (DNNs). Defenses that are robust against DNNs use adversarial examples to fool the classifier, but they either assume the user has access to the full traffic trace beforehand or require expensive maintenance from Tor servers. In this thesis, we propose Prism, a defense against fingerprinting attacks that uses adversarial examples to fool classifiers in real time. We describe a novel method of adversarial example generation that enables adversarial examples to be created as the input is observed over time. Prism injects these adversarial examples into the Tor traffic stream to prevent DNNs from accurately predicting the sites a user is viewing, even if the DNN is hardened by adversarial training. We show that Prism reduces the accuracy of defended fingerprinting models from over 98% to 0%. We also show that Prism can be implemented entirely on the server side, increasing deployability for users who run Tor on devices without GPUs.
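The abstract describes generating adversarial examples incrementally as traffic is observed, rather than from a complete trace. As a rough illustration only (the thesis defines Prism's actual method, which is not reproduced here), the sketch below shows a single FGSM-style gradient step on a partial traffic trace against a fingerprinting classifier. Everything in it is a hypothetical stand-in: `model`, `perturb_prefix`, `trace_prefix`, and `epsilon` are assumed names and parameters, and FGSM is a generic substitute technique, not Prism's generation method.

```python
# Hypothetical sketch, NOT Prism's algorithm: one FGSM-style step that
# perturbs the portion of a Tor traffic trace observed so far, so a DNN
# fingerprinting classifier misclassifies the stream.
import torch
import torch.nn.functional as F

def perturb_prefix(model, trace_prefix, true_label, epsilon=0.1):
    """
    model        -- assumed DNN fingerprinting classifier (e.g., a 1-D CNN
                    over packet directions/sizes) returning class logits
    trace_prefix -- tensor of shape (1, 1, t): the traffic seen so far
    true_label   -- index of the site actually being viewed
    epsilon      -- per-element perturbation budget (assumed)
    """
    x = trace_prefix.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), torch.tensor([true_label]))
    loss.backward()
    # Step in the direction that increases the classifier's loss on the
    # true label, pushing the prefix across the decision boundary.
    return (x + epsilon * x.grad.sign()).detach()
```

In a streaming setting such a step would be recomputed as new packets arrive, and the perturbation could only be realized by injecting dummy packets or padding into the Tor stream, since already-sent traffic cannot be altered; the abstract's server-side deployment result suggests this injection can be performed by the Tor server rather than the client.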
dc.description.distributionstatement: Approved for public release. Distribution is unlimited.
dc.description.recognition: Outstanding Thesis
dc.description.service: Ensign, United States Navy
dc.identifier.curriculumcode: 368, Computer Science
dc.identifier.thesisid: 37190
dc.identifier.uri: https://hdl.handle.net/10945/70683
dc.publisher: Monterey, CA; Naval Postgraduate School
dc.relation.ispartofseries: NPS Outstanding Theses and Dissertations
dc.rights: This publication is a work of the U.S. Government as defined in Title 17, United States Code, Section 101. Copyright protection is not available for this work in the United States.
dc.subject.author: adversarial examples
dc.subject.author: defending Tor
dc.subject.author: website fingerprinting
dc.subject.author: video fingerprinting
dc.subject.author: deep fingerprinting
dc.subject.author: defending anonymity
dc.subject.author: anonymity
dc.title: DEFENDING AGAINST DEEP LEARNING-BASED VIDEO FINGERPRINTING ATTACKS WITH ADVERSARIAL EXAMPLES
dc.type: Thesis
dspace.entity.type: Publication
etd.thesisdegree.discipline: Computer Science
etd.thesisdegree.grantor: Naval Postgraduate School
etd.thesisdegree.level: Masters
etd.thesisdegree.name: Master of Science in Computer Science
relation.isSeriesOfPublication: c5e66392-520c-4aaf-9b4f-370ce82b601f
relation.isSeriesOfPublication.latestForDiscovery: c5e66392-520c-4aaf-9b4f-370ce82b601f
Files
Original bundle
Name: 22Jun_Hayden_Blake.pdf
Size: 867.7 KB
Format: Adobe Portable Document Format