Search Results for author: Joseph B. Collins

Found 2 papers, 0 papers with code

Playing to Learn Better: Repeated Games for Adversarial Learning with Multiple Classifiers

no code implementations · 10 Feb 2020 · Prithviraj Dasgupta, Joseph B. Collins, Michael McCarrick

The adversary's objective is to evade the learner's prediction mechanism by sending adversarial queries that cause erroneous class predictions by the learner; the learner's objective is to reduce incorrect predictions on these adversarial queries without degrading prediction quality on clean queries.

Task: Adversarial Text
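The evasion setting described in the abstract can be illustrated with a toy example (a hypothetical sketch, not the paper's implementation): a 1-D threshold classifier plays the learner, and the adversary perturbs a clean query just past the decision boundary to force a misclassification.

```python
# Hypothetical sketch of query evasion, not the paper's method.

def learner_predict(x, threshold=0.5):
    """Learner: classify a scalar feature as 1 if it exceeds the threshold."""
    return 1 if x > threshold else 0

def adversary_perturb(x, threshold=0.5, epsilon=0.05):
    """Adversary: shift the query minimally across the decision boundary."""
    if learner_predict(x, threshold) == 1:
        return threshold - epsilon   # push a positive query below the boundary
    return threshold + epsilon       # push a negative query above the boundary

clean_query = 0.7
adv_query = adversary_perturb(clean_query)

print(learner_predict(clean_query))  # clean query: predicted 1
print(learner_predict(adv_query))    # adversarial query: flipped to 0
```

In the repeated-game view, the learner would then adjust its decision rule after observing such evasive queries, and the adversary would adapt in turn.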

A Survey of Game Theoretic Approaches for Adversarial Machine Learning in Cybersecurity Tasks

no code implementations · 4 Dec 2019 · Prithviraj Dasgupta, Joseph B. Collins

A critical vulnerability of these algorithms is their susceptibility to adversarial attacks, in which a malicious entity, called an adversary, deliberately alters the training data to misguide the learning algorithm into making classification errors.
