Search Results for author: Prithviraj Dasgupta

Found 6 papers, 0 papers with code

Reward Shaping for Improved Learning in Real-time Strategy Game Play

no code implementations · 27 Nov 2023 · John Kliem, Prithviraj Dasgupta

We investigate the effect of reward shaping on improving the performance of reinforcement learning in the context of a real-time strategy, capture-the-flag game.
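A minimal sketch of the general idea of reward shaping in this setting. The grid positions, potential function, and reward values below are illustrative assumptions, not the paper's actual design; the shaping form is the standard potential-based scheme, which preserves the optimal policy.

```python
# Hedged sketch: potential-based reward shaping for a capture-the-flag
# grid world. Positions, potential, and rewards are illustrative
# assumptions, not the paper's actual design.

GAMMA = 0.99  # discount factor

def potential(agent_pos, flag_pos):
    # Negative Manhattan distance to the flag: states closer to the
    # flag have higher potential.
    return -(abs(agent_pos[0] - flag_pos[0]) + abs(agent_pos[1] - flag_pos[1]))

def shaped_reward(env_reward, state, next_state, flag_pos):
    # Potential-based shaping term F(s, s') = gamma * phi(s') - phi(s),
    # added to the environment's sparse reward.
    f = GAMMA * potential(next_state, flag_pos) - potential(state, flag_pos)
    return env_reward + f

# Moving one step toward the flag yields a small positive bonus even
# when the environment reward is zero.
bonus = shaped_reward(0.0, (0, 0), (0, 1), flag_pos=(0, 5))
```

The shaping term densifies the otherwise sparse capture reward, giving the agent a learning signal on every step.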

Divide and Repair: Using Options to Improve Performance of Imitation Learning Against Adversarial Demonstrations

no code implementations · 7 Jun 2023 · Prithviraj Dasgupta

We propose a novel technique that can identify parts of demonstrated trajectories that have not been significantly modified by the adversary and utilize them for learning, using temporally extended policies or options.

Imitation Learning
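One way to picture the "divide" step described above: split a demonstrated trajectory into maximal runs of steps that pass a cleanliness check, and keep only those runs for learning. The `is_suspicious` predicate and minimum run length below are stand-in assumptions; the paper's actual detection criterion and option construction are more sophisticated.

```python
# Hedged sketch: keep only demonstration segments that appear
# unmodified by the adversary. The suspicion test is an illustrative
# stand-in, not the paper's detection method.

def split_clean_segments(trajectory, is_suspicious, min_len=3):
    """Split a demonstration (a list of steps) into maximal runs of
    steps that pass the is_suspicious check, discarding short runs."""
    segments, current = [], []
    for step in trajectory:
        if is_suspicious(step):
            # Close off the current clean run at a suspicious step.
            if len(current) >= min_len:
                segments.append(current)
            current = []
        else:
            current.append(step)
    if len(current) >= min_len:
        segments.append(current)
    return segments
```

Each surviving segment can then be treated as a temporally extended sub-demonstration from which an option-like sub-policy is learned.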

Synthetically Generating Human-like Data for Sequential Decision Making Tasks via Reward-Shaped Imitation Learning

no code implementations · 14 Apr 2023 · Bryan Brandt, Prithviraj Dasgupta

We propose a novel algorithm that can generate synthetic, human-like, decision making data while starting from a very small set of decision making data collected from humans.

Imitation Learning · Synthetic Data Generation
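To make the bootstrapping idea concrete, here is a deliberately simple stand-in: fit a first-order Markov model of actions from a small set of human sequences, then sample new human-like sequences from it. This is an illustrative assumption only; it is not the paper's reward-shaped imitation learning algorithm.

```python
# Hedged sketch: generating synthetic decision sequences from a small
# human dataset via a first-order Markov model. Illustrative stand-in,
# not the paper's method.
import random
from collections import defaultdict

def fit_transitions(sequences):
    # Record, for each observed action, the actions that followed it.
    counts = defaultdict(list)
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a].append(b)
    return counts

def sample_sequence(counts, start, length, seed=0):
    # Walk the empirical transition lists to produce a new sequence.
    rng = random.Random(seed)
    seq = [start]
    while len(seq) < length and counts[seq[-1]]:
        seq.append(rng.choice(counts[seq[-1]]))
    return seq
```

Even this toy generator reproduces local action statistics of the human data; the paper's contribution is making such generated data human-like at the level of whole decision-making episodes.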

A Comparison of State-of-the-Art Techniques for Generating Adversarial Malware Binaries

no code implementations · 22 Nov 2021 · Prithviraj Dasgupta, Zachariah Osman

We consider the problem of a cyber-attacker generating adversarial malware: the attacker strategically modifies certain bytes within existing binary malware files so that the modified files evade a malware detector, such as a machine learning-based malware classifier.

BIG-bench Machine Learning
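One well-known family of such byte modifications appends padding to the end of a binary (the overlay region of a PE file), which leaves execution unaffected. The sketch below assumes a hypothetical `detect` callable standing in for a machine-learning malware classifier; it is not any specific technique from the paper's comparison.

```python
# Hedged sketch: functionality-preserving overlay padding against a
# hypothetical malware detector. detect() is an assumed stand-in for
# an ML-based classifier, not a real tool.
import random

def append_overlay(binary: bytes, n_bytes: int, seed: int = 0) -> bytes:
    # Bytes appended past the end of a PE file's mapped sections are
    # ignored at execution time, so functionality is preserved.
    rng = random.Random(seed)
    padding = bytes(rng.randrange(256) for _ in range(n_bytes))
    return binary + padding

def evade(binary: bytes, detect, max_tries: int = 10):
    # Greedily grow the overlay until the detector's verdict flips,
    # or give up after max_tries modifications.
    candidate = binary
    for i in range(max_tries):
        if not detect(candidate):
            return candidate
        candidate = append_overlay(candidate, 64, seed=i)
    return None
```

Real attacks in the literature use more targeted modifications (header fields, section injection, gradient-guided byte selection), but the evade-the-detector loop has this same shape.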

Playing to Learn Better: Repeated Games for Adversarial Learning with Multiple Classifiers

no code implementations · 10 Feb 2020 · Prithviraj Dasgupta, Joseph B. Collins, Michael McCarrick

The adversary's objective is to evade the learner's prediction mechanism by sending adversarial queries that cause the learner to make erroneous class predictions, while the learner's objective is to reduce incorrect predictions on these adversarial queries without degrading prediction quality on clean queries.

Adversarial Text
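To illustrate what an "adversarial query" means here, the sketch below nudges a clean input just past a linear classifier's decision boundary until the prediction flips. The 2-D weights and step size are illustrative assumptions; the paper's repeated-game formulation with multiple classifiers is not reproduced here.

```python
# Hedged sketch: crafting an adversarial query against a linear
# classifier by stepping against the weight vector until the
# predicted class flips. Weights and step size are illustrative.

def predict(w, b, x):
    # Linear classifier: predicts 1 if w.x + b >= 0, else 0.
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else 0

def adversarial_query(w, b, x, step=0.1, max_steps=100):
    # Move x toward (or across) the boundary until the label flips.
    original = predict(w, b, x)
    direction = -1 if original == 1 else 1
    q = list(x)
    for _ in range(max_steps):
        if predict(w, b, q) != original:
            return q
        q = [qi + direction * step * wi for qi, wi in zip(q, w)]
    return None
```

In the repeated-game view, the adversary sends such queries each round, and the learner updates its classifiers in response, trading off robustness on adversarial queries against accuracy on clean ones.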

A Survey of Game Theoretic Approaches for Adversarial Machine Learning in Cybersecurity Tasks

no code implementations · 4 Dec 2019 · Prithviraj Dasgupta, Joseph B. Collins

A critical vulnerability of these algorithms is their susceptibility to adversarial attacks, in which a malicious entity called an adversary deliberately alters the training data to mislead the learning algorithm into making classification errors.

BIG-bench Machine Learning
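The simplest concrete instance of "deliberately altering the training data" is a label-flipping poisoning attack, sketched below. The binary 0/1 labels and flip fraction are illustrative assumptions; the survey covers a much broader space of game-theoretic attack and defense models.

```python
# Hedged sketch: label-flipping data poisoning. Flipping even a small
# fraction of training labels can shift a learned decision boundary.
# Assumes binary 0/1 labels for illustration.
import random

def flip_labels(labels, fraction, seed=0):
    rng = random.Random(seed)
    poisoned = list(labels)
    k = int(len(poisoned) * fraction)
    # Flip k randomly chosen labels (0 -> 1, 1 -> 0).
    for i in rng.sample(range(len(poisoned)), k):
        poisoned[i] = 1 - poisoned[i]
    return poisoned
```

Game-theoretic analyses model the interaction between such a poisoner and the learner as a game and look for defense strategies that are robust at equilibrium.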
