Search Results for author: Partha Pratim Chakrabarti

Found 6 papers, 0 papers with code

Towards Adversarial Purification using Denoising AutoEncoders

no code implementations 29 Aug 2022 Dvij Kalaria, Aritra Hazra, Partha Pratim Chakrabarti

Because the accuracy and robustness of deep learning models depend primarily on the purity of the training samples, deep learning architectures are often susceptible to adversarial attacks.

Denoising

Resisting Adversarial Attacks in Deep Neural Networks using Diverse Decision Boundaries

no code implementations 18 Aug 2022 Manaar Alam, Shubhajit Datta, Debdeep Mukhopadhyay, Arijit Mondal, Partha Pratim Chakrabarti

The security of deep learning (DL) systems is an extremely important field of study, as such systems are being deployed in many applications due to their ever-improving performance on challenging tasks.

Image Classification

Deep Learning-based Spatially Explicit Emulation of an Agent-Based Simulator for Pandemic in a City

no code implementations 28 May 2022 Varun Madhavan, Adway Mitra, Partha Pratim Chakrabarti

An alternative is to develop an emulator, a surrogate model that can predict the Agent-Based Simulator's output based on its initial conditions and parameters.
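The emulator idea above can be sketched in miniature: fit a cheap model to (parameter, output) pairs collected from a handful of expensive simulator runs, then query the cheap model instead. The simulator and the quadratic relationship below are hypothetical stand-ins, not the paper's agent-based model or architecture.

```python
# Hypothetical sketch of the surrogate/emulator idea (not the paper's method):
# replace an expensive simulator with a cheap model fitted to its outputs.

def expensive_simulator(beta):
    # Stand-in for an agent-based pandemic simulator: a toy quadratic
    # mapping from transmission rate beta to peak infections.
    return 100 * beta * beta

# Collect training pairs from a few simulator runs.
xs = [0.1 * i for i in range(1, 10)]
ys = [expensive_simulator(x) for x in xs]

# Fit y ~ a * x^2 by least squares (closed form for one coefficient).
a = sum(y * x * x for x, y in zip(xs, ys)) / sum(x ** 4 for x in xs)

def surrogate(beta):
    # Cheap prediction for unseen parameter settings.
    return a * beta * beta

print(round(surrogate(0.55), 2))  # close to expensive_simulator(0.55) = 30.25
```

In the paper's setting the surrogate is a deep network and the inputs are the simulator's initial conditions and parameters; the sketch only shows why a fitted emulator can stand in for repeated simulator runs.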

Optimal Multi-Agent Path Finding for Precedence Constrained Planning Tasks

no code implementations 8 Feb 2022 Kushal Kedia, Rajat Kumar Jenamani, Aritra Hazra, Partha Pratim Chakrabarti

We consider an extension to this problem, Precedence Constrained Multi-Agent Path Finding (PC-MAPF), wherein agents are assigned a sequence of planning tasks that contain precedence constraints between them.

Multi-Agent Path Finding

PARL: Enhancing Diversity of Ensemble Networks to Resist Adversarial Attacks via Pairwise Adversarially Robust Loss Function

no code implementations 9 Dec 2021 Manaar Alam, Shubhajit Datta, Debdeep Mukhopadhyay, Arijit Mondal, Partha Pratim Chakrabarti

Ensemble methods against adversarial attacks demonstrate that an adversarial example is less likely to mislead multiple classifiers in an ensemble whose members have diverse decision boundaries.
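The intuition stated above can be illustrated with a toy majority-vote ensemble; the linear classifiers and the specific points below are illustrative assumptions, not the paper's loss function or models.

```python
# Illustrative sketch only (not the paper's PARL method): majority voting
# over linear classifiers with deliberately diverse decision boundaries.
# A perturbation crafted against one member may cross that member's
# boundary without crossing the others', so the ensemble vote holds.

def linear_clf(w, b):
    # Binary classifier defined by the hyperplane w . x + b = 0.
    return lambda x: 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Three members with diverse (non-parallel) boundaries.
ensemble = [
    linear_clf((1.0, 0.0), -0.5),   # decides on the x-coordinate
    linear_clf((0.0, 1.0), -0.5),   # decides on the y-coordinate
    linear_clf((1.0, 1.0), -1.0),   # decides on the diagonal
]

def vote(x):
    # Majority vote over the ensemble's predictions.
    return 1 if sum(clf(x) for clf in ensemble) >= 2 else 0

clean = (0.8, 0.8)   # all three members agree: class 1
adv = (0.4, 0.8)     # perturbation flips member 0 only
print(vote(clean), vote(adv))  # prints: 1 1  (the vote is unchanged)
```

Member 0 is fooled by the perturbed point, but because the other boundaries are oriented differently, the majority prediction survives, which is the diversity argument the snippet makes.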

Image Classification

Detecting Adversaries, yet Faltering to Noise? Leveraging Conditional Variational AutoEncoders for Adversary Detection in the Presence of Noisy Images

no code implementations AAAI Workshop AdvML 2022 Dvij Kalaria, Aritra Hazra, Partha Pratim Chakrabarti

Because the accuracy and robustness of deep learning models depend primarily on the purity of the training samples, deep learning architectures are often susceptible to adversarial attacks.

Image Classification
