Search Results for author: Sudipta Chattopadhyay

Found 13 papers, 8 papers with code

Repairing Adversarial Texts through Perturbation

no code implementations29 Dec 2021 Guoliang Dong, Jingyi Wang, Jun Sun, Sudipta Chattopadhyay, Xinyu Wang, Ting Dai, Jie Shi, Jin Song Dong

Furthermore, such attacks are impossible to eliminate entirely, i.e., adversarial perturbations remain possible even after applying mitigation methods such as adversarial training.

Adversarial Text Natural Language Processing

AequeVox: Automated Fairness Testing of Speech Recognition Systems

1 code implementation19 Oct 2021 Sai Sathiesh Rajan, Sakshi Udeshi, Sudipta Chattopadhyay

AequeVox simulates different environments to assess the effectiveness of ASR systems for different populations.

Automatic Speech Recognition Fairness +2

Adversarial Attacks and Mitigation for Anomaly Detectors of Cyber-Physical Systems

no code implementations22 May 2021 Yifan Jia, Jingyi Wang, Christopher M. Poskitt, Sudipta Chattopadhyay, Jun Sun, Yuqi Chen

The threats faced by cyber-physical systems (CPSs) in critical infrastructure have motivated research into a multitude of attack detection mechanisms, including anomaly detectors based on neural network models.

Adversarial Attack

SCOPE: Secure Compiling of PLCs in Cyber-Physical Systems

no code implementations23 Dec 2020 Eyasu Getahun Chekole, Martin Ochoa, Sudipta Chattopadhyay

We empirically measure the computational overhead caused by our approach on two experimental settings based on real CPS.

Cryptography and Security

Astraea: Grammar-based Fairness Testing

1 code implementation6 Oct 2020 Ezekiel Soremekun, Sakshi Udeshi, Sudipta Chattopadhyay

We propose a grammar-based fairness testing approach (called ASTRAEA) which leverages context-free grammars to generate discriminatory inputs that reveal fairness violations in software systems.

Fairness Natural Language Processing
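The ASTRAEA entry above describes generating discriminatory inputs from a context-free grammar. As a rough illustration of that idea (a minimal sketch with a toy grammar, not ASTRAEA's actual grammar or API), one can expand a grammar into a sentence and then produce a paired variant that differs only in a sensitive token; diverging model outputs on the pair would suggest an individual fairness violation:

```python
import random

# Toy context-free grammar; uppercase keys are nonterminals.
# Purely illustrative -- not the grammar used by ASTRAEA.
GRAMMAR = {
    "SENTENCE": [["NAME", "VERB", "OBJECT"]],
    "NAME": [["Alice"], ["Bob"], ["Aisha"], ["Juan"]],
    "VERB": [["requested"], ["was denied"]],
    "OBJECT": [["a loan"], ["an interview"]],
}

def expand(symbol, rng):
    """Recursively expand a grammar symbol into a list of terminal tokens."""
    if symbol not in GRAMMAR:
        return [symbol]  # terminal
    production = rng.choice(GRAMMAR[symbol])
    tokens = []
    for sym in production:
        tokens.extend(expand(sym, rng))
    return tokens

def discriminatory_pair(rng):
    """Generate two inputs identical except for the sensitive token (NAME).

    In this toy grammar, the sensitive token is always the first token of
    the sentence, so the variant swaps only that position.
    """
    base = expand("SENTENCE", rng)
    alternatives = [p[0] for p in GRAMMAR["NAME"] if p[0] != base[0]]
    variant = [rng.choice(alternatives)] + base[1:]
    return " ".join(base), " ".join(variant)

rng = random.Random(0)
a, b = discriminatory_pair(rng)
```

Feeding both strings of each pair to the system under test and comparing outputs is the core of this style of fairness testing; the grammar keeps the generated inputs syntactically valid by construction.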

Exposing Backdoors in Robust Machine Learning Models

1 code implementation25 Feb 2020 Ezekiel Soremekun, Sakshi Udeshi, Sudipta Chattopadhyay

However, the behaviour of such optimisation has not been studied in the light of a fundamentally different class of attacks called backdoors.

BIG-bench Machine Learning

Callisto: Entropy based test generation and data quality assessment for Machine Learning Systems

no code implementations11 Dec 2019 Sakshi Udeshi, Xingbin Jiang, Sudipta Chattopadhyay

We conduct and present an extensive user study to validate the results of CALLISTO on identifying low quality data from four state-of-the-art real world datasets.

BIG-bench Machine Learning

KLEESPECTRE: Detecting Information Leakage through Speculative Cache Attacks via Symbolic Execution

1 code implementation2 Sep 2019 Guanhua Wang, Sudipta Chattopadhyay, Arnab Kumar Biswas, Tulika Mitra, Abhik Roychoudhury

Spectre attacks disclosed in early 2018 expose data leakage scenarios via cache side channels.

Cryptography and Security

Model Agnostic Defence against Backdoor Attacks in Machine Learning

2 code implementations6 Aug 2019 Sakshi Udeshi, Shanshan Peng, Gerald Woo, Lionell Loh, Louth Rawshan, Sudipta Chattopadhyay

In this work, we present NEO, a model agnostic framework to detect and mitigate such backdoor attacks in image classification ML models.

BIG-bench Machine Learning Decision Making +3

Road Context-aware Intrusion Detection System for Autonomous Cars

no code implementations2 Aug 2019 Jingxuan Jiang, Chundong Wang, Sudipta Chattopadhyay, Wei Zhang

With such ongoing road context, RAIDS validates corresponding frames observed on the in-vehicle network.

Intrusion Detection

Grammar Based Directed Testing of Machine Learning Systems

1 code implementation26 Feb 2019 Sakshi Udeshi, Sudipta Chattopadhyay

The massive progress of machine learning has seen its application over a variety of domains in the past decade.

BIG-bench Machine Learning Natural Language Processing

oo7: Low-overhead Defense against Spectre Attacks via Program Analysis

2 code implementations16 Jul 2018 Guanhua Wang, Sudipta Chattopadhyay, Ivan Gotovchits, Tulika Mitra, Abhik Roychoudhury

In this paper, we propose oo7, a static analysis approach that can mitigate Spectre attacks by detecting potentially vulnerable code snippets in program binaries and protecting them against the attack by patching them.

Cryptography and Security

Automated Directed Fairness Testing

1 code implementation2 Jul 2018 Sakshi Udeshi, Pryanshu Arora, Sudipta Chattopadhyay

We show that AEQUITAS effectively generates inputs to uncover fairness violation in all the subject classifiers and systematically improves the fairness of the respective models using the generated test inputs.

BIG-bench Machine Learning Decision Making +1
