Search Results for author: Sakshi Udeshi

Found 7 papers, 6 papers with code

Astraea: Grammar-based Fairness Testing

1 code implementation • 6 Oct 2020 • Ezekiel Soremekun, Sakshi Udeshi, Sudipta Chattopadhyay

We propose a grammar-based fairness testing approach (called ASTRAEA) which leverages context-free grammars to generate discriminatory inputs that reveal fairness violations in software systems.

Fairness
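
Below is a minimal sketch of the grammar-driven probing idea described above, assuming a hypothetical toy grammar and a stand-in predict function; ASTRAEA's actual grammars, search strategy, and fairness oracles are considerably richer than this.

```python
import random

# Toy context-free grammar (hypothetical; ASTRAEA's real grammars are richer).
GRAMMAR = {
    "<male_name>": [["John"], ["Ahmed"]],
    "<female_name>": [["Mary"], ["Aisha"]],
    "<request>": [["a", "loan"], ["a", "job"], ["an", "apartment"]],
}

def expand(symbol):
    """Recursively expand a grammar symbol into a list of terminal tokens."""
    if symbol not in GRAMMAR:
        return [symbol]
    tokens = []
    for sym in random.choice(GRAMMAR[symbol]):
        tokens.extend(expand(sym))
    return tokens

def fairness_probe(predict, trials=100):
    """Generate sentence pairs that differ only in the sensitive name token
    and collect the pairs for which the system's output changes."""
    violations = []
    for _ in range(trials):
        request = " ".join(expand("<request>"))
        male = f"{' '.join(expand('<male_name>'))} is applying for {request}"
        female = f"{' '.join(expand('<female_name>'))} is applying for {request}"
        if predict(male) != predict(female):
            violations.append((male, female))
    return violations

# Demo with a deliberately biased stand-in classifier.
if __name__ == "__main__":
    biased = lambda text: "approve" if "John" in text else "review"
    print(len(fairness_probe(biased)), "violating pairs found")
```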

Towards Backdoor Attacks and Defense in Robust Machine Learning Models

1 code implementation • 25 Feb 2020 • Ezekiel Soremekun, Sakshi Udeshi, Sudipta Chattopadhyay

Specifically, given a robust Deep Neural Network (DNN) that is trained using a PGD-based first-order adversarial training approach, AEGIS uses feature clustering to effectively detect whether such DNNs are backdoor-infected or clean.

BIG-bench Machine Learning • Clustering
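
A minimal sketch of feature clustering for backdoor detection, assuming penultimate-layer activations have already been extracted per class: a class whose features split cleanly into two clusters (as a trigger tends to cause) is flagged. The clustering method and decision rule here (k-means plus a silhouette threshold) are illustrative assumptions, not necessarily AEGIS's exact procedure.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def flag_suspicious_classes(features_by_class, threshold=0.5):
    """features_by_class: dict mapping class label -> (n_samples, d) array of
    penultimate-layer activations. A class whose features split cleanly into
    two clusters (high silhouette score) is flagged as possibly backdoor-infected."""
    suspicious = []
    for label, feats in features_by_class.items():
        if len(feats) < 4:
            continue
        clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(feats)
        score = silhouette_score(feats, clusters)
        if score > threshold:
            suspicious.append((label, score))
    return suspicious

# Synthetic demo: class 1 mixes two distinct feature modes (clean + triggered).
rng = np.random.default_rng(0)
demo = {
    0: rng.normal(0.0, 1.0, size=(200, 16)),
    1: np.vstack([rng.normal(0.0, 1.0, size=(150, 16)),
                  rng.normal(6.0, 1.0, size=(50, 16))]),
}
print(flag_suspicious_classes(demo))
```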

Callisto: Entropy based test generation and data quality assessment for Machine Learning Systems

no code implementations • 11 Dec 2019 • Sakshi Udeshi, Xingbin Jiang, Sudipta Chattopadhyay

We conduct and present an extensive user study to validate the results of CALLISTO on identifying low-quality data from four state-of-the-art real-world datasets.

BIG-bench Machine Learning
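
A minimal sketch of the entropy signal the title refers to, assuming query access to a model's softmax outputs: samples with high predictive entropy are ranked as candidates for low-quality or ambiguous data. CALLISTO's actual test-generation and quality-assessment pipeline is not reproduced here.

```python
import numpy as np

def predictive_entropy(probs):
    """Shannon entropy of a model's predicted class distribution (in nats).
    probs: (n_samples, n_classes) array of softmax outputs."""
    probs = np.clip(probs, 1e-12, 1.0)
    return -np.sum(probs * np.log(probs), axis=1)

def rank_by_uncertainty(probs, top_k=10):
    """Return indices of the samples the model is most uncertain about,
    a rough proxy for potentially low-quality or mislabeled data."""
    ent = predictive_entropy(np.asarray(probs))
    return np.argsort(ent)[::-1][:top_k]

# Example: three fairly confident predictions and one near-uniform one.
probs = [[0.97, 0.02, 0.01], [0.90, 0.05, 0.05],
         [0.34, 0.33, 0.33], [0.80, 0.15, 0.05]]
print(rank_by_uncertainty(probs, top_k=2))  # the near-uniform sample ranks first
```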

Model Agnostic Defence against Backdoor Attacks in Machine Learning

2 code implementations • 6 Aug 2019 • Sakshi Udeshi, Shanshan Peng, Gerald Woo, Lionell Loh, Louth Rawshan, Sudipta Chattopadhyay

In this work, we present NEO, a model agnostic framework to detect and mitigate such backdoor attacks in image classification ML models.

BIG-bench Machine Learning • Decision Making • +3
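
A minimal sketch of one model-agnostic, query-only idea for locating a candidate trigger, assuming an H×W×C uint8 image and a stand-in predict function: occlude small patches with the image's dominant colour and report patches whose occlusion flips the prediction. This illustrates the patch-blocking intuition only and is not claimed to be NEO's exact algorithm.

```python
import numpy as np

def dominant_color(image):
    """Most frequent pixel value per channel (coarse dominant-colour estimate).
    Assumes a uint8 H x W x C image."""
    flat = image.reshape(-1, image.shape[-1])
    return np.array([np.bincount(flat[:, c]).argmax() for c in range(flat.shape[1])],
                    dtype=image.dtype)

def find_candidate_trigger(image, predict, patch=8, stride=8):
    """Slide a patch over the image, paint it with the dominant colour, and
    return the locations whose occlusion changes the model's prediction."""
    base = predict(image)
    fill = dominant_color(image)
    h, w, _ = image.shape
    candidates = []
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = fill
            if predict(occluded) != base:
                candidates.append((y, x))
    return candidates

# Usage (hypothetical): candidates = find_candidate_trigger(img, model_predict)
```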

Grammar Based Directed Testing of Machine Learning Systems

1 code implementation • 26 Feb 2019 • Sakshi Udeshi, Sudipta Chattopadhyay

The massive progress of machine learning over the past decade has led to its application across a variety of domains.

BIG-bench Machine Learning

Automated Directed Fairness Testing

1 code implementation • 2 Jul 2018 • Sakshi Udeshi, Pryanshu Arora, Sudipta Chattopadhyay

We show that AEQUITAS effectively generates inputs to uncover fairness violations in all the subject classifiers and systematically improves the fairness of the respective models using the generated test inputs.

BIG-bench Machine Learning • Decision Making • +1
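
A minimal sketch of the individual-fairness check the sentence above relies on, assuming a tabular stand-in classifier, a protected attribute index, and its possible values: an input is discriminatory if changing only the protected attribute changes the decision. AEQUITAS's directed global/local search and retraining loop are not reproduced here; the random search below is an undirected placeholder.

```python
import numpy as np

def is_discriminatory(x, predict, protected_idx, protected_values):
    """Return True if flipping only the protected attribute of x changes
    the classifier's decision (an individual fairness violation)."""
    base = predict(x)
    for value in protected_values:
        if value == x[protected_idx]:
            continue
        x_alt = np.array(x, dtype=float)
        x_alt[protected_idx] = value
        if predict(x_alt) != base:
            return True
    return False

def random_fairness_search(predict, ranges, protected_idx, protected_values,
                           trials=1000, seed=0):
    """Random (undirected) search for discriminatory inputs over per-feature ranges.
    ranges: list of (low, high) pairs, one per feature."""
    rng = np.random.default_rng(seed)
    found = []
    for _ in range(trials):
        x = np.array([rng.uniform(lo, hi) for lo, hi in ranges])
        if is_discriminatory(x, predict, protected_idx, protected_values):
            found.append(x)
    return found

# Demo with a stand-in model that (unfairly) keys on feature 0, the protected attribute.
if __name__ == "__main__":
    unfair = lambda x: int(x[0] > 0.5)
    hits = random_fairness_search(unfair, ranges=[(0, 1), (0, 100)],
                                  protected_idx=0, protected_values=[0.0, 1.0])
    print(len(hits), "discriminatory inputs found")
```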
