1 code implementation • 19 Oct 2021 • Sai Sathiesh Rajan, Sakshi Udeshi, Sudipta Chattopadhyay
AequeVox simulates different environments to assess the effectiveness of ASR systems for different populations.
1 code implementation • 6 Oct 2020 • Ezekiel Soremekun, Sakshi Udeshi, Sudipta Chattopadhyay
We propose a grammar-based fairness testing approach (called ASTRAEA) which leverages context-free grammars to generate discriminatory inputs that reveal fairness violations in software systems.
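The idea of deriving discriminatory inputs from a context-free grammar can be sketched as follows; the toy grammar, the protected tokens, and the deliberately biased classifier are illustrative assumptions, not ASTRAEA's actual grammars or subjects.

```python
import random

# Toy context-free grammar over loan-query sentences (illustrative only).
GRAMMAR = {
    "<query>": [["a ", "<gender>", " applicant applies for a loan of ", "<amount>"]],
    "<gender>": [["male"], ["female"]],
    "<amount>": [["$5000"], ["$20000"]],
}

def expand(symbol, rng):
    """Recursively expand a grammar symbol into a concrete string."""
    if symbol not in GRAMMAR:
        return symbol
    production = rng.choice(GRAMMAR[symbol])
    return "".join(expand(s, rng) for s in production)

def flip_protected(sentence):
    """Swap the sensitive token, keeping every other word fixed."""
    swap = {"male": "female", "female": "male"}
    return " ".join(swap.get(w, w) for w in sentence.split())

def is_fairness_violation(classify, rng):
    """Two inputs differing only in the protected token should agree."""
    a = expand("<query>", rng)
    b = flip_protected(a)
    return classify(a) != classify(b)

# Deliberately biased toy classifier, used only to demonstrate detection.
biased = lambda text: "approve" if "male" in text.split() else "reject"
rng = random.Random(0)
print(is_fairness_violation(biased, rng))  # → True
```

The grammar guarantees the generated pair is syntactically valid and differs only in the protected attribute, so any output disagreement is attributable to that attribute.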
1 code implementation • 25 Feb 2020 • Ezekiel Soremekun, Sakshi Udeshi, Sudipta Chattopadhyay
However, the behaviour of such optimisation has not been studied in light of a fundamentally different class of attacks called backdoors.
no code implementations • 11 Dec 2019 • Sakshi Udeshi, Xingbin Jiang, Sudipta Chattopadhyay
We conduct and present an extensive user study to validate the effectiveness of CALLISTO in identifying low-quality data from four state-of-the-art real-world datasets.
2 code implementations • 6 Aug 2019 • Sakshi Udeshi, Shanshan Peng, Gerald Woo, Lionell Loh, Louth Rawshan, Sudipta Chattopadhyay
In this work, we present NEO, a model-agnostic framework to detect and mitigate such backdoor attacks in image classification ML models.
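One model-agnostic way to probe for a localized backdoor trigger, in the spirit of such defences, is to cover small image regions with the image's dominant colour and watch for a prediction flip; the grid size, patch size, and toy backdoored model below are assumptions for illustration, not NEO's actual procedure or parameters.

```python
def dominant_value(image):
    """Most frequent pixel value, used as the blocking colour."""
    flat = [p for row in image for p in row]
    return max(set(flat), key=flat.count)

def occlude(image, r, c, size, value):
    """Return a copy of the image with a size x size patch overwritten."""
    out = [row[:] for row in image]
    for i in range(r, min(r + size, len(out))):
        for j in range(c, min(c + size, len(out[0]))):
            out[i][j] = value
    return out

def find_trigger(model, image, size=2):
    """Slide a blocking patch; report positions where the label flips."""
    base = model(image)
    fill = dominant_value(image)
    hits = []
    for r in range(0, len(image), size):
        for c in range(0, len(image[0]), size):
            if model(occlude(image, r, c, size, fill)) != base:
                hits.append((r, c))
    return hits

# Toy backdoored model: predicts class 1 whenever the 2x2 corner is all 9s.
backdoored = lambda img: 1 if all(img[i][j] == 9 for i in range(2) for j in range(2)) else 0
poisoned = [[9 if i < 2 and j < 2 else 0 for j in range(6)] for i in range(6)]
print(find_trigger(backdoored, poisoned))  # → [(0, 0)]
```

A position whose occlusion flips the label is a candidate trigger region; mitigation can then overwrite that region on incoming inputs before classification.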
1 code implementation • 26 Feb 2019 • Sakshi Udeshi, Sudipta Chattopadhyay
Over the past decade, rapid progress in machine learning has driven its adoption across a wide variety of domains.
1 code implementation • 2 Jul 2018 • Sakshi Udeshi, Pryanshu Arora, Sudipta Chattopadhyay
We show that AEQUITAS effectively generates inputs that uncover fairness violations in all the subject classifiers and systematically improves the fairness of the respective models using the generated test inputs.
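Directed test generation for individual fairness can be sketched as below: sample inputs, perturb only the protected feature, and flag any input whose label changes. The binary feature layout, the random-search phase, and the toy biased model are assumptions for illustration, not AEQUITAS's exact search strategy.

```python
import random

PROTECTED = 0          # index of the protected feature (e.g. gender); assumed
DOMAIN = [0, 1]        # possible values of the protected feature; assumed

def discriminates(model, x):
    """True if changing only the protected feature flips the prediction."""
    base = model(x)
    for v in DOMAIN:
        if v == x[PROTECTED]:
            continue
        x2 = list(x)
        x2[PROTECTED] = v
        if model(x2) != base:
            return True
    return False

def random_search(model, n_features, trials, rng):
    """Global phase: sample random inputs, keep the discriminatory ones."""
    found = []
    for _ in range(trials):
        x = [rng.randint(0, 1) for _ in range(n_features)]
        if discriminates(model, x):
            found.append(x)
    return found

# Toy biased model: the label leaks the protected feature x[0].
biased = lambda x: 1 if x[0] == 1 and x[1] == 1 else 0
rng = random.Random(42)
hits = random_search(biased, n_features=3, trials=100, rng=rng)
print(len(hits) > 0)  # → True
```

Each discriminatory input found this way is itself a concrete fairness-violation witness, and the set of such inputs can seed retraining to improve the model.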