AequeVox simulates varied environments to assess the effectiveness of Automatic Speech Recognition (ASR) systems across different populations.
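For illustration, the following is a minimal sketch of this idea, not AequeVox's actual perturbations or metrics. It assumes audio clips as NumPy arrays and a hypothetical black-box transcribe function, and measures how often transcriptions change under synthetic noise for each population group:

    import numpy as np

    def add_noise(audio: np.ndarray, snr_db: float) -> np.ndarray:
        """Corrupt a waveform with Gaussian noise at a target signal-to-noise ratio."""
        signal_power = np.mean(audio ** 2)
        noise_power = signal_power / (10 ** (snr_db / 10))
        noise = np.random.normal(0.0, np.sqrt(noise_power), size=audio.shape)
        return audio + noise

    def compare_groups(transcribe, clips_by_group, snr_db=10.0):
        """Estimate, per group, how often transcriptions change under noise."""
        degradation = {}
        for group, clips in clips_by_group.items():
            changed = sum(
                transcribe(clip) != transcribe(add_noise(clip, snr_db))
                for clip in clips
            )
            degradation[group] = changed / len(clips)
        return degradation  # a large gap between groups hints at a fairness issue

A large difference in degradation rates between groups under the same simulated environment would indicate that the ASR system is less robust for some populations than for others.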
We propose a grammar-based fairness testing approach, called ASTRAEA, that leverages context-free grammars to generate discriminatory inputs which reveal fairness violations in software systems.
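To make the core idea concrete, the following is a hypothetical sketch of grammar-based input generation, assuming a toy grammar and a black-box classify function; ASTRAEA's actual grammars, mutation operators, and test oracles are considerably richer:

    import random

    # A toy context-free grammar; nonterminals are bracketed symbols.
    GRAMMAR = {
        "<sentence>": [["<subject>", "is", "<adjective>"]],
        "<subject>": [["he"], ["she"], ["they"]],
        "<adjective>": [["reliable"], ["friendly"], ["punctual"]],
    }

    def expand(symbol: str) -> str:
        """Recursively expand a grammar symbol into a concrete string."""
        if symbol not in GRAMMAR:
            return symbol  # terminal
        production = random.choice(GRAMMAR[symbol])
        return " ".join(expand(s) for s in production)

    def fairness_probe(classify) -> bool:
        """Instantiate the grammar twice, varying only the protected token,
        and report whether the system treats the two inputs differently."""
        adjective = expand("<adjective>")
        sent_a = f"he is {adjective}"
        sent_b = f"she is {adjective}"
        return classify(sent_a) != classify(sent_b)  # True flags a potential violation

Because the two generated inputs differ only in the protected token, any difference in the system's output is evidence of a fairness violation rather than of legitimate input variation.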
However, the behaviour of such optimisation has not been studied in light of a fundamentally different class of attacks: backdoors.
We conduct and present an extensive user study validating CALLISTO's ability to identify low-quality data in four state-of-the-art, real-world datasets.
In this work, we present NEO, a model-agnostic framework to detect and mitigate such backdoor attacks in image classification ML models.
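One generic heuristic in this spirit is sketched below; it is an illustration of occlusion-based trigger localisation, not NEO's published algorithm. It assumes a hypothetical model callable that maps an H×W×C image array to a predicted label:

    import numpy as np

    def occlusion_scan(model, image: np.ndarray, patch: int = 8):
        """Slide an occluding patch over the image and record where the
        prediction flips; a small region that flips predictions consistently
        across many images is a candidate backdoor trigger location."""
        base = model(image)
        flips = []
        h, w = image.shape[:2]
        for y in range(0, h - patch + 1, patch):
            for x in range(0, w - patch + 1, patch):
                blocked = image.copy()
                # Overwrite the region with the image's mean colour.
                blocked[y:y + patch, x:x + patch] = image.mean(axis=(0, 1))
                if model(blocked) != base:
                    flips.append((y, x))
        return flips

The intuition is that a backdoored model relies on a small, localised trigger pattern, so masking that region restores the clean prediction, whereas masking a benign region of the same size rarely changes the label.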
Over the past decade, rapid progress in machine learning has driven its adoption across a wide variety of domains.
We show that AEQUITAS effectively generates inputs that uncover fairness violations in all the subject classifiers, and systematically improves the fairness of the respective models using the generated test inputs.
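The underlying test oracle can be sketched as follows, assuming a hypothetical sample_input generator and a black-box classify function; AEQUITAS additionally applies directed local search around discovered violations, which this sketch omits:

    import random

    def individual_fairness_test(classify, sample_input, protected_attr,
                                 values, trials=1000):
        """Randomly sample inputs and flip only the protected attribute;
        any resulting prediction change is an individual fairness violation."""
        violations = []
        for _ in range(trials):
            x = sample_input()  # draw a random input as a feature dict
            for v in values:
                if v == x[protected_attr]:
                    continue
                x_prime = dict(x, **{protected_attr: v})
                if classify(x) != classify(x_prime):
                    violations.append((x, x_prime))
                    break
        return violations  # violating pairs can be used to retrain the model

The returned violating pairs serve two purposes: they quantify how discriminatory the classifier is, and they can be fed back as additional training data to systematically improve the model's fairness.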