no code implementations • 26 Dec 2022 • Narine Kokhlikyan, Bilal Alsallakh, Fulton Wang, Vivek Miglani, Oliver Aobo Yang, David Adkins
We propose a fairness-aware learning framework that mitigates intersectional subgroup bias associated with protected attributes.
no code implementations • 27 Apr 2022 • David Adkins, Bilal Alsallakh, Adeel Cheema, Narine Kokhlikyan, Emily McReynolds, Pushkar Mishra, Chavez Procope, Jeremy Sawruk, Erin Wang, Polina Zvyagina
We further propose a preliminary approach, called Method Cards, which aims to increase the transparency and reproducibility of ML systems by providing prescriptive documentation of commonly used ML methods and techniques.
no code implementations • 19 Apr 2022 • Bilal Alsallakh, Pamela Bhattacharya, Vanessa Feng, Narine Kokhlikyan, Orion Reblitz-Richardson, Rahul Rajan, David Yan
We survey a number of data visualization techniques for analyzing Computer Vision (CV) datasets.
no code implementations • NeurIPS Workshop SVRHM 2021 • Bilal Alsallakh, Vivek Miglani, Narine Kokhlikyan, David Adkins, Orion Reblitz-Richardson
When convolutional layers apply no padding, central pixels have more ways to contribute to the convolution than peripheral pixels.
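As a toy illustration of this effect (a sketch, not the paper's analysis): for a valid, no-padding convolution along one axis, one can count how many sliding windows cover each position, showing that border positions participate in far fewer windows than interior ones.

```python
def coverage_counts(n: int, k: int) -> list:
    """Count how many valid (no-padding) windows of width k
    cover each of the n positions along one spatial axis."""
    counts = [0] * n
    for start in range(n - k + 1):      # every valid window position
        for i in range(start, start + k):
            counts[i] += 1
    return counts

# 1-D example: 8 pixels, kernel width 3
print(coverage_counts(8, 3))  # → [1, 2, 3, 3, 3, 3, 2, 1]
```

Edge pixels are covered by a single window while interior pixels are covered by three, which is the asymmetry the entry describes.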
no code implementations • 8 Jun 2021 • Narine Kokhlikyan, Vivek Miglani, Bilal Alsallakh, Miguel Martin, Orion Reblitz-Richardson
Saliency maps have been shown to be both useful and misleading for explaining model predictions, especially in the context of images.
1 code implementation • 23 Oct 2020 • Vivek Miglani, Narine Kokhlikyan, Bilal Alsallakh, Miguel Martin, Orion Reblitz-Richardson
We explore these effects and find that gradients in saturated regions of this path, where model output changes minimally, contribute disproportionately to the computed attribution.
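To make the path and its saturated regions concrete, here is a minimal one-dimensional sketch of the Integrated Gradients computation (a Riemann-sum approximation over a toy saturating model; the tanh function, baseline, and step count are illustrative assumptions, not the paper's setup):

```python
import math

def integrated_gradient(grad_f, baseline, x, steps=500):
    """Riemann-sum approximation of Integrated Gradients for a
    scalar input: (x - baseline) times the average gradient
    along the straight path from baseline to x."""
    avg_grad = sum(
        grad_f(baseline + (i / steps) * (x - baseline))
        for i in range(1, steps + 1)
    ) / steps
    return (x - baseline) * avg_grad

# toy model that saturates quickly: f(x) = tanh(4x)
f = lambda x: math.tanh(4 * x)
grad = lambda x: 4 * (1 - math.tanh(4 * x) ** 2)  # derivative of f

attr = integrated_gradient(grad, baseline=0.0, x=2.0)
# completeness axiom: attribution should approximate f(x) - f(baseline)
print(attr, f(2.0) - f(0.0))
```

In this toy model the gradient is nearly zero once the path passes roughly x = 1: that flat stretch is the saturated region, where the model output changes minimally.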
1 code implementation • ICLR 2021 • Bilal Alsallakh, Narine Kokhlikyan, Vivek Miglani, Jun Yuan, Orion Reblitz-Richardson
We show how feature maps in convolutional networks are susceptible to spatial bias.
2 code implementations • 16 Sep 2020 • Narine Kokhlikyan, Vivek Miglani, Miguel Martin, Edward Wang, Bilal Alsallakh, Jonathan Reynolds, Alexander Melnikov, Natalia Kliushkina, Carlos Araya, Siqi Yan, Orion Reblitz-Richardson
The library contains generic implementations of a number of gradient and perturbation-based attribution algorithms, also known as feature, neuron and layer importance algorithms, as well as a set of evaluation metrics for these algorithms.
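As a hedged sketch of the simplest perturbation-based attribution the description mentions (occlusion-style feature importance; the toy linear model and zero baseline here are illustrative assumptions, not the library's implementation):

```python
def occlusion_attribution(model, x, baseline=0.0):
    """Perturbation-based feature importance: replace each input
    feature with a baseline value and record the output drop."""
    base_out = model(x)
    attributions = []
    for i in range(len(x)):
        perturbed = list(x)
        perturbed[i] = baseline      # occlude one feature
        attributions.append(base_out - model(perturbed))
    return attributions

# toy linear model: output = 2*x0 + 1*x1 - 3*x2
model = lambda v: 2 * v[0] + 1 * v[1] - 3 * v[2]
print(occlusion_attribution(model, [1.0, 1.0, 1.0]))  # → [2.0, 1.0, -3.0]
```

For a linear model the output drop recovers each feature's weight exactly; gradient-based methods in the library estimate importance from derivatives instead of perturbations.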
1 code implementation • 12 Jul 2020 • Bilal Alsallakh, Zhixin Yan, Shabnam Ghaffarzadegan, Zeng Dai, Liu Ren
We propose a measure to compute class similarity in large-scale classification based on prediction scores.
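The snippet does not specify the measure itself; the following is a hypothetical sketch of one natural prediction-score-based similarity (the mean score assigned to class j on samples whose true class is i, symmetrized) — an assumption for illustration, not the paper's definition:

```python
def class_similarity(scores, labels, n_classes):
    """Hypothetical similarity: mean prediction score the model
    assigns to class j on samples of true class i, symmetrized."""
    sums = [[0.0] * n_classes for _ in range(n_classes)]
    counts = [0] * n_classes
    for row, c in zip(scores, labels):
        counts[c] += 1
        for j in range(n_classes):
            sums[c][j] += row[j]
    mean = [[sums[i][j] / counts[i] for j in range(n_classes)]
            for i in range(n_classes)]
    return [[(mean[i][j] + mean[j][i]) / 2 for j in range(n_classes)]
            for i in range(n_classes)]

scores = [[0.80, 0.15, 0.05], [0.70, 0.20, 0.10],   # true class 0
          [0.20, 0.70, 0.10], [0.10, 0.80, 0.10],   # true class 1
          [0.05, 0.05, 0.90], [0.10, 0.10, 0.80]]   # true class 2
labels = [0, 0, 1, 1, 2, 2]
sim = class_similarity(scores, labels, 3)
# classes 0 and 1 come out more similar than classes 0 and 2
```

The appeal of a score-based measure is that it needs only the classifier's outputs, so it scales to classifiers with many classes without inspecting internals.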
no code implementations • 18 Nov 2017 • Medha Katehara, Emma Beauxis-Aussalet, Bilal Alsallakh
Most multi-class classifiers make their prediction for a test sample by scoring the classes and selecting the one with the highest score.
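That selection step is simply an argmax over the class scores; a minimal sketch:

```python
def predict(scores):
    """Return the index of the class with the highest score."""
    return max(range(len(scores)), key=lambda c: scores[c])

print(predict([0.1, 0.7, 0.2]))  # → 1
```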
no code implementations • 17 Oct 2017 • Bilal Alsallakh, Amin Jourabloo, Mao Ye, Xiaoming Liu, Liu Ren
We present visual-analytics methods to reveal and analyze this hierarchy of similar classes in relation to CNN-internal data.