no code implementations • 30 May 2023 • Camila Kolling, Till Speicher, Vedant Nanda, Mariya Toneva, Krishna P. Gummadi
Concretely, we show how PNKA can be leveraged to develop a deeper understanding of (a) the input examples that are likely to be misclassified, (b) the concepts encoded by (individual) neurons in a layer, and (c) the effects of fairness interventions on learned representations.
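The excerpt above does not spell out how PNKA scores individual examples, so the following is only a hedged sketch of a *pointwise* representation-similarity measure in that spirit: for each example, compare its cosine-similarity profile to all other examples under two different representations. The function name and the exact normalization are assumptions, not the paper's definition.

```python
import numpy as np

def pointwise_similarity(R, R_prime):
    """Hedged sketch of a pointwise representation-similarity score.

    R, R_prime: arrays of shape (n_examples, dim), two representations
    of the same n examples. For each example i, we compare its
    cosine-similarity profile to all examples under each representation;
    a score near 1 means example i is represented consistently in both.
    The exact PNKA definition may differ from this sketch.
    """
    def sim_matrix(X):
        # row-normalize, then pairwise cosine similarities
        Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
        return Xn @ Xn.T

    K, Kp = sim_matrix(R), sim_matrix(R_prime)
    scores = np.empty(len(R))
    for i in range(len(R)):
        a, b = K[i], Kp[i]
        scores[i] = (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return scores
```

Under such a measure, examples with low scores are represented differently by the two models, which is the kind of signal the abstract links to likely misclassifications.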
1 code implementation • 23 Jun 2022 • Vedant Nanda, Till Speicher, Camila Kolling, John P. Dickerson, Krishna P. Gummadi, Adrian Weller
Our work offers a new view on robustness by using another reference NN to define the set of perturbations a given NN should be invariant to, thus generalizing the reliance on a reference "human NN" to any NN.
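The idea of reference-defined invariance can be sketched concretely: among perturbations that leave the reference model's output (nearly) unchanged, measure how much the target model's output moves. The helper below is a hypothetical illustration with assumed names; the paper's actual procedure (e.g. how perturbations are found) is not given in this excerpt.

```python
import numpy as np

def relative_invariance_gaps(f_target, f_ref, x, perturbations, tol=1e-3):
    """Hedged sketch: for each perturbation d that keeps the reference
    model's output within `tol` of its output on x, record how far the
    target model's output on x + d moves. Large gaps indicate the target
    is not invariant to perturbations the reference treats as identical.
    """
    base_t, base_r = f_target(x), f_ref(x)
    gaps = []
    for d in perturbations:
        if np.linalg.norm(f_ref(x + d) - base_r) <= tol:
            gaps.append(float(np.linalg.norm(f_target(x + d) - base_t)))
    return gaps
```

With a "human NN" as `f_ref`, this recovers the usual perceptual-robustness setup; the generalization is that `f_ref` can be any network.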
no code implementations • 13 Apr 2022 • Camila Kolling, Victor Araujo, Adriano Veloso, Soraia Raupp Musse
Hence, in this work, we introduce a novel learning method that combines both subjective human-based labels and objective annotations based on mathematical definitions of facial traits.
1 code implementation • 29 Nov 2021 • Vedant Nanda, Ayan Majumdar, Camila Kolling, John P. Dickerson, Krishna P. Gummadi, Bradley C. Love, Adrian Weller
One necessary criterion for a network's invariances to align with human perception is that its IRIs look 'similar' to humans.
no code implementations • 12 Feb 2020 • Camila Kolling, Jônatas Wehrmann, Rodrigo C. Barros
Our major contribution is to identify core components for training VQA models so as to maximize their predictive performance.