Search Results for author: Ruth Fong

Found 12 papers, 6 papers with code

Debiasing Convolutional Neural Networks via Meta Orthogonalization

1 code implementation · 15 Nov 2020 · Kurtis Evan David, Qiang Liu, Ruth Fong

While deep learning models often achieve strong task performance, their successes are hampered by their inability to disentangle spurious correlations from causative factors, such as when they use protected attributes (e.g., race, gender, etc.)

Word Embeddings

Contextual Semantic Interpretability

1 code implementation · 18 Sep 2020 · Diego Marcos, Ruth Fong, Sylvain Lobry, Remi Flamary, Nicolas Courty, Devis Tuia

Once the attributes are learned, they can be re-combined to reach the final decision and provide both an accurate prediction and an explicit reasoning behind the CNN decision.

There and Back Again: Revisiting Backpropagation Saliency Methods

1 code implementation · CVPR 2020 · Sylvestre-Alvise Rebuffi, Ruth Fong, Xu Ji, Andrea Vedaldi

Saliency methods seek to explain the predictions of a model by producing an importance map across each input sample.
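The simplest member of the backpropagation-saliency family the paper revisits is gradient × input attribution. As a hedged illustration only (not the paper's unified framework), the sketch below computes such an importance map for a hand-differentiable linear model, where the gradient of a class score with respect to the input is just the corresponding weight row; all names and shapes here are illustrative assumptions.

```python
import numpy as np

def saliency_map(weights, x, class_idx):
    """Gradient * input attribution for a linear model score = W @ x.

    For a linear model, d(score_c)/dx is simply row c of W, so the
    importance of each input feature is |grad * input|."""
    grad = weights[class_idx]      # exact gradient of the class score
    return np.abs(grad * x)        # per-feature importance map

# Toy example: 3 classes, 8 input features.
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 8))
x = rng.normal(size=8)
m = saliency_map(W, x, class_idx=1)
```

For a deep network the gradient would come from automatic differentiation rather than a weight row, but the resulting map plays the same role: one importance score per input element.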


Occlusions for Effective Data Augmentation in Image Classification

no code implementations · 23 Oct 2019 · Ruth Fong, Andrea Vedaldi

Deep networks for visual recognition are known to leverage "easy to recognise" portions of objects such as faces and distinctive texture patterns.

Classification · Data Augmentation · +2

NormGrad: Finding the Pixels that Matter for Training

no code implementations · 19 Oct 2019 · Sylvestre-Alvise Rebuffi, Ruth Fong, Xu Ji, Hakan Bilen, Andrea Vedaldi

In this paper, we are instead interested in the locations of an image that contribute to the model's training.


Understanding Deep Networks via Extremal Perturbations and Smooth Masks

1 code implementation · ICCV 2019 · Ruth Fong, Mandela Patrick, Andrea Vedaldi

In this paper, we discuss some of the shortcomings of existing approaches to perturbation analysis and address them by introducing the concept of extremal perturbations, which are theoretically grounded and interpretable.
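The defining idea of an extremal perturbation is to fix the size of the perturbed region in advance and ask which region of that size has the largest effect on the model's output. The paper optimizes smooth pixel masks by gradient descent; the sketch below is only a toy discrete analogue for a linear score, where the area-constrained mask that best preserves the score simply keeps the features with the largest contributions (all names here are illustrative assumptions).

```python
import numpy as np

def extremal_mask(contributions, area):
    """Toy discrete analogue of an extremal perturbation.

    Among all binary masks that keep exactly `area` features, the one
    preserving the most of a linear score w @ x keeps the features with
    the largest contributions w_i * x_i."""
    mask = np.zeros_like(contributions)
    keep = np.argsort(contributions)[-area:]   # indices of top-`area` contributions
    mask[keep] = 1.0
    return mask

contribs = np.array([0.2, 3.0, -1.0, 1.5])
mask = extremal_mask(contribs, area=2)         # keeps indices 1 and 3
```

Fixing the area and maximizing the effect (rather than trading the two off in one objective) is what makes the resulting masks comparable across images, since the constraint has the same meaning everywhere.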

Interpretable Machine Learning

Net2Vec: Quantifying and Explaining how Concepts are Encoded by Filters in Deep Neural Networks

no code implementations · CVPR 2018 · Ruth Fong, Andrea Vedaldi

By studying such embeddings, we are able to show that (1) in most cases, multiple filters are required to code for a concept; (2) filters are often not concept-specific and help encode multiple concepts; and (3) compared to single-filter activations, filter embeddings better characterize the meaning of a representation and its relationship to other concepts.
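A filter embedding of this kind can be pictured as a learned weighting over filters whose combination predicts a concept, so the weight vector itself becomes the concept's representation in filter space. The sketch below, a minimal stand-in using plain logistic regression on synthetic "filter activations" (not the paper's segmentation setup), shows the idea; the data and names are illustrative assumptions.

```python
import numpy as np

def concept_weights(acts, labels, lr=0.5, steps=400):
    """Fit weights over filter activations so their weighted combination
    predicts a binary concept label (logistic regression).

    The learned weight vector is the concept's embedding in filter space:
    large |w_k| means filter k helps encode the concept."""
    n_filters = acts.shape[1]
    w = np.zeros(n_filters)
    b = 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(acts @ w + b)))    # predicted probability
        w -= lr * acts.T @ (p - labels) / len(labels)
        b -= lr * np.mean(p - labels)
    return w

# Synthetic data: the "concept" is present exactly when filter 0 fires.
rng = np.random.default_rng(0)
acts = rng.normal(size=(200, 4))
labels = (acts[:, 0] > 0).astype(float)
w = concept_weights(acts, labels)
```

On this toy data the largest weight lands on filter 0; in the paper's real setting several filters typically share nonzero weight, which is precisely finding (1) above.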

Interpretable Explanations of Black Boxes by Meaningful Perturbation

5 code implementations · ICCV 2017 · Ruth Fong, Andrea Vedaldi

As machine learning algorithms are increasingly applied to high impact yet high risk tasks, such as medical diagnosis or autonomous driving, it is critical that researchers can explain how such algorithms arrived at their predictions.
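The paper's meaningful-perturbation approach learns the smallest deletion mask that maximally drops the class score, optimizing over blurred image regions with smoothness priors. As a heavily simplified, hedged sketch only (a one-dimensional linear stand-in with a hand-derived gradient, not the paper's implementation), one can see the mechanics:

```python
import numpy as np

def deletion_mask(w, x, lam=0.5, lr=0.1, steps=300):
    """Toy perturbation-based explanation for a linear score f(x) = w @ x.

    Learns a preservation mask m in [0, 1] minimizing
        f(x * m) + lam * sum(1 - m),
    i.e. delete as little as possible while lowering the score. Features
    driven to m ~= 0 are the evidence for the class; lam sets how much
    score drop is worth one unit of deleted input."""
    m = np.ones_like(x)
    for _ in range(steps):
        grad = w * x - lam                   # d/dm of the objective (linear f)
        m = np.clip(m - lr * grad, 0.0, 1.0)
    return m

# Feature 0 strongly supports the class, so the mask deletes it.
m = deletion_mask(np.array([3.0, 0.1, -2.0]), np.ones(3))
```

The real method replaces the linear score with a CNN evaluated on a blur-perturbed image and obtains the mask gradient by backpropagation, but the trade-off being optimized is the same.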

Interpretable Machine Learning

Using Human Brain Activity to Guide Machine Learning

no code implementations · 16 Mar 2017 · Ruth Fong, Walter Scheirer, David Cox

The effectiveness of this approach points to a path forward for a new class of hybrid machine learning algorithms which take both inspiration and direct constraints from neuronal data.

Object Recognition
