Search Results for author: Ruth Fong

Found 21 papers, 10 papers with code

UFO: A unified method for controlling Understandability and Faithfulness Objectives in concept-based explanations for CNNs

no code implementations · 27 Mar 2023 · Vikram V. Ramaswamy, Sunnie S. Y. Kim, Ruth Fong, Olga Russakovsky

In this work, we propose UFO, a unified method for controlling Understandability and Faithfulness Objectives in concept-based explanations.

Interactive Visual Feature Search

1 code implementation · 28 Nov 2022 · Devon Ulrich, Ruth Fong

Many visualization techniques have been created to explain the behavior of computer vision models, but they largely consist of static diagrams that convey limited information.

Improving Data-Efficient Fossil Segmentation via Model Editing

no code implementations · 8 Oct 2022 · Indu Panigrahi, Ryan Manzuk, Adam Maloof, Ruth Fong

Using a Mask R-CNN to segment ancient reef fossils in rock sample images, we present a two-part paradigm to improve fossil segmentation with few labeled images: we first identify model weaknesses using image perturbations and then mitigate those weaknesses using model editing.
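The first half of that paradigm, probing a segmentation model with perturbed inputs and scoring the damage, can be sketched roughly as below. The `segment` function, the darkening perturbation, and the IoU-drop score are illustrative stand-ins of my own, not the paper's implementation.

```python
import numpy as np

def iou(pred, target):
    """Intersection-over-union of two boolean masks."""
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return inter / union if union else 1.0

def weakness_score(segment, image, target, perturb):
    """Drop in IoU when the input is perturbed; large drops flag a model weakness."""
    base = iou(segment(image), target)
    pert = iou(segment(perturb(image)), target)
    return base - pert

# Toy example: a 'model' that thresholds brightness, probed with a darkening perturbation.
image = np.linspace(0.0, 1.0, 100).reshape(10, 10)
target = image > 0.5
segment = lambda x: x > 0.5       # hypothetical segmenter
darken = lambda x: x * 0.5        # hypothetical perturbation
drop = weakness_score(segment, image, target, darken)
```

A brightness-sensitive model loses all of its IoU here, which would flag darkening as a weakness worth addressing with model editing.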

Tasks: Image Classification, Image Segmentation, +3

"Help Me Help the AI": Understanding How Explainability Can Support Human-AI Interaction

no code implementations · 2 Oct 2022 · Sunnie S. Y. Kim, Elizabeth Anne Watkins, Olga Russakovsky, Ruth Fong, Andrés Monroy-Hernández

Despite the proliferation of explainable AI (XAI) methods, little is understood about end-users' explainability needs and behaviors around XAI explanations.

Tasks: Explainable Artificial Intelligence (XAI)

Overlooked factors in concept-based explanations: Dataset choice, concept learnability, and human capability

1 code implementation · CVPR 2023 · Vikram V. Ramaswamy, Sunnie S. Y. Kim, Ruth Fong, Olga Russakovsky

Second, we find that concepts in the probe dataset are often less salient and harder to learn than the classes they claim to explain, calling into question the correctness of the explanations.

Gender Artifacts in Visual Datasets

no code implementations · ICCV 2023 · Nicole Meister, Dora Zhao, Angelina Wang, Vikram V. Ramaswamy, Ruth Fong, Olga Russakovsky

Gender biases are known to exist within large-scale visual datasets and can be reflected or even amplified in downstream models.

ELUDE: Generating interpretable explanations via a decomposition into labelled and unlabelled features

no code implementations · 15 Jun 2022 · Vikram V. Ramaswamy, Sunnie S. Y. Kim, Nicole Meister, Ruth Fong, Olga Russakovsky

Specifically, we develop a novel explanation framework ELUDE (Explanation via Labelled and Unlabelled DEcomposition) that decomposes a model's prediction into two parts: one that is explainable through a linear combination of the semantic attributes, and another that is dependent on the set of uninterpretable features.
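The "labelled" half of such a decomposition can be sketched with ordinary least squares: regress the model's prediction on the semantic attributes, and treat the residual as the part owed to unlabelled features. This is a minimal sketch under made-up data; ELUDE itself operates on learned feature spaces, not this toy regression.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: model logits that are mostly explained by binary semantic attributes.
n, k = 200, 5
attributes = rng.integers(0, 2, size=(n, k)).astype(float)   # labelled attribute matrix
true_w = np.array([2.0, -1.0, 0.5, 0.0, 1.5])
logits = attributes @ true_w + 0.1 * rng.standard_normal(n)  # predictions to explain

# Explainable part: a linear combination of attributes (least squares).
w, *_ = np.linalg.lstsq(attributes, logits, rcond=None)
explained = attributes @ w
residual = logits - explained          # part attributed to unlabelled features

frac_unexplained = residual.var() / logits.var()
```

The recovered weights give a per-attribute explanation, and `frac_unexplained` quantifies how much of the prediction the labelled attributes fail to account for.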


HIVE: Evaluating the Human Interpretability of Visual Explanations

1 code implementation · 6 Dec 2021 · Sunnie S. Y. Kim, Nicole Meister, Vikram V. Ramaswamy, Ruth Fong, Olga Russakovsky

As AI technology is increasingly applied to high-impact, high-risk domains, there have been a number of new methods aimed at making AI models more human-interpretable.

Tasks: Decision Making

Debiasing Convolutional Neural Networks via Meta Orthogonalization

1 code implementation · 15 Nov 2020 · Kurtis Evan David, Qiang Liu, Ruth Fong

While deep learning models often achieve strong task performance, their successes are hampered by their inability to disentangle spurious correlations from causative factors, such as when they use protected attributes (e.g., race, gender, etc.).
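The paper's Meta Orthogonalization pushes concept directions inside a CNN to be orthogonal to protected-attribute directions. As a generic sketch of that underlying idea only (not the paper's training procedure), one can project features onto the orthogonal complement of a bias direction; all names here are my own.

```python
import numpy as np

def project_out(features, bias_dir):
    """Remove each feature vector's component along a protected-attribute direction."""
    d = bias_dir / np.linalg.norm(bias_dir)
    return features - np.outer(features @ d, d)

# Toy example: features correlated with a hypothetical 'gender' direction.
rng = np.random.default_rng(1)
gender_dir = np.array([1.0, 0.0, 0.0])
feats = rng.standard_normal((100, 3)) + 0.8 * gender_dir
debiased = project_out(feats, gender_dir)
# After projection, the features carry no component along the bias direction.
```

A linear probe for the protected attribute would perform at chance on `debiased`, which is the intuition behind orthogonalization-style debiasing.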

Tasks: Word Embeddings

Multi-modal Self-Supervision from Generalized Data Transformations

no code implementations · 28 Sep 2020 · Mandela Patrick, Yuki Asano, Polina Kuznetsova, Ruth Fong, Joao F. Henriques, Geoffrey Zweig, Andrea Vedaldi

In this paper, we show that, for videos, the answer is more complex, and that better results can be obtained by accounting for the interplay between invariance, distinctiveness, multiple modalities and time.

Tasks: Audio Classification, Retrieval, +1

Contextual Semantic Interpretability

1 code implementation · 18 Sep 2020 · Diego Marcos, Ruth Fong, Sylvain Lobry, Remi Flamary, Nicolas Courty, Devis Tuia

Once the attributes are learned, they can be re-combined to reach the final decision and provide both an accurate prediction and an explicit reasoning behind the CNN decision.

There and Back Again: Revisiting Backpropagation Saliency Methods

1 code implementation · CVPR 2020 · Sylvestre-Alvise Rebuffi, Ruth Fong, Xu Ji, Andrea Vedaldi

Saliency methods seek to explain the predictions of a model by producing an importance map across each input sample.
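The simplest backpropagation saliency signal is the input gradient, often combined with the input itself. Real saliency methods differentiate a deep network with autograd; a linear scorer keeps this sketch dependency-free, and the function name is mine.

```python
import numpy as np

def gradient_saliency(weights, x):
    """For a linear scorer s(x) = w·x, the input gradient is w itself;
    |gradient * input| gives a per-feature importance map."""
    grad = weights                      # d(w·x)/dx = w for a linear model
    return np.abs(grad * x)

# Toy example: feature 2 dominates both the weight and the input.
w = np.array([0.1, 0.0, 2.0, -0.5])
x = np.array([1.0, 3.0, 1.5, 1.0])
saliency = gradient_saliency(w, x)
most_important = int(np.argmax(saliency))   # feature 2
```

For images, the same map computed over pixels is reshaped into the 2D importance map the abstract describes.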


Occlusions for Effective Data Augmentation in Image Classification

no code implementations · 23 Oct 2019 · Ruth Fong, Andrea Vedaldi

Deep networks for visual recognition are known to leverage "easy to recognise" portions of objects such as faces and distinctive texture patterns.
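Occlusion-based augmentation counters this by hiding random regions during training, in the spirit of Cutout-style methods. A minimal sketch, with parameter names of my own choosing:

```python
import numpy as np

def random_occlusion(image, size, rng):
    """Zero out a random square patch so the network cannot rely on one 'easy' region."""
    h, w = image.shape[:2]
    y = rng.integers(0, h - size + 1)
    x = rng.integers(0, w - size + 1)
    out = image.copy()
    out[y:y + size, x:x + size] = 0.0
    return out

rng = np.random.default_rng(0)
img = np.ones((32, 32))
aug = random_occlusion(img, size=8, rng=rng)
# Exactly one 8x8 patch has been erased; the original image is untouched.
```

Applied on the fly in a training loop, each epoch sees a differently occluded copy of every image.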

Tasks: Classification, Data Augmentation, +2

NormGrad: Finding the Pixels that Matter for Training

no code implementations · 19 Oct 2019 · Sylvestre-Alvise Rebuffi, Ruth Fong, Xu Ji, Hakan Bilen, Andrea Vedaldi

In this paper, we are instead interested in the locations of an image that contribute to the model's training.


Understanding Deep Networks via Extremal Perturbations and Smooth Masks

2 code implementations · ICCV 2019 · Ruth Fong, Mandela Patrick, Andrea Vedaldi

In this paper, we discuss some of the shortcomings of existing approaches to perturbation analysis and address them by introducing the concept of extremal perturbations, which are theoretically grounded and interpretable.
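Extremal perturbations constrain the mask to cover an exact fraction of the image, then optimize which region that fraction covers. This sketch shows only a hard version of the area-constraint step, keeping the top `area` fraction of a soft mask; the function name is mine and the paper's actual constraint is enforced smoothly during optimization.

```python
import numpy as np

def project_to_area(mask, area):
    """Binarize a soft mask so that exactly an `area` fraction of pixels are kept,
    retaining the highest-valued entries."""
    k = int(round(area * mask.size))
    thresh = np.sort(mask.ravel())[::-1][k - 1]
    return (mask >= thresh).astype(float)

soft = np.random.default_rng(0).random((16, 16))
hard = project_to_area(soft, area=0.25)
# A quarter of the pixels survive, regardless of the soft mask's values.
```

Fixing the area removes the trade-off hyperparameter between mask size and class-score preservation that earlier perturbation methods had to tune.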

Tasks: Interpretable Machine Learning

Net2Vec: Quantifying and Explaining how Concepts are Encoded by Filters in Deep Neural Networks

1 code implementation · CVPR 2018 · Ruth Fong, Andrea Vedaldi

By studying such embeddings, we are able to show that (1) in most cases, multiple filters are required to code for a concept; (2) filters are often not concept-specific and help encode multiple concepts; and (3) compared to single-filter activations, filter embeddings are able to better characterize the meaning of a representation and its relationship to other concepts.
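The core move, expressing a concept as a learned weighting over many filters rather than one, can be sketched with a toy regression. Net2Vec fits these weights with logistic regression over real activation maps; least squares on synthetic maps stands in here, and all names are mine.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: 6 filter activation maps over an 8x8 grid; a 'concept' map that
# no single filter produces alone, but a combination of filters 0 and 1 does.
filters = rng.random((6, 8, 8))
concept_map = 0.7 * filters[0] + 0.3 * filters[1]

# Net2Vec-style: fit one weight per filter to reconstruct the concept map.
A = filters.reshape(6, -1).T               # (locations, filters)
w, *_ = np.linalg.lstsq(A, concept_map.ravel(), rcond=None)
# The recovered weight vector is the concept's 'embedding' over filters,
# showing the concept is distributed across filters 0 and 1.
```

Comparing such embeddings across concepts is what lets the paper quantify filter sharing and concept overlap.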

Interpretable Explanations of Black Boxes by Meaningful Perturbation

6 code implementations · ICCV 2017 · Ruth Fong, Andrea Vedaldi

As machine learning algorithms are increasingly applied to high-impact yet high-risk tasks, such as medical diagnosis or autonomous driving, it is critical that researchers can explain how such algorithms arrived at their predictions.
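Meaningful perturbation learns a mask that, when used to blend the input toward a baseline (such as a blurred copy), maximally suppresses the class score. Shown here is just the perturbation operator; the method itself learns the mask by gradient descent with regularizers, and the mean-valued baseline below is my simplification of the paper's blur.

```python
import numpy as np

def perturb(image, mask, baseline):
    """Perturbation operator: keep pixels where mask=1,
    replace with a baseline (e.g. a blurred copy) where mask=0."""
    return mask * image + (1.0 - mask) * baseline

# Toy example: 'deleting' the top half of an image by blending toward its mean.
img = np.arange(16.0).reshape(4, 4)
baseline = np.full_like(img, img.mean())
mask = np.ones_like(img)
mask[:2] = 0.0                   # delete the top half
out = perturb(img, mask, baseline)
# Bottom half is untouched; top half is replaced by the baseline value.
```

Because the mask is continuous, the same operator supports the gradient-based mask optimization that produces the paper's explanations.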

Tasks: Interpretable Machine Learning

Using Human Brain Activity to Guide Machine Learning

no code implementations · 16 Mar 2017 · Ruth Fong, Walter Scheirer, David Cox

The effectiveness of this approach points to a path forward for a new class of hybrid machine learning algorithms which take both inspiration and direct constraints from neuronal data.

Tasks: BIG-bench Machine Learning, Object Recognition
