Search Results for author: Sunnie S. Y. Kim

Found 13 papers, 8 papers with code

Allowing humans to interactively guide machines where to look does not always improve human-AI team's classification accuracy

1 code implementation • 8 Apr 2024 • Giang Nguyen, Mohammad Reza Taesiri, Sunnie S. Y. Kim, Anh Nguyen

We build CHM-Corr++, an interactive interface for CHM-Corr, enabling users to edit the feature attribution map provided by CHM-Corr and observe updated model decisions.

Feature Importance · Image Classification
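
A minimal sketch of the interaction loop described above, not the CHM-Corr++ implementation: the user edits an attribution map, the edited map masks the input, and the classifier is queried again so the updated decision can be observed. The `model`, `image`, and edited `attribution` tensors are assumed to exist, and the masking rule is a deliberately simple stand-in.

```python
# Hypothetical re-query step: mask the input by a user-edited attribution map
# and re-classify. An illustration of the idea, not the CHM-Corr++ code.
import torch
import torch.nn.functional as F

def predict_with_edited_attribution(model, image, attribution, keep_threshold=0.5):
    """image: (1, 3, H, W); attribution: (h, w) in [0, 1], edited by the user."""
    # Upsample the attribution map to image resolution and binarize it.
    mask = F.interpolate(attribution[None, None], size=image.shape[-2:], mode="bilinear")
    mask = (mask >= keep_threshold).float()
    # Keep only the regions the user marked as relevant, then observe the new decision.
    with torch.no_grad():
        logits = model(image * mask)
    return logits.softmax(dim=-1)
```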

WiCV@CVPR2023: The Eleventh Women In Computer Vision Workshop at the Annual CVPR Conference

no code implementations • 22 Sep 2023 • Doris Antensteiner, Marah Halawa, Asra Aslam, Ivaxi Sheth, Sachini Herath, Ziqi Huang, Sunnie S. Y. Kim, Aparna Akula, Xin Wang

In this paper, we present the details of the Women in Computer Vision Workshop (WiCV 2023), organized alongside the hybrid CVPR 2023 conference in Vancouver, Canada.

UFO: A unified method for controlling Understandability and Faithfulness Objectives in concept-based explanations for CNNs

no code implementations • 27 Mar 2023 • Vikram V. Ramaswamy, Sunnie S. Y. Kim, Ruth Fong, Olga Russakovsky

In this work, we propose UFO, a unified method for controlling Understandability and Faithfulness Objectives in concept-based explanations.

"Help Me Help the AI": Understanding How Explainability Can Support Human-AI Interaction

no code implementations • 2 Oct 2022 • Sunnie S. Y. Kim, Elizabeth Anne Watkins, Olga Russakovsky, Ruth Fong, Andrés Monroy-Hernández

Despite the proliferation of explainable AI (XAI) methods, little is understood about end-users' explainability needs and behaviors around XAI explanations.

Explainable Artificial Intelligence (XAI)

Overlooked factors in concept-based explanations: Dataset choice, concept learnability, and human capability

1 code implementation • CVPR 2023 • Vikram V. Ramaswamy, Sunnie S. Y. Kim, Ruth Fong, Olga Russakovsky

Second, we find that concepts in the probe dataset are often less salient and harder to learn than the classes they claim to explain, calling into question the correctness of the explanations.
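
The learnability finding above suggests a check one can reproduce: fit identical linear probes on frozen features for the concepts and for the classes they are meant to explain, then compare held-out accuracy. A minimal sketch, assuming `frozen_feats`, `concept_labels`, and `class_labels` arrays already exist (these names are illustrative):

```python
# Linear-probe proxy for learnability: lower held-out accuracy = harder to learn.
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

def probe_accuracy(features, labels, seed=0):
    X_tr, X_te, y_tr, y_te = train_test_split(
        features, labels, test_size=0.2, random_state=seed, stratify=labels
    )
    probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    return accuracy_score(y_te, probe.predict(X_te))

# If concepts are systematically harder to learn than the classes they explain,
# probe_accuracy(frozen_feats, concept_labels) will trail
# probe_accuracy(frozen_feats, class_labels).
```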

ELUDE: Generating interpretable explanations via a decomposition into labelled and unlabelled features

no code implementations • 15 Jun 2022 • Vikram V. Ramaswamy, Sunnie S. Y. Kim, Nicole Meister, Ruth Fong, Olga Russakovsky

Specifically, we develop a novel explanation framework ELUDE (Explanation via Labelled and Unlabelled DEcomposition) that decomposes a model's prediction into two parts: one that is explainable through a linear combination of the semantic attributes, and another that is dependent on the set of uninterpretable features.

Attribute
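
The decomposition ELUDE describes can be sketched with an off-the-shelf sparse linear fit: regress a model's logit onto the labelled semantic attributes and treat the residual as the part carried by uninterpretable features. This is an illustration under simplified assumptions, not the authors' implementation; `attributes` (an N × A matrix) and `logits` (length N) are assumed given.

```python
# prediction ≈ w · attributes + residual: the linear term is the "explainable"
# portion, the residual is left to the unlabelled feature space.
import numpy as np
from sklearn.linear_model import Lasso

def decompose_prediction(attributes, logits, sparsity=0.01):
    linear_part = Lasso(alpha=sparsity).fit(attributes, logits)  # sparse attribute weights
    explained = linear_part.predict(attributes)
    residual = logits - explained  # portion attributed to uninterpretable features
    frac_explained = 1.0 - np.var(residual) / np.var(logits)  # share the attributes capture
    return linear_part.coef_, residual, frac_explained
```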

HIVE: Evaluating the Human Interpretability of Visual Explanations

1 code implementation • 6 Dec 2021 • Sunnie S. Y. Kim, Nicole Meister, Vikram V. Ramaswamy, Ruth Fong, Olga Russakovsky

As AI technology is increasingly applied to high-impact, high-risk domains, there have been a number of new methods aimed at making AI models more human interpretable.

Decision Making

Cleaning and Structuring the Label Space of the iMet Collection 2020

1 code implementation • 1 Jun 2021 • Vivien Nguyen, Sunnie S. Y. Kim

The iMet 2020 dataset is a valuable resource in the space of fine-grained art attribution recognition, but we believe it has yet to reach its true potential.

Attribute

[Re] Don't Judge an Object by Its Context: Learning to Overcome Contextual Bias

1 code implementation • RC 2020 • Sunnie S. Y. Kim, Sharon Zhang, Nicole Meister, Olga Russakovsky

The implementation of most (7 of 10) methods was straightforward, especially after we received additional details from the original authors.

Attribute

Information-Theoretic Segmentation by Inpainting Error Maximization

1 code implementation • CVPR 2021 • Pedro Savarese, Sunnie S. Y. Kim, Michael Maire, Greg Shakhnarovich, David McAllester

We study image segmentation from an information-theoretic perspective, proposing a novel adversarial method that performs unsupervised segmentation by partitioning images into maximally independent sets.

Image Segmentation · Segmentation +2
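
A toy illustration of the inpainting-error-maximization idea above: a partition scores well when each region is hard to predict from the other. The mean-fill "inpainter" below is a crude stand-in for the learned inpainting networks the paper uses; `image` (a 2-D grayscale array) and `mask` (a same-shaped boolean array) are assumed.

```python
# Score a candidate foreground/background partition by mutual unpredictability.
import numpy as np

def mean_fill_inpaint(image, region):
    """Predict the masked region from the visible one by mean fill (toy inpainter)."""
    filled = image.copy()
    filled[region] = image[~region].mean()
    return filled

def iem_score(image, mask):
    """Higher when the two regions are mutually hard to inpaint, i.e. closer to
    maximally independent sets; maximize this over candidate masks."""
    fg, bg = mask, ~mask
    err_fg = np.abs(image - mean_fill_inpaint(image, fg))[fg].mean()
    err_bg = np.abs(image - mean_fill_inpaint(image, bg))[bg].mean()
    return err_fg + err_bg
```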

Fair Attribute Classification through Latent Space De-biasing

1 code implementation • CVPR 2021 • Vikram V. Ramaswamy, Sunnie S. Y. Kim, Olga Russakovsky

Fairness in visual recognition is becoming a prominent and critical topic of discussion as recognition systems are deployed at scale in the real world.

Attribute · Classification +2
