no code implementations • 12 Feb 2025 • Sunnie S. Y. Kim, Jennifer Wortman Vaughan, Q. Vera Liao, Tania Lombrozo, Olga Russakovsky
Large language models (LLMs) can produce erroneous responses that sound fluent and convincing, raising the risk that users will rely on these responses as if they were correct.
no code implementations • 1 May 2024 • Sunnie S. Y. Kim, Q. Vera Liao, Mihaela Vorvoreanu, Stephanie Ballard, Jennifer Wortman Vaughan
However, there has been little empirical work examining how users perceive and act upon LLMs' expressions of uncertainty.
1 code implementation • 8 Apr 2024 • Giang Nguyen, Mohammad Reza Taesiri, Sunnie S. Y. Kim, Anh Nguyen
We build CHM-Corr++, an interactive interface for CHM-Corr, enabling users to edit the feature importance map provided by CHM-Corr and observe updated model decisions.
no code implementations • 22 Sep 2023 • Doris Antensteiner, Marah Halawa, Asra Aslam, Ivaxi Sheth, Sachini Herath, Ziqi Huang, Sunnie S. Y. Kim, Aparna Akula, Xin Wang
In this paper, we present the details of Women in Computer Vision Workshop - WiCV 2023, organized alongside the hybrid CVPR 2023 in Vancouver, Canada.
no code implementations • 15 May 2023 • Sunnie S. Y. Kim, Elizabeth Anne Watkins, Olga Russakovsky, Ruth Fong, Andrés Monroy-Hernández
Trust is an important factor in people's interactions with AI systems.
no code implementations • 27 Mar 2023 • Vikram V. Ramaswamy, Sunnie S. Y. Kim, Ruth Fong, Olga Russakovsky
In this work, we propose UFO, a unified method for controlling Understandability and Faithfulness Objectives in concept-based explanations.
no code implementations • 2 Oct 2022 • Sunnie S. Y. Kim, Elizabeth Anne Watkins, Olga Russakovsky, Ruth Fong, Andrés Monroy-Hernández
Despite the proliferation of explainable AI (XAI) methods, little is understood about end-users' explainability needs and behaviors around XAI explanations.
1 code implementation • CVPR 2023 • Vikram V. Ramaswamy, Sunnie S. Y. Kim, Ruth Fong, Olga Russakovsky
Second, we find that concepts in the probe dataset are often less salient and harder to learn than the classes they claim to explain, calling into question the correctness of the explanations.
no code implementations • 15 Jun 2022 • Vikram V. Ramaswamy, Sunnie S. Y. Kim, Nicole Meister, Ruth Fong, Olga Russakovsky
Specifically, we develop a novel explanation framework ELUDE (Explanation via Labelled and Unlabelled DEcomposition) that decomposes a model's prediction into two parts: one that is explainable through a linear combination of the semantic attributes, and another that is dependent on the set of uninterpretable features.
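The core decomposition idea can be illustrated with a minimal sketch: fit the model's prediction as a linear combination of labelled semantic attributes, and treat whatever is left over as the residual attributable to uninterpretable features. This is a toy illustration only, with synthetic data and illustrative variable names; it is not the ELUDE implementation.

```python
import numpy as np

# Toy data (illustrative, not from the paper): model predictions (logits)
# for N examples and binary semantic-attribute annotations for the same examples.
rng = np.random.default_rng(0)
N, A = 200, 5
attributes = rng.integers(0, 2, size=(N, A)).astype(float)  # labelled attributes
true_w = np.array([2.0, -1.0, 0.5, 0.0, 1.5])
predictions = attributes @ true_w + 0.1 * rng.standard_normal(N)

# Explainable part: least-squares fit of the prediction on the attributes.
w, *_ = np.linalg.lstsq(attributes, predictions, rcond=None)
explained = attributes @ w

# Residual part: what the labelled attributes cannot account for.
residual = predictions - explained

# By construction the two parts sum back to the original prediction.
assert np.allclose(predictions, explained + residual)
```

The size of the residual relative to the prediction then indicates how much of the model's behavior the labelled attribute set can explain.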
1 code implementation • 6 Dec 2021 • Sunnie S. Y. Kim, Nicole Meister, Vikram V. Ramaswamy, Ruth Fong, Olga Russakovsky
As AI technology is increasingly applied to high-impact, high-risk domains, there have been a number of new methods aimed at making AI models more human interpretable.
1 code implementation • 1 Jun 2021 • Vivien Nguyen, Sunnie S. Y. Kim
The iMet 2020 dataset is a valuable resource in the space of fine-grained art attribution recognition, but we believe it has yet to reach its true potential.
1 code implementation • RC 2020 • Sunnie S. Y. Kim, Sharon Zhang, Nicole Meister, Olga Russakovsky
The implementation of most (7 of 10) methods was straightforward, especially after we received additional details from the original authors.
1 code implementation • CVPR 2021 • Pedro Savarese, Sunnie S. Y. Kim, Michael Maire, Greg Shakhnarovich, David McAllester
We study image segmentation from an information-theoretic perspective, proposing a novel adversarial method that performs unsupervised segmentation by partitioning images into maximally independent sets.
Ranked #1 on Unsupervised Image Segmentation on Flowers
1 code implementation • CVPR 2021 • Vikram V. Ramaswamy, Sunnie S. Y. Kim, Olga Russakovsky
Fairness in visual recognition is becoming a prominent and critical topic of discussion as recognition systems are deployed at scale in the real world.
1 code implementation • ECCV 2020 • Sunnie S. Y. Kim, Nicholas Kolkin, Jason Salavon, Gregory Shakhnarovich
Both geometry and texture are fundamental aspects of visual style.