Search Results for author: Umang Bhatt

Found 31 papers, 11 papers with code

When Should Algorithms Resign?

no code implementations • 28 Feb 2024 • Umang Bhatt, Holli Sargeant

This paper discusses algorithmic resignation, a strategic approach for managing the use of AI systems within organizations.

Comparing Abstraction in Humans and Large Language Models Using Multimodal Serial Reproduction

no code implementations • 6 Feb 2024 • Sreejan Kumar, Raja Marjieh, Byron Zhang, Declan Campbell, Michael Y. Hu, Umang Bhatt, Brenden Lake, Thomas L. Griffiths

To investigate the effect of language on the formation of abstractions, we implement a novel multimodal serial reproduction framework by asking people who receive a visual stimulus to reproduce it in a linguistic format, and vice versa.

Selective Concept Models: Permitting Stakeholder Customisation at Test-Time

no code implementations • 14 Jun 2023 • Matthew Barker, Katherine M. Collins, Krishnamurthy Dvijotham, Adrian Weller, Umang Bhatt

Concept-based models perform prediction using a set of concepts that are interpretable to stakeholders.

Learning Personalized Decision Support Policies

no code implementations • 13 Apr 2023 • Umang Bhatt, Valerie Chen, Katherine M. Collins, Parameswaran Kamalaruban, Emma Kallina, Adrian Weller, Ameet Talwalkar

In this work, we propose learning a decision support policy that, for a given input, chooses which form of support, if any, to provide.

Multi-Armed Bandits
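
The Multi-Armed Bandits tag suggests one natural formalization: treat each form of support as an arm and learn from feedback on the human's resulting decisions. A minimal epsilon-greedy sketch under that assumption (the arm names and reward signal are ours, not the paper's):

```python
import numpy as np

ARMS = ["no_support", "show_prediction", "show_explanation"]  # hypothetical arms

class SupportPolicy:
    """Epsilon-greedy bandit over forms of decision support (illustrative only)."""

    def __init__(self, n_arms=len(ARMS), eps=0.1, seed=0):
        self.counts = np.zeros(n_arms)
        self.values = np.zeros(n_arms)  # running mean reward per arm
        self.eps = eps
        self.rng = np.random.default_rng(seed)

    def choose(self):
        if self.rng.random() < self.eps:
            return int(self.rng.integers(len(self.counts)))  # explore
        return int(np.argmax(self.values))                   # exploit

    def update(self, arm, reward):  # reward: e.g., 1 if the human decided correctly
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]
```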

Human Uncertainty in Concept-Based AI Systems

no code implementations • 22 Mar 2023 • Katherine M. Collins, Matthew Barker, Mateo Espinosa Zarlenga, Naveen Raman, Umang Bhatt, Mateja Jamnik, Ilia Sucholutsky, Adrian Weller, Krishnamurthy Dvijotham

We study how existing concept-based models deal with uncertain interventions from humans using two novel datasets: UMNIST, a visual dataset with controlled simulated uncertainty based on the MNIST dataset, and CUB-S, a relabeling of the popular CUB concept dataset with rich, densely-annotated soft labels from humans.

Decision Making

Towards Robust Metrics for Concept Representation Evaluation

1 code implementation • 25 Jan 2023 • Mateo Espinosa Zarlenga, Pietro Barbiero, Zohreh Shams, Dmitry Kazhdan, Umang Bhatt, Adrian Weller, Mateja Jamnik

In this paper, we show that such metrics are not appropriate for concept learning and propose novel metrics for evaluating the purity of concept representations in both approaches.

Benchmarking • Disentanglement

Human-in-the-Loop Mixup

1 code implementation • 2 Nov 2022 • Katherine M. Collins, Umang Bhatt, Weiyang Liu, Vihari Piratla, Ilia Sucholutsky, Bradley Love, Adrian Weller

We focus on the synthetic data used in mixup: a powerful regularizer shown to improve model robustness, generalization, and calibration.
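
For reference, mixup builds each synthetic training example as a convex combination of two inputs and their labels; a minimal NumPy sketch (the function name is ours):

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2, rng=np.random.default_rng()):
    """Blend two inputs and their one-hot labels with a Beta(alpha, alpha) weight."""
    lam = rng.beta(alpha, alpha)
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2
```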

Iterative Teaching by Data Hallucination

1 code implementation • 31 Oct 2022 • Zeju Qiu, Weiyang Liu, Tim Z. Xiao, Zhen Liu, Umang Bhatt, Yucen Luo, Adrian Weller, Bernhard Schölkopf

We consider the problem of iterative machine teaching, where a teacher sequentially provides examples based on the status of a learner under a discrete input space (i.e., a pool of finite samples), which greatly limits the teacher's capability.

Hallucination
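
To make the pool-based constraint concrete, here is a toy greedy teacher for a least-squares learner: at each step it scans the finite pool and provides the example whose gradient update moves the learner closest to the target parameters (our illustrative setup, not the paper's data-hallucination method):

```python
import numpy as np

def teach_from_pool(pool_X, pool_y, w_star, lr=0.1, steps=50):
    """Greedy pool-based machine teaching for a least-squares learner (toy sketch)."""
    w = np.zeros_like(w_star)
    for _ in range(steps):
        grads = (pool_X @ w - pool_y)[:, None] * pool_X  # per-example gradients
        candidates = w - lr * grads                      # learner state after each example
        best = np.argmin(np.linalg.norm(candidates - w_star, axis=1))
        w = candidates[best]                             # teach the most helpful example
    return w
```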

Uncertainty Quantification with Pre-trained Language Models: A Large-Scale Empirical Analysis

1 code implementation • 10 Oct 2022 • Yuxin Xiao, Paul Pu Liang, Umang Bhatt, Willie Neiswanger, Ruslan Salakhutdinov, Louis-Philippe Morency

In particular, there are various considerations behind the pipeline: (1) the choice and (2) the size of PLM, (3) the choice of uncertainty quantifier, (4) the choice of fine-tuning loss, and many more.

Uncertainty Quantification
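
As one point in that design space, a common uncertainty quantifier is Monte Carlo dropout: average softmax outputs over stochastic forward passes and report predictive entropy. A generic PyTorch sketch (not the paper's exact pipeline):

```python
import torch

def mc_dropout_entropy(model, x, n_samples=20):
    """Predictive entropy under MC dropout (one generic uncertainty quantifier)."""
    model.train()  # keep dropout layers stochastic at inference time
    with torch.no_grad():
        probs = torch.stack([model(x).softmax(-1) for _ in range(n_samples)]).mean(0)
    return -(probs * probs.clamp_min(1e-12).log()).sum(-1)
```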

Towards the Use of Saliency Maps for Explaining Low-Quality Electrocardiograms to End Users

no code implementations • 6 Jul 2022 • Ana Lucic, Sheeraz Ahmad, Amanda Furtado Brinhosa, Vera Liao, Himani Agrawal, Umang Bhatt, Krishnaram Kenthapadi, Alice Xiang, Maarten de Rijke, Nicholas Drabowski

In this paper, we report on ongoing work regarding (i) the development of an AI system for flagging and explaining low-quality medical images in real time, (ii) an interview study to understand the explanation needs of stakeholders using the AI system at OurCompany, and (iii) a longitudinal user study design to examine the effect of including explanations on the workflow of the technicians in our clinics.

Explainable Artificial Intelligence (XAI)

Eliciting and Learning with Soft Labels from Every Annotator

1 code implementation • 2 Jul 2022 • Katherine M. Collins, Umang Bhatt, Adrian Weller

Our elicitation methodology therefore shows nuanced promise in enabling practitioners to enjoy the benefits of improved model performance and reliability with fewer annotators, and serves as a guide for future dataset curators on the benefits of leveraging richer information, such as categorical uncertainty, from individual annotators.
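
Training on such elicited labels typically just swaps one-hot targets for annotator soft labels in the cross-entropy; a minimal NumPy sketch of that generic loss (not the paper's released code):

```python
import numpy as np

def soft_cross_entropy(logits, soft_labels):
    """Cross-entropy against per-example soft label distributions."""
    z = logits - logits.max(axis=-1, keepdims=True)  # numerical stability
    log_probs = z - np.log(np.exp(z).sum(axis=-1, keepdims=True))
    return -(soft_labels * log_probs).sum(axis=-1).mean()
```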

Perspectives on Incorporating Expert Feedback into Model Updates

no code implementations • 13 May 2022 • Valerie Chen, Umang Bhatt, Hoda Heidari, Adrian Weller, Ameet Talwalkar

A practitioner may receive feedback from an expert at the observation- or domain-level, and convert this feedback into updates to the dataset, loss function, or parameter space.

Approximating Full Conformal Prediction at Scale via Influence Functions

1 code implementation • 2 Feb 2022 • Javier Abad, Umang Bhatt, Adrian Weller, Giovanni Cherubin

We prove that our method is a consistent approximation of full CP, and empirically show that the approximation error becomes smaller as the training set increases; e.g., for $10^{3}$ training points the two methods output p-values that are $<10^{-3}$ apart: a negligible error for any practical application.

Conformal Prediction
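
The quantity being approximated is the full-CP p-value, which ordinarily requires retraining with the test point (and each candidate label) included; a sketch of the exact computation that the influence-function approximation sidesteps:

```python
import numpy as np

def full_cp_pvalue(nonconformity_train, nonconformity_test):
    """p-value of a candidate label: rank of the test point's nonconformity score.

    Scores must come from the model refit with the test point included, which
    is exactly the expensive step the paper approximates.
    """
    scores = np.append(nonconformity_train, nonconformity_test)
    return np.mean(scores >= nonconformity_test)
```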

Diverse, Global and Amortised Counterfactual Explanations for Uncertainty Estimates

no code implementations • 5 Dec 2021 • Dan Ley, Umang Bhatt, Adrian Weller

To interpret uncertainty estimates from differentiable probabilistic models, recent work has proposed generating a single Counterfactual Latent Uncertainty Explanation (CLUE) for a given data point where the model is uncertain, identifying a single, on-manifold change to the input such that the model becomes more certain in its prediction.

counterfactual
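
At its core, a CLUE is found by gradient descent in the latent space of a deep generative model, trading off the classifier's uncertainty against distance from the original input. A hedged PyTorch sketch (module names and the loss weighting are ours; the paper's objective has more detail):

```python
import torch

def clue(z0, decoder, predictor, steps=200, lr=0.05, dist_weight=0.1):
    """Search latent space for an on-manifold input the model is more certain about."""
    x0 = decoder(z0).detach()            # original reconstruction, kept fixed
    z = z0.clone().requires_grad_(True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        x = decoder(z)
        probs = predictor(x).softmax(-1)
        entropy = -(probs * probs.clamp_min(1e-12).log()).sum(-1).mean()
        loss = entropy + dist_weight * (x - x0).abs().mean()  # certainty + proximity
        opt.zero_grad(); loss.backward(); opt.step()
    return decoder(z).detach()
```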

On The Quality Assurance Of Concept-Based Representations

no code implementations • 29 Sep 2021 • Mateo Espinosa Zarlenga, Pietro Barbiero, Zohreh Shams, Dmitry Kazhdan, Umang Bhatt, Mateja Jamnik

Recent work on Explainable AI has focused on concept-based explanations, where deep learning models are explained in terms of high-level units of information, referred to as concepts.

Disentanglement

DIVINE: Diverse Influential Training Points for Data Visualization and Model Refinement

1 code implementation • 13 Jul 2021 • Umang Bhatt, Isabel Chien, Muhammad Bilal Zafar, Adrian Weller

In this work, we take a step towards finding influential training points that also represent the training data well.

Data Visualization • Fairness
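
One way to read that goal: rather than ranking training points by influence alone, select a set that is jointly influential and diverse. A hypothetical greedy selection (DIVINE's actual objective may differ):

```python
import numpy as np

def pick_diverse_influential(influence, embeddings, k, gamma=1.0):
    """Greedily pick k points scoring high on influence plus distance to picks so far."""
    chosen = []
    for _ in range(k):
        diversity = np.array([
            min((np.linalg.norm(embeddings[i] - embeddings[j]) for j in chosen),
                default=0.0)
            for i in range(len(influence))])
        score = influence + gamma * diversity
        score[chosen] = -np.inf  # never pick the same point twice
        chosen.append(int(np.argmax(score)))
    return chosen
```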

Do Concept Bottleneck Models Learn as Intended?

no code implementations • 10 May 2021 • Andrei Margeloiu, Matthew Ashman, Umang Bhatt, Yanzhi Chen, Mateja Jamnik, Adrian Weller

Concept bottleneck models map from raw inputs to concepts, and then from concepts to targets.
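
That two-stage mapping is straightforward to write down; a minimal PyTorch sketch of a generic concept bottleneck model (the architecture choices here are ours):

```python
import torch.nn as nn

class ConceptBottleneck(nn.Module):
    """Raw inputs -> k interpretable concepts -> target prediction (generic sketch)."""

    def __init__(self, in_dim, n_concepts, n_classes):
        super().__init__()
        self.input_to_concepts = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, n_concepts))
        self.concepts_to_target = nn.Linear(n_concepts, n_classes)

    def forward(self, x):
        concepts = self.input_to_concepts(x).sigmoid()  # predicted concept activations
        return self.concepts_to_target(concepts), concepts
```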

δ-CLUE: Diverse Sets of Explanations for Uncertainty Estimates

no code implementations • 13 Apr 2021 • Dan Ley, Umang Bhatt, Adrian Weller

To interpret uncertainty estimates from differentiable probabilistic models, recent work has proposed generating Counterfactual Latent Uncertainty Explanations (CLUEs).

counterfactual

Machine Learning Explainability for External Stakeholders

no code implementations • 10 Jul 2020 • Umang Bhatt, McKane Andrus, Adrian Weller, Alice Xiang

As machine learning is increasingly deployed in high-stakes contexts affecting people's livelihoods, there have been growing calls to open the black box and to make machine learning algorithms more explainable.

BIG-bench Machine Learning

Evaluating and Aggregating Feature-based Model Explanations

no code implementations • 1 May 2020 • Umang Bhatt, Adrian Weller, José M. F. Moura

A feature-based model explanation denotes how much each input feature contributes to a model's output for a given data point.
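
A simple concrete instance of such an explanation is gradient-times-input, which scores each feature by the local gradient scaled by the feature's value; a generic PyTorch sketch (the paper evaluates and aggregates methods of this kind rather than proposing this one):

```python
import torch

def grad_times_input(model, x, target_class):
    """Per-feature attribution: d(output_class)/d(input) * input."""
    x = x.clone().requires_grad_(True)
    model(x)[..., target_class].sum().backward()
    return (x.grad * x).detach()
```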

Towards Aggregating Weighted Feature Attributions

no code implementations • 20 Jan 2019 • Umang Bhatt, Pradeep Ravikumar, Jose M. F. Moura

Current approaches for explaining machine learning models fall into two distinct classes: antecedent event influence and value attribution.

Attribute

The Impact of Humanoid Affect Expression on Human Behavior in a Game-Theoretic Setting

1 code implementation • 10 Jun 2018 • Aaron M. Roth, Umang Bhatt, Tamara Amin, Afsaneh Doryab, Fei Fang, Manuela Veloso

In this pilot study, we investigate (1) in what way a robot can express a certain mood to influence a human's decision-making behavioral model; (2) how and to what extent the human will be influenced in a game-theoretic setting.

Decision Making
