no code implementations • 24 Jun 2024 • Katherine M. Collins, Valerie Chen, Ilia Sucholutsky, Hannah Rose Kirk, Malak Sadek, Holli Sargeant, Ameet Talwalkar, Adrian Weller, Umang Bhatt
Through a user study with real humans, we observe shifts in user behavior when a friction is imposed on LLM use, in the context of a multi-topic question-answering task, a representative task for which people may use LLMs, e.g., in education and information retrieval.
1 code implementation • 12 Jun 2024 • Sanyam Kapoor, Nate Gruver, Manley Roberts, Katherine Collins, Arka Pal, Umang Bhatt, Adrian Weller, Samuel Dooley, Micah Goldblum, Andrew Gordon Wilson
We show that a thousand graded examples are sufficient to outperform baseline methods and that training through the features of a model is necessary for good performance and tractable for large open-source models when using LoRA.
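For readers unfamiliar with LoRA, a minimal sketch of the low-rank adaptation it refers to (variable names and hyperparameters are our own illustrative choices, not the paper's code):

```python
# Minimal LoRA sketch: a frozen linear layer plus a trainable low-rank update.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze the pretrained weights
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x):
        # y = Wx + (alpha/r) * B A x; only A and B receive gradients
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)
```

Only the small matrices `A` and `B` are trained, which is what keeps fine-tuning tractable for large open-source models.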
no code implementations • 6 Jun 2024 • Ilia Sucholutsky, Katherine M. Collins, Maya Malaviya, Nori Jacoby, Weiyang Liu, Theodore R. Sumers, Michalis Korakakis, Umang Bhatt, Mark Ho, Joshua B. Tenenbaum, Brad Love, Zachary A. Pardos, Adrian Weller, Thomas L. Griffiths
A good teacher should not only be knowledgeable but should also be able to communicate in a way that the student understands -- to share the student's representation of the world.
no code implementations • 28 Feb 2024 • Umang Bhatt, Holli Sargeant
Algorithmic resignation is a strategic approach for managing the use of artificial intelligence (AI) by embedding governance directly into AI systems.
no code implementations • 6 Feb 2024 • Sreejan Kumar, Raja Marjieh, Byron Zhang, Declan Campbell, Michael Y. Hu, Umang Bhatt, Brenden Lake, Thomas L. Griffiths
To investigate the effect of language on the formation of abstractions, we implement a novel multimodal serial reproduction framework by asking people who receive a visual stimulus to reproduce it in a linguistic format, and vice versa.
no code implementations • 28 Jul 2023 • Matthew Barker, Emma Kallina, Dhananjay Ashok, Katherine M. Collins, Ashley Casovan, Adrian Weller, Ameet Talwalkar, Valerie Chen, Umang Bhatt
We propose FeedbackLogs, addenda to existing documentation of ML pipelines, to track the input of multiple stakeholders.
no code implementations • 14 Jun 2023 • Matthew Barker, Katherine M. Collins, Krishnamurthy Dvijotham, Adrian Weller, Umang Bhatt
Concept-based models perform prediction using a set of concepts that are interpretable to stakeholders.
1 code implementation • 2 Jun 2023 • Katherine M. Collins, Albert Q. Jiang, Simon Frieder, Lionel Wong, Miri Zilka, Umang Bhatt, Thomas Lukasiewicz, Yuhuai Wu, Joshua B. Tenenbaum, William Hart, Timothy Gowers, Wenda Li, Adrian Weller, Mateja Jamnik
There is much excitement about the opportunity to harness the power of large language models (LLMs) when building problem-solving assistants.
no code implementations • 13 Apr 2023 • Umang Bhatt, Valerie Chen, Katherine M. Collins, Parameswaran Kamalaruban, Emma Kallina, Adrian Weller, Ameet Talwalkar
$\texttt{Modiste}$ leverages stochastic contextual bandit techniques to personalize a decision support policy for each decision-maker and supports extensions to the multi-objective setting to account for auxiliary objectives like the cost of support.
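A hedged sketch of the contextual-bandit flavor of such a policy, assuming discrete contexts and an epsilon-greedy rule (Modiste's actual algorithm and interface may differ):

```python
# Illustrative epsilon-greedy contextual bandit over forms of decision support.
from collections import defaultdict
import numpy as np

class EpsilonGreedyPolicy:
    """Per-context choice among arms, e.g., no support / show prediction /
    show explanation; rewards can fold in the cost of each support form."""

    def __init__(self, n_arms, epsilon=0.1):
        self.n_arms, self.epsilon = n_arms, epsilon
        self.counts = defaultdict(lambda: np.zeros(n_arms))
        self.values = defaultdict(lambda: np.zeros(n_arms))  # mean reward

    def select(self, context):
        if np.random.rand() < self.epsilon:
            return np.random.randint(self.n_arms)  # explore
        return int(np.argmax(self.values[context]))  # exploit

    def update(self, context, arm, reward):
        c, v = self.counts[context], self.values[context]
        c[arm] += 1
        v[arm] += (reward - v[arm]) / c[arm]  # running mean update
```

Setting `reward = -(decision_loss + cost_of_arm)` is one simple way to reflect the multi-objective extension mentioned above.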
no code implementations • 22 Mar 2023 • Katherine M. Collins, Matthew Barker, Mateo Espinosa Zarlenga, Naveen Raman, Umang Bhatt, Mateja Jamnik, Ilia Sucholutsky, Adrian Weller, Krishnamurthy Dvijotham
We study how existing concept-based models deal with uncertain interventions from humans using two novel datasets: UMNIST, a visual dataset with controlled simulated uncertainty based on the MNIST dataset, and CUB-S, a relabeling of the popular CUB concept dataset with rich, densely-annotated soft labels from humans.
1 code implementation • 25 Jan 2023 • Mateo Espinosa Zarlenga, Pietro Barbiero, Zohreh Shams, Dmitry Kazhdan, Umang Bhatt, Adrian Weller, Mateja Jamnik
In this paper, we show that such metrics are not appropriate for concept learning and propose novel metrics for evaluating the purity of concept representations in both approaches.
1 code implementation • 2 Nov 2022 • Katherine M. Collins, Umang Bhatt, Weiyang Liu, Vihari Piratla, Ilia Sucholutsky, Bradley Love, Adrian Weller
We focus on the synthetic data used in mixup: a powerful regularizer shown to improve model robustness, generalization, and calibration.
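Mixup itself is simple to state; a minimal sketch of the standard formulation (the synthetic points this paper studies are exactly these convex combinations):

```python
# Standard mixup: convex combinations of input pairs and their soft labels.
import numpy as np

def mixup_batch(x, y_onehot, alpha=0.2):
    """Return mixed inputs and soft labels for one training batch."""
    lam = np.random.beta(alpha, alpha)
    perm = np.random.permutation(len(x))
    x_mix = lam * x + (1 - lam) * x[perm]
    y_mix = lam * y_onehot + (1 - lam) * y_onehot[perm]
    return x_mix, y_mix
```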
no code implementations • 2 Nov 2022 • Ilia Sucholutsky, Ruairidh M. Battleday, Katherine M. Collins, Raja Marjieh, Joshua C. Peterson, Pulkit Singh, Umang Bhatt, Nori Jacoby, Adrian Weller, Thomas L. Griffiths
Supervised learning typically focuses on learning transferable representations from training examples annotated by humans.
1 code implementation • 31 Oct 2022 • Zeju Qiu, Weiyang Liu, Tim Z. Xiao, Zhen Liu, Umang Bhatt, Yucen Luo, Adrian Weller, Bernhard Schölkopf
We consider the problem of iterative machine teaching, where a teacher sequentially provides examples based on the status of a learner under a discrete input space (i.e., a finite pool of samples), which greatly limits the teacher's capability.
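As a point of reference, a sketch of the pool-based greedy teacher this discrete setting implies (illustrative only; the paper's contribution is precisely to move beyond selecting from a fixed pool):

```python
# Greedy pool-based teaching for a linear learner under squared loss.
import numpy as np

def teach_step(pool_x, pool_y, w_learner, w_star, lr=0.1):
    """Pick the pool example whose SGD step moves the learner closest to w*."""
    best_i, best_dist = None, np.inf
    for i, (x, y) in enumerate(zip(pool_x, pool_y)):
        grad = (w_learner @ x - y) * x          # squared-loss gradient
        w_next = w_learner - lr * grad
        dist = np.linalg.norm(w_next - w_star)
        if dist < best_dist:
            best_i, best_dist = i, dist
    x, y = pool_x[best_i], pool_y[best_i]
    return w_learner - lr * (w_learner @ x - y) * x, best_i
```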
1 code implementation • 10 Oct 2022 • Yuxin Xiao, Paul Pu Liang, Umang Bhatt, Willie Neiswanger, Ruslan Salakhutdinov, Louis-Philippe Morency
In particular, there are various considerations behind the pipeline: (1) the choice of pre-trained language model (PLM), (2) the size of the PLM, (3) the choice of uncertainty quantifier, (4) the choice of fine-tuning loss, and many more.
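As an example of one such choice, a sketch of a common uncertainty quantifier, MC dropout (illustrative; the paper benchmarks several options):

```python
# MC dropout: average softmax over stochastic forward passes;
# predictive entropy serves as the uncertainty score.
import torch

def mc_dropout_predict(model, x, n_samples=20):
    model.train()  # keep dropout active at inference time
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(x), dim=-1)
                             for _ in range(n_samples)]).mean(0)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(-1)
    return probs, entropy
```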
no code implementations • 6 Jul 2022 • Ana Lucic, Sheeraz Ahmad, Amanda Furtado Brinhosa, Vera Liao, Himani Agrawal, Umang Bhatt, Krishnaram Kenthapadi, Alice Xiang, Maarten de Rijke, Nicholas Drabowski
In this paper, we report on ongoing work regarding (i) the development of an AI system for flagging and explaining low-quality medical images in real-time, (ii) an interview study to understand the explanation needs of stakeholders using the AI system at OurCompany, and (iii) a longitudinal user study design to examine the effect of including explanations on the workflow of the technicians in our clinics.
1 code implementation • 2 Jul 2022 • Katherine M. Collins, Umang Bhatt, Adrian Weller
Our elicitation methodology therefore shows nuanced promise: it can enable practitioners to achieve improved model performance and reliability with fewer annotators, and it serves as a guide for future dataset curators on the benefits of leveraging richer information, such as categorical uncertainty, from individual annotators.
no code implementations • 13 May 2022 • Valerie Chen, Umang Bhatt, Hoda Heidari, Adrian Weller, Ameet Talwalkar
A practitioner may receive feedback from an expert at the observation- or domain-level, and convert this feedback into updates to the dataset, loss function, or parameter space.
no code implementations • 3 May 2022 • Varun Babbar, Umang Bhatt, Adrian Weller
We explore how such prediction sets impact expert decision-making in human-AI teams.
1 code implementation • 2 Feb 2022 • Javier Abad, Umang Bhatt, Adrian Weller, Giovanni Cherubin
We prove that our method is a consistent approximation of full CP, and empirically show that the approximation error becomes smaller as the training set grows; e.g., for $10^{3}$ training points the two methods output p-values that are $<10^{-3}$ apart: a negligible error for any practical application.
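For context, a sketch of the full CP p-value that the method approximates (standard full conformal prediction with a generic nonconformity score; `score_fn` is a hypothetical helper that retrains on the augmented data):

```python
# Full conformal prediction p-value for a candidate test label.
import numpy as np

def full_cp_pvalue(train_x, train_y, test_x, candidate_y, score_fn):
    """Rank the test point's nonconformity score among all scores computed
    after retraining on the training set augmented with (test_x, candidate_y)."""
    xs = np.vstack([train_x, test_x[None]])
    ys = np.append(train_y, candidate_y)
    scores = score_fn(xs, ys)            # retrains on the augmented data
    return np.mean(scores >= scores[-1])
```

The expense of retraining once per candidate label is exactly what motivates approximating full CP.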
no code implementations • 5 Dec 2021 • Dan Ley, Umang Bhatt, Adrian Weller
To interpret uncertainty estimates from differentiable probabilistic models, recent work has proposed generating a single Counterfactual Latent Uncertainty Explanation (CLUE) for a given data point where the model is uncertain: a single, on-manifold change to the input such that the model becomes more certain in its prediction.
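A simplified sketch of the CLUE search (the original uses a specific VAE and Bayesian neural network; `encoder`, `decoder`, and `predictive_entropy` here are assumed stand-ins):

```python
# Gradient descent in a generative model's latent space toward a nearby,
# on-manifold input on which the classifier is more certain.
import torch

def clue(x, encoder, decoder, predictive_entropy, steps=100, lam=1.0, lr=0.05):
    z = encoder(x).detach().requires_grad_(True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        x_cf = decoder(z)
        # minimize uncertainty plus distance to the original input
        loss = predictive_entropy(x_cf) + lam * (x_cf - x).abs().mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return decoder(z).detach()
```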
no code implementations • 29 Sep 2021 • Mateo Espinosa Zarlenga, Pietro Barbiero, Zohreh Shams, Dmitry Kazhdan, Umang Bhatt, Mateja Jamnik
Recent work on Explainable AI has focused on concept-based explanations, where deep learning models are explained in terms of high-level units of information, referred to as concepts.
1 code implementation • 13 Jul 2021 • Umang Bhatt, Isabel Chien, Muhammad Bilal Zafar, Adrian Weller
In this work, we take a step towards finding influential training points that also represent the training data well.
1 code implementation • 10 May 2021 • Andrei Margeloiu, Matthew Ashman, Umang Bhatt, Yanzhi Chen, Mateja Jamnik, Adrian Weller
Concept bottleneck models map from raw inputs to concepts, and then from concepts to targets.
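A minimal structural sketch of such a model (illustrative only), showing why the concept layer is a natural place to inspect, and intervene on, predictions:

```python
# Concept bottleneck: inputs -> predicted concepts -> target.
import torch.nn as nn

class ConceptBottleneck(nn.Module):
    def __init__(self, encoder, n_concepts, n_classes):
        super().__init__()
        self.encoder = encoder                        # x -> concept logits
        self.head = nn.Linear(n_concepts, n_classes)  # concepts -> label

    def forward(self, x, concept_intervention=None):
        concepts = self.encoder(x).sigmoid()
        if concept_intervention is not None:
            concepts = concept_intervention(concepts)  # expert overrides
        return self.head(concepts), concepts
```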
no code implementations • 13 Apr 2021 • Dan Ley, Umang Bhatt, Adrian Weller
To interpret uncertainty estimates from differentiable probabilistic models, recent work has proposed generating Counterfactual Latent Uncertainty Explanations (CLUEs).
no code implementations • 15 Nov 2020 • Umang Bhatt, Javier Antorán, Yunfeng Zhang, Q. Vera Liao, Prasanna Sattigeri, Riccardo Fogliato, Gabrielle Gauthier Melançon, Ranganath Krishnan, Jason Stanley, Omesh Tickoo, Lama Nachman, Rumi Chunara, Madhulika Srikumar, Adrian Weller, Alice Xiang
Explainability attempts to provide reasons for a machine learning model's behavior to stakeholders.
1 code implementation • 13 Oct 2020 • Julius von Kügelgen, Amir-Hossein Karimi, Umang Bhatt, Isabel Valera, Adrian Weller, Bernhard Schölkopf
Algorithmic fairness is typically studied from the perspective of predictions.
no code implementations • 10 Jul 2020 • Umang Bhatt, McKane Andrus, Adrian Weller, Alice Xiang
As machine learning is increasingly deployed in high-stakes contexts affecting people's livelihoods, there have been growing calls to open the black box and to make machine learning algorithms more explainable.
3 code implementations • ICLR 2021 • Javier Antorán, Umang Bhatt, Tameem Adel, Adrian Weller, José Miguel Hernández-Lobato
Both uncertainty estimation and interpretability are important factors for trustworthy machine learning systems.
no code implementations • 1 May 2020 • Umang Bhatt, Adrian Weller, José M. F. Moura
A feature-based model explanation denotes how much each input feature contributes to a model's output for a given data point.
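One simple instance of such an explanation, sketched as gradient-times-input (one of many attribution methods; not the paper's specific aggregation scheme):

```python
# Gradient x input: a per-feature contribution score for one prediction.
import torch

def grad_times_input(model, x, target_class):
    x = x.clone().requires_grad_(True)
    score = model(x)[..., target_class].sum()
    score.backward()
    return (x.grad * x).detach()  # one attribution per input feature
```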
no code implementations • 13 Sep 2019 • Umang Bhatt, Alice Xiang, Shubham Sharma, Adrian Weller, Ankur Taly, Yunhan Jia, Joydeep Ghosh, Ruchir Puri, José M. F. Moura, Peter Eckersley
Yet there is little understanding of how organizations use these methods in practice.
no code implementations • 20 Jan 2019 • Umang Bhatt, Pradeep Ravikumar, José M. F. Moura
Current approaches for explaining machine learning models fall into two distinct classes: antecedent event influence and value attribution.
no code implementations • 20 Jan 2019 • Brian Davis, Umang Bhatt, Kartikeya Bhardwaj, Radu Marculescu, José M. F. Moura
In this paper, we present a new approach to interpret deep learning models.
1 code implementation • 10 Jun 2018 • Aaron M. Roth, Umang Bhatt, Tamara Amin, Afsaneh Doryab, Fei Fang, Manuela Veloso
In this pilot study, we investigate (1) in what way a robot can express a certain mood to influence a human's decision making behavioral model; (2) how and to what extent the human will be influenced in a game theoretic setting.