Search Results for author: Mateo Espinosa Zarlenga

Found 8 papers, 5 papers with code

Do Concept Bottleneck Models Obey Locality?

no code implementations • 2 Jan 2024 • Naveen Raman, Mateo Espinosa Zarlenga, Juyeon Heo, Mateja Jamnik

Deep learning models trained under the concept-bottleneck paradigm heavily rely on the assumption that neural networks can learn to predict the presence or absence of a given concept independently of other concepts.
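To make that independence assumption concrete, here is a minimal concept bottleneck model sketch in PyTorch. The class name, layer sizes, and dimensions are illustrative assumptions, not the paper's architecture: every concept logit comes from one shared encoder, so nothing structural forces one concept to be predicted independently of the others.

```python
import torch
import torch.nn as nn

class ConceptBottleneckModel(nn.Module):
    """Minimal CBM sketch: input -> concept probabilities -> label logits."""

    def __init__(self, in_dim, n_concepts, n_classes):
        super().__init__()
        # A single shared encoder emits one logit per concept, so the
        # architecture itself does not enforce per-concept independence.
        self.concept_encoder = nn.Sequential(
            nn.Linear(in_dim, 64),
            nn.ReLU(),
            nn.Linear(64, n_concepts),
        )
        self.label_predictor = nn.Linear(n_concepts, n_classes)

    def forward(self, x):
        concept_probs = torch.sigmoid(self.concept_encoder(x))
        return concept_probs, self.label_predictor(concept_probs)

model = ConceptBottleneckModel(in_dim=32, n_concepts=5, n_classes=3)
concepts, labels = model(torch.randn(8, 32))
print(concepts.shape, labels.shape)  # torch.Size([8, 5]) torch.Size([8, 3])
```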

Learning to Receive Help: Intervention-Aware Concept Embedding Models

1 code implementation • NeurIPS 2023 • Mateo Espinosa Zarlenga, Katherine M. Collins, Krishnamurthy Dvijotham, Adrian Weller, Zohreh Shams, Mateja Jamnik

To address this, we propose Intervention-aware Concept Embedding models (IntCEMs), a novel CBM-based architecture and training paradigm that improves a model's receptiveness to test-time interventions.
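The sketch below shows only the generic test-time intervention mechanism that IntCEMs are trained to be receptive to: an expert overwrites some of the model's predicted concepts with ground-truth values before the label predictor runs. The function and variable names are hypothetical, and this is a plain-CBM intervention, not the paper's training objective.

```python
import torch

def intervene(concept_probs, true_concepts, mask):
    """Overwrite predicted concepts with expert values wherever mask == 1.

    concept_probs:  (batch, n_concepts) model-predicted probabilities
    true_concepts:  (batch, n_concepts) expert-provided values in [0, 1]
    mask:           (batch, n_concepts) 1.0 where the expert intervenes
    """
    return mask * true_concepts + (1.0 - mask) * concept_probs

probs = torch.rand(8, 5)                      # model predictions
truth = torch.randint(0, 2, (8, 5)).float()   # expert ground truth
mask = torch.zeros(8, 5)
mask[:, [0, 3]] = 1.0                         # intervene on concepts 0 and 3
corrected = intervene(probs, truth, mask)
# label_logits = model.label_predictor(corrected)  # re-run the label head
```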

Human Uncertainty in Concept-Based AI Systems

no code implementations • 22 Mar 2023 • Katherine M. Collins, Matthew Barker, Mateo Espinosa Zarlenga, Naveen Raman, Umang Bhatt, Mateja Jamnik, Ilia Sucholutsky, Adrian Weller, Krishnamurthy Dvijotham

We study how existing concept-based models deal with uncertain interventions from humans using two novel datasets: UMNIST, a visual dataset with controlled simulated uncertainty based on the MNIST dataset, and CUB-S, a relabeling of the popular CUB concept dataset with rich, densely-annotated soft labels from humans.
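One plausible way such densely annotated soft labels can enter training is as probabilistic targets for the concept predictors. This is a generic sketch under that assumption, with made-up values, not the datasets' actual format or the paper's pipeline.

```python
import torch
import torch.nn.functional as F

# Soft concept annotations in [0, 1] encode annotator uncertainty,
# e.g. 0.7 means "probably present". Values here are made up.
soft_labels = torch.tensor([[0.9, 0.1, 0.5],
                            [0.2, 0.8, 0.7],
                            [1.0, 0.0, 0.3]])
concept_probs = torch.rand(3, 3)  # stand-in for a model's concept predictions

# binary_cross_entropy accepts soft targets directly, so uncertain human
# labels can replace hard 0/1 concept labels with no other changes.
loss = F.binary_cross_entropy(concept_probs, soft_labels)
```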

Decision Making

Towards Robust Metrics for Concept Representation Evaluation

1 code implementation • 25 Jan 2023 • Mateo Espinosa Zarlenga, Pietro Barbiero, Zohreh Shams, Dmitry Kazhdan, Umang Bhatt, Adrian Weller, Mateja Jamnik

In this paper, we show that such metrics are not appropriate for concept learning and propose novel metrics for evaluating the purity of concept representations in both approaches.
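As a rough illustration of what a purity-style metric measures, the sketch below probes how well the representation learned for one concept predicts every other concept; off-diagonal scores far above chance signal impurity. This is a generic inter-concept predictability probe in the spirit of the paper, not its exact metric definition, and all names and data are illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def impurity_matrix(concept_reprs, concept_labels):
    """Score how well concept i's representation predicts concept j.

    concept_reprs:  (n_samples, n_concepts, repr_dim) per-concept embeddings
    concept_labels: (n_samples, n_concepts) binary ground-truth labels

    Off-diagonal AUCs far above 0.5 suggest a concept's representation
    has absorbed information about other, unrelated concepts.
    """
    n_concepts = concept_labels.shape[1]
    scores = np.zeros((n_concepts, n_concepts))
    for i in range(n_concepts):
        for j in range(n_concepts):
            probe = LogisticRegression(max_iter=1000)
            probe.fit(concept_reprs[:, i, :], concept_labels[:, j])
            preds = probe.predict_proba(concept_reprs[:, i, :])[:, 1]
            scores[i, j] = roc_auc_score(concept_labels[:, j], preds)
    return scores

reprs = np.random.randn(200, 3, 8)                   # dummy embeddings
labels = (np.random.rand(200, 3) > 0.5).astype(int)  # dummy concept labels
print(np.round(impurity_matrix(reprs, labels), 2))
```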

Benchmarking • Disentanglement

Efficient Decompositional Rule Extraction for Deep Neural Networks

1 code implementation • 24 Nov 2021 • Mateo Espinosa Zarlenga, Zohreh Shams, Mateja Jamnik

In recent years, there has been significant work on increasing both interpretability and debuggability of a Deep Neural Network (DNN) by extracting a rule-based model that approximates its decision boundary.
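For orientation, here is a toy decompositional-style extraction: fit an interpretable surrogate (a shallow decision tree) on a hidden layer's activations against the network's own predictions, then read out IF-THEN rules. This is a generic sketch of the decompositional idea, not the paper's algorithm; the network and depth are arbitrary assumptions.

```python
import torch
import torch.nn as nn
from sklearn.tree import DecisionTreeClassifier, export_text

# A toy DNN (untrained here) whose decision boundary we approximate.
net = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 2))

x = torch.randn(500, 4)
with torch.no_grad():
    hidden = net[1](net[0](x))    # intermediate (hidden-layer) activations
    preds = net(x).argmax(dim=1)  # the network's own predicted labels

# Decompositional flavour: fit the surrogate on hidden activations
# rather than raw inputs, then read out IF-THEN rules from the tree.
tree = DecisionTreeClassifier(max_depth=3)
tree.fit(hidden.numpy(), preds.numpy())
print(export_text(tree, feature_names=[f"h{i}" for i in range(16)]))
```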

On The Quality Assurance Of Concept-Based Representations

no code implementations • 29 Sep 2021 • Mateo Espinosa Zarlenga, Pietro Barbiero, Zohreh Shams, Dmitry Kazhdan, Umang Bhatt, Mateja Jamnik

Recent work on Explainable AI has focused on concept-based explanations, where deep learning models are explained in terms of high-level units of information, referred to as concepts.

Disentanglement
