Search Results for author: Mateo Espinosa Zarlenga

Found 11 papers, 7 papers with code

Efficient Bias Mitigation Without Privileged Information

no code implementations • 26 Sep 2024 • Mateo Espinosa Zarlenga, Swami Sankaranarayanan, Jerone T. A. Andrews, Zohreh Shams, Mateja Jamnik, Alice Xiang

Deep neural networks trained via empirical risk minimisation often exhibit significant performance disparities across groups, particularly when group and task labels are spuriously correlated (e.g., "grassy background" and "cows").

Model Selection
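
The group disparity described in this abstract is usually quantified via per-group and worst-group accuracy. The sketch below shows one way to compute these for a trained classifier; the `model`, data loader, and `(x, y, g)` group annotations are illustrative assumptions, not part of the paper's released code.

```python
import torch
from collections import defaultdict

@torch.no_grad()
def group_accuracies(model, loader, device="cpu"):
    # Accumulate correct/total counts separately for each group id g
    correct, total = defaultdict(int), defaultdict(int)
    model.eval()
    for x, y, g in loader:                      # g = group id (e.g. background type)
        preds = model(x.to(device)).argmax(dim=1).cpu()
        for yi, gi, pi in zip(y, g, preds):
            correct[int(gi)] += int(pi == yi)
            total[int(gi)] += 1
    accs = {grp: correct[grp] / total[grp] for grp in total}
    return accs, min(accs.values())             # per-group and worst-group accuracy

# accs, worst_group_acc = group_accuracies(model, test_loader)
```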

Understanding Inter-Concept Relationships in Concept-Based Models

1 code implementation • 28 May 2024 • Naveen Raman, Mateo Espinosa Zarlenga, Mateja Jamnik

Concept-based explainability methods provide insight into deep learning systems by constructing explanations using human-understandable concepts.

Causal Concept Graph Models: Beyond Causal Opacity in Deep Learning

no code implementations • 26 May 2024 • Gabriele Dominici, Pietro Barbiero, Mateo Espinosa Zarlenga, Alberto Termine, Martin Gjoreski, Giuseppe Marra, Marc Langheinrich

Causal opacity denotes the difficulty in understanding the "hidden" causal structure underlying the decisions of deep neural network (DNN) models.

counterfactual • Decision Making +2

Do Concept Bottleneck Models Respect Localities?

1 code implementation • 2 Jan 2024 • Naveen Raman, Mateo Espinosa Zarlenga, Juyeon Heo, Mateja Jamnik

These models require accurate concept predictors, yet the faithfulness of existing concept predictors to their underlying concepts is unclear.
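
One way to probe the kind of faithfulness question raised here (not necessarily the paper's exact protocol) is a locality test: perturb input regions that should be irrelevant to a concept and check whether its prediction moves. The `concept_model` and `mask` below are assumed, illustrative inputs.

```python
import torch

@torch.no_grad()
def locality_shift(concept_model, x, mask, c_idx, noise_std=0.5):
    """mask: 1 inside the region relevant to concept c_idx, 0 elsewhere."""
    base = torch.sigmoid(concept_model(x))[:, c_idx]
    noise = noise_std * torch.randn_like(x) * (1 - mask)   # perturb irrelevant pixels only
    pert = torch.sigmoid(concept_model(x + noise))[:, c_idx]
    return (pert - base).abs().mean()   # a large shift suggests a non-local concept predictor
```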

Learning to Receive Help: Intervention-Aware Concept Embedding Models

1 code implementation • NeurIPS 2023 • Mateo Espinosa Zarlenga, Katherine M. Collins, Krishnamurthy Dvijotham, Adrian Weller, Zohreh Shams, Mateja Jamnik

To address this, we propose Intervention-aware Concept Embedding models (IntCEMs), a novel CBM-based architecture and training paradigm that improves a model's receptiveness to test-time interventions.
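
For intuition, the test-time interventions IntCEMs are trained to exploit can be sketched on a plain concept bottleneck model (concept encoder `g`, label head `f`): an expert overwrites a subset of predicted concepts with ground-truth values before the label is recomputed. This is a minimal illustration, not IntCEM's architecture or training procedure; all names are assumptions.

```python
import torch

@torch.no_grad()
def intervene_and_predict(g, f, x, true_concepts, intervened_idxs):
    c_hat = torch.sigmoid(g(x))                  # predicted concept probabilities
    c_int = c_hat.clone()
    c_int[:, intervened_idxs] = true_concepts[:, intervened_idxs]  # expert correction
    return f(c_int)                              # label prediction after the intervention
```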

Human Uncertainty in Concept-Based AI Systems

no code implementations • 22 Mar 2023 • Katherine M. Collins, Matthew Barker, Mateo Espinosa Zarlenga, Naveen Raman, Umang Bhatt, Mateja Jamnik, Ilia Sucholutsky, Adrian Weller, Krishnamurthy Dvijotham

We study how existing concept-based models deal with uncertain interventions from humans using two novel datasets: UMNIST, a visual dataset with controlled simulated uncertainty based on the MNIST dataset, and CUB-S, a relabeling of the popular CUB concept dataset with rich, densely-annotated soft labels from humans.

Decision Making
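
A minimal sketch of consuming soft (probabilistic) concept labels of the kind CUB-S provides: binary cross-entropy accepts targets in [0, 1] directly, so annotator uncertainty can be kept rather than thresholded away. Variable names are assumed for illustration.

```python
import torch.nn.functional as F

def soft_concept_loss(concept_logits, soft_concept_labels):
    # soft_concept_labels: float tensor in [0, 1], e.g. annotator agreement rates
    return F.binary_cross_entropy_with_logits(concept_logits, soft_concept_labels)
```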

Towards Robust Metrics for Concept Representation Evaluation

1 code implementation • 25 Jan 2023 • Mateo Espinosa Zarlenga, Pietro Barbiero, Zohreh Shams, Dmitry Kazhdan, Umang Bhatt, Adrian Weller, Mateja Jamnik

In this paper, we show that such metrics are not appropriate for concept learning and propose novel metrics for evaluating the purity of concept representations in both approaches.

Benchmarking • Disentanglement
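
A rough illustration of the intuition behind concept-purity metrics (not the paper's exact formulation): a concept's learned representation should predict its own label well and other concepts' labels poorly. The probe below, with assumed `rep_i` and `labels_j` arrays, measures that leakage; in practice a held-out split would be used.

```python
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def probe_auc(rep_i, labels_j):
    """AUC of a linear probe predicting concept j's labels from concept i's representation."""
    clf = LogisticRegression(max_iter=1000).fit(rep_i, labels_j)
    return roc_auc_score(labels_j, clf.predict_proba(rep_i)[:, 1])

# A high probe_auc(rep_i, labels_j) for j != i signals an impure (leaky) concept representation.
```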

Efficient Decompositional Rule Extraction for Deep Neural Networks

1 code implementation • 24 Nov 2021 • Mateo Espinosa Zarlenga, Zohreh Shams, Mateja Jamnik

In recent years, there has been significant work on increasing both interpretability and debuggability of a Deep Neural Network (DNN) by extracting a rule-based model that approximates its decision boundary.
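
For intuition only: the simplest (pedagogical) way to approximate a DNN's decision boundary with rules is to fit a decision tree to the network's own predictions and read the tree out as IF-THEN rules. The paper's method is decompositional (it also exploits intermediate layers), so this sketch is a baseline illustration, not the proposed algorithm; `model` and `X` are assumptions.

```python
import torch
from sklearn.tree import DecisionTreeClassifier, export_text

@torch.no_grad()
def extract_surrogate_rules(model, X, max_depth=4):
    y_hat = model(torch.as_tensor(X, dtype=torch.float32)).argmax(dim=1).numpy()
    tree = DecisionTreeClassifier(max_depth=max_depth).fit(X, y_hat)  # mimic the DNN's labels
    return export_text(tree)            # human-readable IF-THEN rules

# print(extract_surrogate_rules(model, X_train))
```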

On The Quality Assurance Of Concept-Based Representations

no code implementations • 29 Sep 2021 • Mateo Espinosa Zarlenga, Pietro Barbiero, Zohreh Shams, Dmitry Kazhdan, Umang Bhatt, Mateja Jamnik

Recent work on Explainable AI has focused on concept-based explanations, where deep learning models are explained in terms of high-level units of information, referred to as concepts.

Disentanglement
