Search Results for author: Jonathan Crabbé

Found 15 papers, 12 papers with code

DAGnosis: Localized Identification of Data Inconsistencies using Structures

2 code implementations 26 Feb 2024 Nicolas Huynh, Jeroen Berrevoets, Nabeel Seedat, Jonathan Crabbé, Zhaozhi Qian, Mihaela van der Schaar

Identification and appropriate handling of inconsistencies in data at deployment time is crucial to reliably use machine learning models.

Time Series Diffusion in the Frequency Domain

1 code implementation 8 Feb 2024 Jonathan Crabbé, Nicolas Huynh, Jan Stanczuk, Mihaela van der Schaar

We explain this observation by showing that time series from these datasets tend to be more localized in the frequency domain than in the time domain, which makes them easier to model in the former case.

Denoising · Inductive Bias · +1
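
A minimal sketch of the localization comparison the snippet above alludes to: measure how much of a signal's energy sits in its k largest-magnitude samples in the time domain versus its k largest Fourier coefficients. The synthetic seasonal series and the concentration measure are illustrative choices, not the paper's code.

```python
import numpy as np

def energy_concentration(coeffs, k):
    """Fraction of total energy carried by the k largest-magnitude coefficients."""
    energy = np.sort(np.abs(coeffs) ** 2)[::-1]
    return energy[:k].sum() / energy.sum()

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 512, endpoint=False)
# A synthetic seasonal series: two sinusoids plus mild noise.
x = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 12 * t) + 0.1 * rng.normal(size=t.size)
X = np.fft.fft(x, norm="ortho")  # with "ortho" normalization the FFT preserves total energy

k = 10
print("time-domain concentration:     ", energy_concentration(x, k))  # energy spread out -> small
print("frequency-domain concentration:", energy_concentration(X, k))  # a few peaks -> close to 1
```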

TRIAGE: Characterizing and auditing training data for improved regression

2 code implementations NeurIPS 2023 Nabeel Seedat, Jonathan Crabbé, Zhaozhi Qian, Mihaela van der Schaar

Data quality is crucial for robust machine learning algorithms, with the recent interest in data-centric AI emphasizing the importance of training data characterization.

regression

Robust multimodal models have outlier features and encode more concepts

no code implementations 19 Oct 2023 Jonathan Crabbé, Pau Rodríguez, Vaishaal Shankar, Luca Zappella, Arno Blaas

In this work, we bridge this gap by probing the representation spaces of 12 robust multimodal models with various backbones (ResNets and ViTs) and pretraining sets (OpenAI, LAION-400M, LAION-2B, YFCC15M, CC12M and DataComp).

Evaluating the Robustness of Interpretability Methods through Explanation Invariance and Equivariance

2 code implementations NeurIPS 2023 Jonathan Crabbé, Mihaela van der Schaar

Through this rigorous formalism, we derive (1) two metrics to measure the robustness of any interpretability method with respect to the model symmetry group; (2) theoretical robustness guarantees for some popular interpretability methods and (3) a systematic approach to increase the invariance of any interpretability method with respect to a symmetry group.
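
A hedged sketch of what an invariance-style robustness score can look like: compare the explanation of an input with explanations of symmetry-transformed copies of it. The transformation (a random cyclic shift), the saliency method (input gradients), and the cosine-similarity score are illustrative stand-ins, not necessarily the metrics derived in the paper.

```python
import torch

def explanation_invariance(model, explain, x, transform, n_samples=20):
    """Mean cosine similarity between the explanation of x and explanations of
    transformed copies of x; a value near 1 means empirical invariance."""
    e_ref = explain(model, x).flatten()
    sims = []
    for _ in range(n_samples):
        e_t = explain(model, transform(x)).flatten()
        sims.append(torch.nn.functional.cosine_similarity(e_ref, e_t, dim=0))
    return torch.stack(sims).mean()

def grad_explanation(model, x):
    """Plain input-gradient saliency, standing in for any interpretability method."""
    x = x.clone().requires_grad_(True)
    model(x).sum().backward()
    return x.grad.detach()

# Toy usage: an MLP on 64 inputs, probed with random cyclic shifts of the input.
model = torch.nn.Sequential(torch.nn.Linear(64, 32), torch.nn.ReLU(), torch.nn.Linear(32, 1))
x = torch.randn(1, 64)
shift = lambda v: torch.roll(v, shifts=int(torch.randint(1, 64, (1,))), dims=-1)
print("invariance score:", float(explanation_invariance(model, grad_explanation, x, shift)))
```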

TANGOS: Regularizing Tabular Neural Networks through Gradient Orthogonalization and Specialization

2 code implementations 9 Mar 2023 Alan Jeffares, Tennison Liu, Jonathan Crabbé, Fergus Imrie, Mihaela van der Schaar

In this work, we introduce Tabular Neural Gradient Orthogonalization and Specialization (TANGOS), a novel framework for regularization in the tabular setting built on latent unit attributions.
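
A rough sketch of a regularizer built on latent unit attributions, in the spirit of the description above: attribute each latent unit to the input features via gradients, then penalize dense attributions (specialization) and overlap between units' attributions (orthogonalization). This is my illustrative reading, not the authors' implementation.

```python
import torch

def tangos_style_penalty(encoder, x, lambda_spec=1.0, lambda_orth=1.0):
    """Penalize latent units whose input attributions are dense (specialization)
    or overlap with other units' attributions (orthogonalization)."""
    x = x.clone().requires_grad_(True)
    z = encoder(x)                                   # (batch, n_latent)
    attributions = []
    for j in range(z.shape[1]):                      # gradient of each latent unit w.r.t. inputs
        grad_j = torch.autograd.grad(z[:, j].sum(), x, create_graph=True)[0]
        attributions.append(grad_j)
    A = torch.stack(attributions, dim=1)             # (batch, n_latent, n_features)
    specialization = A.abs().mean()                  # encourage sparse attributions
    A_unit = torch.nn.functional.normalize(A, dim=-1)
    cos = A_unit @ A_unit.transpose(1, 2)            # (batch, n_latent, n_latent) cosine overlaps
    off_diag = cos - torch.diag_embed(torch.diagonal(cos, dim1=-2, dim2=-1))
    orthogonalization = off_diag.abs().mean()        # encourage non-overlapping attributions
    return lambda_spec * specialization + lambda_orth * orthogonalization

# Usage: add the penalty to the ordinary task loss during training.
encoder = torch.nn.Sequential(torch.nn.Linear(10, 32), torch.nn.ReLU(), torch.nn.Linear(32, 8))
x = torch.randn(16, 10)
print(tangos_style_penalty(encoder, x))
```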

Data-IQ: Characterizing subgroups with heterogeneous outcomes in tabular data

2 code implementations 24 Oct 2022 Nabeel Seedat, Jonathan Crabbé, Ioana Bica, Mihaela van der Schaar

High model performance, on average, can hide that models may systematically underperform on subgroups of the data.

Model Selection

Concept Activation Regions: A Generalized Framework For Concept-Based Explanations

2 code implementations 22 Sep 2022 Jonathan Crabbé, Mihaela van der Schaar

We further demonstrate empirically that CARs offer (1) more accurate descriptions of how concepts are scattered in the DNN's latent space; (2) global explanations that are closer to human concept annotations and (3) concept-based feature importance that meaningfully relates concepts to each other.

Feature Importance
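
One plausible, simplified way to realize a concept region in latent space: fit a non-linear classifier on latent activations of concept-positive versus concept-negative examples and treat its decision region as the concept's region. The synthetic activations below are placeholders for real DNN features; the kernel choice is an assumption.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Placeholder latent activations; in practice, take them from a chosen layer of the DNN.
H_pos = rng.normal(loc=1.0, size=(200, 32))   # examples exhibiting the concept
H_neg = rng.normal(loc=-1.0, size=(200, 32))  # examples without the concept

H = np.vstack([H_pos, H_neg])
y = np.r_[np.ones(200), np.zeros(200)]

# A kernel classifier delimits a (possibly non-linear) region of latent space for the concept.
car = SVC(kernel="rbf", probability=True).fit(H, y)

h_test = rng.normal(loc=0.8, size=(1, 32))
print("probability the test activation lies in the concept region:",
      car.predict_proba(h_test)[0, 1])
```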

Benchmarking Heterogeneous Treatment Effect Models through the Lens of Interpretability

no code implementations 16 Jun 2022 Jonathan Crabbé, Alicia Curth, Ioana Bica, Mihaela van der Schaar

This allows us to evaluate treatment effect estimators along a new and important dimension that has been overlooked in previous work: We construct a benchmarking environment to empirically investigate the ability of personalized treatment effect models to identify predictive covariates -- covariates that determine differential responses to treatment.

Benchmarking · Feature Importance
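
An illustrative version of that evaluation idea, under assumed details: simulate data in which the treatment effect depends on a single known covariate, fit a simple two-model (T-learner) CATE estimator with random forests, and check whether a feature-importance measure over the estimated effects ranks that covariate first. None of this mirrors the paper's exact benchmark.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n, d = 2000, 5
X = rng.normal(size=(n, d))
T = rng.integers(0, 2, size=n)
tau = 2.0 * X[:, 0]                                  # only covariate 0 is predictive of the effect
y = X[:, 1] + T * tau + rng.normal(scale=0.5, size=n)

# T-learner: one outcome model per treatment arm; CATE estimate = difference of predictions.
m1 = RandomForestRegressor(random_state=0).fit(X[T == 1], y[T == 1])
m0 = RandomForestRegressor(random_state=0).fit(X[T == 0], y[T == 0])
cate = m1.predict(X) - m0.predict(X)

# Fit a surrogate to the estimated CATE and read off which covariates drive it.
surrogate = RandomForestRegressor(random_state=0).fit(X, cate)
ranking = np.argsort(surrogate.feature_importances_)[::-1]
print("covariates ranked by importance for the estimated effect:", ranking)
```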

Data-SUITE: Data-centric identification of in-distribution incongruous examples

1 code implementation 17 Feb 2022 Nabeel Seedat, Jonathan Crabbé, Mihaela van der Schaar

These estimators can be used to evaluate the congruence of test instances with respect to the training set, to answer two practically useful questions: (1) which test instances will be reliably predicted by a model trained with the training instances?

Conformal Prediction · Representation Learning
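
A toy sketch of flagging incongruous test instances with split conformal prediction, under my own assumptions about the setup rather than the paper's pipeline: regress one feature on the others, calibrate a residual quantile on held-out training data, and flag test rows whose feature value falls outside the resulting interval.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X_train = rng.normal(size=(1000, 5))
X_train[:, 0] = 0.8 * X_train[:, 1] + 0.1 * rng.normal(size=1000)   # feature 0 depends on feature 1

# Split conformal: fit on one half of the training data, calibrate residuals on the other.
fit_idx, cal_idx = np.arange(500), np.arange(500, 1000)
reg = Ridge().fit(X_train[fit_idx, 1:], X_train[fit_idx, 0])
cal_resid = np.abs(X_train[cal_idx, 0] - reg.predict(X_train[cal_idx, 1:]))
q = np.quantile(cal_resid, 0.95)                                     # 95% conformal radius

X_test = rng.normal(size=(5, 5))
X_test[:, 0] = 0.8 * X_test[:, 1] + 0.1 * rng.normal(size=5)         # congruent with training data
X_test[0, 0] = 10.0                                                  # corrupt one instance
test_resid = np.abs(X_test[:, 0] - reg.predict(X_test[:, 1:]))
print("incongruous test rows:", np.where(test_resid > q)[0])
```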

Explaining Latent Representations with a Corpus of Examples

1 code implementation NeurIPS 2021 Jonathan Crabbé, Zhaozhi Qian, Fergus Imrie, Mihaela van der Schaar

SimplEx uses the corpus to improve the user's understanding of the latent space with post-hoc explanations answering two questions: (1) Which corpus examples explain the prediction issued for a given test example?

Image Classification · Mortality Prediction
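
A minimal sketch of the corpus-decomposition idea described above: approximate the test example's latent representation as a convex combination of corpus latents, so the learned weights indicate which corpus examples drive the explanation. The optimization details here are my own simplification, not the SimplEx implementation.

```python
import torch

def corpus_weights(h_test, H_corpus, n_steps=500, lr=0.1):
    """Fit simplex weights w so that w @ H_corpus approximates h_test.
    Larger weights point to the corpus examples most relevant to the test example."""
    logits = torch.zeros(H_corpus.shape[0], requires_grad=True)
    opt = torch.optim.Adam([logits], lr=lr)
    for _ in range(n_steps):
        opt.zero_grad()
        w = torch.softmax(logits, dim=0)             # weights on the probability simplex
        loss = torch.sum((w @ H_corpus - h_test) ** 2)
        loss.backward()
        opt.step()
    return torch.softmax(logits, dim=0).detach()

# Toy usage with random latent vectors standing in for a model's representations.
H_corpus = torch.randn(50, 16)
h_test = 0.6 * H_corpus[3] + 0.4 * H_corpus[17]      # test latent built from two corpus examples
w = corpus_weights(h_test, H_corpus)
print("most relevant corpus examples:", torch.topk(w, 3).indices.tolist())
```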

Learning outside the Black-Box: The pursuit of interpretable models

1 code implementation NeurIPS 2020 Jonathan Crabbé, Yao Zhang, William Zame, Mihaela van der Schaar

Machine Learning has proved its ability to produce accurate models, but the deployment of these models outside the machine learning community has been hindered by the difficulty of interpreting them.

BIG-bench Machine Learning
