2 code implementations • 26 Feb 2024 • Nicolas Huynh, Jeroen Berrevoets, Nabeel Seedat, Jonathan Crabbé, Zhaozhi Qian, Mihaela van der Schaar
The identification and appropriate handling of inconsistencies in data at deployment time are crucial for the reliable use of machine learning models.
1 code implementation • 8 Feb 2024 • Jonathan Crabbé, Nicolas Huynh, Jan Stanczuk, Mihaela van der Schaar
We explain this observation by showing that time series from these datasets tend to be more localized in the frequency domain than in the time domain, which makes them easier to model in the frequency domain.
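For intuition, here is a minimal sketch (our own illustration, not the paper's code) of what "localized in the frequency domain" means: it measures the fraction of a signal's energy captured by its k largest coefficients, in the raw time-domain samples versus after an FFT. The toy signal and the choice of k are placeholders.

```python
import numpy as np

def energy_concentration(coeffs: np.ndarray, k: int) -> float:
    """Fraction of total energy held by the k largest-magnitude coefficients."""
    energy = np.abs(coeffs) ** 2
    top_k = np.sort(energy)[::-1][:k]
    return top_k.sum() / energy.sum()

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 512, endpoint=False)
# A few sinusoids plus noise: spread out in time, but concentrated
# on a handful of frequency bins.
x = (np.sin(2 * np.pi * 5 * t)
     + 0.5 * np.sin(2 * np.pi * 23 * t)
     + 0.1 * rng.standard_normal(t.size))

k = 10
time_conc = energy_concentration(x, k)
freq_conc = energy_concentration(np.fft.rfft(x), k)
print(f"top-{k} energy fraction: time={time_conc:.2f}, frequency={freq_conc:.2f}")
```

A higher top-k energy fraction in the frequency domain indicates the kind of localization described above.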
no code implementations • 6 Dec 2023 • Claudio Zeni, Robert Pinsler, Daniel Zügner, Andrew Fowler, Matthew Horton, Xiang Fu, Sasha Shysheya, Jonathan Crabbé, Lixin Sun, Jake Smith, Bichlien Nguyen, Hannes Schulz, Sarah Lewis, Chin-wei Huang, Ziheng Lu, Yichi Zhou, Han Yang, Hongxia Hao, Jielan Li, Ryota Tomioka, Tian Xie
We further introduce adapter modules to enable fine-tuning towards any given property constraints with a labeled dataset.
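As a hedged illustration of the general adapter idea only (the bottleneck shape, placement, and initialization below are assumptions, not the paper's architecture): a small trainable module is attached to a frozen pretrained block, so that fine-tuning towards a property constraint only updates the adapter's parameters.

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Small residual bottleneck added to a pretrained representation."""
    def __init__(self, dim: int, bottleneck: int = 16):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)
        nn.init.zeros_(self.up.weight)  # adapter starts as the identity mapping
        nn.init.zeros_(self.up.bias)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        return h + self.up(torch.relu(self.down(h)))

class AdaptedBlock(nn.Module):
    """Wraps a frozen pretrained block with a trainable adapter."""
    def __init__(self, pretrained_block: nn.Module, dim: int):
        super().__init__()
        self.block = pretrained_block
        for p in self.block.parameters():
            p.requires_grad_(False)  # keep pretrained weights fixed
        self.adapter = Adapter(dim)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        return self.adapter(self.block(h))
```

Only the adapter parameters would then be optimized on the labeled, property-annotated dataset.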
2 code implementations • NeurIPS 2023 • Nabeel Seedat, Jonathan Crabbé, Zhaozhi Qian, Mihaela van der Schaar
Data quality is crucial for robust machine learning algorithms, with the recent interest in data-centric AI emphasizing the importance of training data characterization.
no code implementations • 19 Oct 2023 • Jonathan Crabbé, Pau Rodríguez, Vaishaal Shankar, Luca Zappella, Arno Blaas
In this work, we bridge this gap by probing the representation spaces of 12 robust multimodal models with various backbones (ResNets and ViTs) and pretraining sets (OpenAI, LAION-400M, LAION-2B, YFCC15M, CC12M and DataComp).
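A minimal hedged sketch of what probing a representation space can look like in practice (the embeddings, labels, and probe choice below are placeholders, not the setup used in the paper): fit a linear classifier on frozen embeddings and report held-out accuracy.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def linear_probe_accuracy(embeddings: np.ndarray, labels: np.ndarray) -> float:
    """Accuracy of a linear probe trained on frozen embeddings."""
    X_train, X_test, y_train, y_test = train_test_split(
        embeddings, labels, test_size=0.2, random_state=0
    )
    probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    return probe.score(X_test, y_test)
```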
2 code implementations • NeurIPS 2023 • Jonathan Crabbé, Mihaela van der Schaar
Through this rigorous formalism, we derive (1) two metrics to measure the robustness of any interpretability method with respect to the model symmetry group; (2) theoretical robustness guarantees for some popular interpretability methods; and (3) a systematic approach to increase the invariance of any interpretability method with respect to a symmetry group.
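A minimal sketch of the symmetrization idea under our own assumptions (a cyclic translation group on 1D signals and a toy explainer, not the paper's exact procedure): an explanation is made invariant by averaging the explanations of transformed inputs, pulled back through the inverse transformation.

```python
import numpy as np

def symmetrize_explanation(explain, x: np.ndarray, shifts) -> np.ndarray:
    """Average explain(g.x) pulled back by g^{-1} over group elements g (here: shifts)."""
    pulled_back = [np.roll(explain(np.roll(x, s)), -s) for s in shifts]
    return np.mean(pulled_back, axis=0)

# Toy usage with a saliency-like stand-in explainer (squared signal).
x = np.random.randn(128)
explanation = symmetrize_explanation(lambda z: z ** 2, x, shifts=range(128))
```

The same pull-back comparison, without the averaging, also yields a simple way to quantify how far an explainer deviates from invariance.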
2 code implementations • 9 Mar 2023 • Alan Jeffares, Tennison Liu, Jonathan Crabbé, Fergus Imrie, Mihaela van der Schaar
In this work, we introduce Tabular Neural Gradient Orthogonalization and Specialization (TANGOS), a novel framework for regularization in the tabular setting built on latent unit attributions.
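A hedged sketch in the spirit of the description above (not the authors' released implementation): per-unit attributions are taken as input gradients of latent activations, then penalized for non-sparsity ("specialization") and for pairwise overlap between units ("orthogonalization").

```python
import torch

def tangos_style_penalty(latent: torch.Tensor, x: torch.Tensor):
    """latent: (batch, n_units) activations computed from x (x.requires_grad must be True)."""
    attributions = []
    for j in range(latent.shape[1]):
        grad = torch.autograd.grad(latent[:, j].sum(), x,
                                   create_graph=True, retain_graph=True)[0]
        attributions.append(grad)                        # (batch, n_features)
    A = torch.stack(attributions, dim=1)                 # (batch, n_units, n_features)

    specialization = A.abs().mean()                      # encourage sparse per-unit attributions
    A_norm = torch.nn.functional.normalize(A, dim=-1)
    cos = torch.matmul(A_norm, A_norm.transpose(1, 2))   # pairwise cosine similarities
    off_diag = cos - torch.diag_embed(torch.diagonal(cos, dim1=1, dim2=2))
    orthogonalization = off_diag.abs().mean()            # decorrelate attributions across units
    return specialization, orthogonalization
```

Each term would then be added to the supervised training loss with its own weight.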
2 code implementations • 24 Oct 2022 • Nabeel Seedat, Jonathan Crabbé, Ioana Bica, Mihaela van der Schaar
High average model performance can hide the fact that a model may systematically underperform on subgroups of the data.
2 code implementations • 22 Sep 2022 • Jonathan Crabbé, Mihaela van der Schaar
We further demonstrate empirically that CARs offer (1) more accurate descriptions of how concepts are scattered in the DNN's latent space; (2) global explanations that are closer to human concept annotations; and (3) concept-based feature importance scores that meaningfully relate concepts to each other.
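A minimal hedged sketch of a concept-region classifier (the kernel choice and probability scoring below are assumptions, not the paper's exact setup): fit a nonlinear classifier on latent representations of concept-positive and concept-negative examples, and treat the region it labels positive as the concept's activation region.

```python
import numpy as np
from sklearn.svm import SVC

def fit_concept_region(latents_pos: np.ndarray, latents_neg: np.ndarray) -> SVC:
    """Classifier whose positive region approximates the concept's activation region."""
    X = np.vstack([latents_pos, latents_neg])
    y = np.concatenate([np.ones(len(latents_pos)), np.zeros(len(latents_neg))])
    return SVC(kernel="rbf", probability=True).fit(X, y)

# concept_clf.predict_proba(h)[:, 1] then scores how strongly a latent
# representation h activates the concept.
```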
no code implementations • 16 Jun 2022 • Jonathan Crabbé, Alicia Curth, Ioana Bica, Mihaela van der Schaar
This allows us to evaluate treatment effect estimators along a new and important dimension that has been overlooked in previous work: we construct a benchmarking environment to empirically investigate the ability of personalized treatment effect models to identify predictive covariates, i.e. covariates that determine differential responses to treatment.
3 code implementations • 3 Mar 2022 • Jonathan Crabbé, Mihaela van der Schaar
Unsupervised black-box models are challenging to interpret.
1 code implementation • 17 Feb 2022 • Nabeel Seedat, Jonathan Crabbé, Mihaela van der Schaar
These estimators can be used to evaluate the congruence of test instances with respect to the training set, answering two practically useful questions: (1) which test instances will be reliably predicted by a model trained on the training instances?
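As a deliberately simple stand-in for such congruence estimators (not the paper's method): score each test instance by its distance to the nearest training instance and flag instances whose score exceeds a high quantile of the same score on the training set itself. The quantile level is an arbitrary assumption.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def flag_incongruent(X_train: np.ndarray, X_test: np.ndarray, q: float = 0.95):
    """Boolean mask: True marks test instances likely incongruent with training data."""
    nn = NearestNeighbors(n_neighbors=2).fit(X_train)
    # For training points, take the distance to their nearest *other* point.
    train_scores = nn.kneighbors(X_train)[0][:, 1]
    test_scores = nn.kneighbors(X_test, n_neighbors=1)[0][:, 0]
    threshold = np.quantile(train_scores, q)
    return test_scores > threshold
```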
1 code implementation • NeurIPS 2021 • Jonathan Crabbé, Zhaozhi Qian, Fergus Imrie, Mihaela van der Schaar
SimplEx uses the corpus to improve the user's understanding of the latent space with post-hoc explanations answering two questions: (1) Which corpus examples explain the prediction issued for a given test example?
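A hedged sketch of corpus-based decomposition in the spirit of the idea above (a simplified stand-in, not the released SimplEx code): a test example's latent vector is expressed as a convex combination of corpus latent vectors, with the weights optimized on the probability simplex via a softmax parameterization.

```python
import torch

def corpus_weights(h_test: torch.Tensor, h_corpus: torch.Tensor,
                   steps: int = 500, lr: float = 0.1) -> torch.Tensor:
    """h_test: (d,), h_corpus: (n_corpus, d). Returns simplex weights of shape (n_corpus,)."""
    logits = torch.zeros(h_corpus.shape[0], requires_grad=True)
    opt = torch.optim.Adam([logits], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        w = torch.softmax(logits, dim=0)
        loss = torch.sum((w @ h_corpus - h_test) ** 2)  # reconstruction error in latent space
        loss.backward()
        opt.step()
    return torch.softmax(logits, dim=0).detach()
```

The largest weights then point to the corpus examples most relevant to the test prediction.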
1 code implementation • 9 Jun 2021 • Jonathan Crabbé, Mihaela van der Schaar
How can we explain the predictions of a machine learning model?
1 code implementation • NeurIPS 2020 • Jonathan Crabbé, Yao Zhang, William Zame, Mihaela van der Schaar
Machine learning has proved its ability to produce accurate models, but the deployment of these models outside the machine learning community has been hindered by the difficulty of interpreting them.