no code implementations • 23 Dec 2024 • Laura Manduchi, Antoine Wehenkel, Jens Behrmann, Luca Pegolotti, Andy C. Miller, Ozan Sener, Marco Cuturi, Guillermo Sapiro, Jörn-Henrik Jacobsen
Whole-body hemodynamics simulators, which model blood flow and pressure waveforms as functions of physiological parameters, are now essential tools for studying cardiovascular systems.
no code implementations • 25 Oct 2024 • Arno Blaas, Adam Goliński, Andrew Miller, Luca Zappella, Jörn-Henrik Jacobsen, Christina Heinze-Deml
We consider robustness to distribution shifts in the context of diagnostic models in healthcare, where the prediction target $Y$, e.g., the presence of a disease, is causally upstream of the observations $X$, e.g., a biomarker.
no code implementations • 14 May 2024 • Antoine Wehenkel, Juan L. Gamella, Ozan Sener, Jens Behrmann, Guillermo Sapiro, Marco Cuturi, Jörn-Henrik Jacobsen
Driven by steady progress in generative modeling, simulation-based inference (SBI) has enabled inference over stochastic simulators.
no code implementations • 26 Jul 2023 • Antoine Wehenkel, Laura Manduchi, Jens Behrmann, Luca Pegolotti, Andrew C. Miller, Guillermo Sapiro, Ozan Sener, Marco Cuturi, Jörn-Henrik Jacobsen
Over the past decades, hemodynamics simulators have steadily evolved and have become tools of choice for studying cardiovascular systems in silico.
1 code implementation • 8 Feb 2022 • Antoine Wehenkel, Jens Behrmann, Hsiang Hsu, Guillermo Sapiro, Gilles Louppe, Jörn-Henrik Jacobsen
Hybrid modelling reduces the misspecification of expert models by combining them with machine learning (ML) components learned from data.
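To make the idea of combining an expert model with an ML component concrete, here is a toy sketch (assumptions: the example system, the function names, and the polynomial correction are all hypothetical and are not taken from the paper): a deliberately misspecified "expert" model is augmented with a data-driven correction fitted to its residuals.

```python
import numpy as np

# Toy sketch of hybrid modelling: a misspecified expert model plus a
# data-driven correction term fitted to the expert model's residuals.
rng = np.random.default_rng(0)

def expert_model(x):
    # Simplified "physics": assumes a purely linear response.
    return 2.0 * x

# Ground truth contains a nonlinearity that the expert model ignores.
x = rng.uniform(-3, 3, size=200)
y = 2.0 * x + 0.5 * np.sin(x) + rng.normal(scale=0.05, size=x.shape)

# ML component: fit a small polynomial to the residuals of the expert model.
residual = y - expert_model(x)
coeffs = np.polyfit(x, residual, deg=5)

def hybrid_model(x_new):
    return expert_model(x_new) + np.polyval(coeffs, x_new)

x_test = np.linspace(-3, 3, 5)
truth = 2.0 * x_test + 0.5 * np.sin(x_test)
print("expert error:", np.abs(expert_model(x_test) - truth).mean())
print("hybrid error:", np.abs(hybrid_model(x_test) - truth).mean())
```

The hybrid prediction tracks the data more closely than the expert model alone, which is the basic mechanism by which the learned component absorbs the expert model's misspecification.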
1 code implementation • 1 Dec 2021 • Mark Goldstein, Jörn-Henrik Jacobsen, Olina Chau, Adriel Saporta, Aahlad Puli, Rajesh Ranganath, Andrew C. Miller
Enforcing such independencies requires nuisances to be observed during training.
1 code implementation • 14 Oct 2020 • Elliot Creager, Jörn-Henrik Jacobsen, Richard Zemel
Learning models that gracefully handle distribution shifts is central to research on domain generalization, robust optimization, and fairness.
Ranked #1 on Out-of-Distribution Generalization on ImageNet-W
1 code implementation • 16 Jun 2020 • Jens Behrmann, Paul Vicol, Kuan-Chieh Wang, Roger Grosse, Jörn-Henrik Jacobsen
For problems where global invertibility is necessary, such as applying normalizing flows on OOD data, we show the importance of designing stable INN building blocks.
2 code implementations • 16 Apr 2020 • Robert Geirhos, Jörn-Henrik Jacobsen, Claudio Michaelis, Richard Zemel, Wieland Brendel, Matthias Bethge, Felix A. Wichmann
Deep learning has triggered the current rise of artificial intelligence and is the workhorse of today's machine intelligence.
1 code implementation • ICML 2020 • Florian Tramèr, Jens Behrmann, Nicholas Carlini, Nicolas Papernot, Jörn-Henrik Jacobsen
Adversarial examples are malicious inputs crafted to induce misclassification.
2 code implementations • ICML 2020 • Chris Finlay, Jörn-Henrik Jacobsen, Levon Nurbekyan, Adam M. Oberman
Training neural ODEs on large datasets has not been tractable due to the necessity of allowing the adaptive numerical ODE solver to refine its step size to very small values.
Ranked #1 on Density Estimation on CelebA-HQ 256x256
4 code implementations • ICLR 2020 • Will Grathwohl, Kuan-Chieh Wang, Jörn-Henrik Jacobsen, David Duvenaud, Mohammad Norouzi, Kevin Swersky
In this setting, the standard class probabilities, as well as unnormalized values of p(x) and p(x|y), can be computed easily.
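A minimal sketch of this reinterpretation, using a placeholder network and random inputs (both are assumptions for illustration): the softmax of a classifier's logits gives p(y|x), while the logits themselves and their log-sum-exp serve as unnormalized values of log p(x, y) and log p(x).

```python
import torch
import torch.nn as nn

# A standard classifier's logits define class probabilities via softmax,
# while the same logits also provide unnormalized log-densities.
torch.manual_seed(0)
net = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 5))

x = torch.randn(4, 10)                       # a batch of placeholder inputs
logits = net(x)                              # shape (batch, n_classes)

p_y_given_x = logits.softmax(dim=1)          # standard class probabilities
log_p_xy_unnorm = logits                     # log p(x, y) up to a constant
log_p_x_unnorm = logits.logsumexp(dim=1)     # log p(x) up to the same constant

print(p_y_given_x.sum(dim=1))                # each row sums to 1
print(log_p_x_unnorm.shape)                  # one unnormalized score per input
```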
1 code implementation • NeurIPS 2019 • Qiyang Li, Saminul Haque, Cem Anil, James Lucas, Roger Grosse, Jörn-Henrik Jacobsen
Our BCOP parameterization allows us to train large convolutional networks with provable Lipschitz bounds.
no code implementations • 25 Sep 2019 • Jens Behrmann, Paul Vicol, Kuan-Chieh Wang, Roger B. Grosse, Jörn-Henrik Jacobsen
Guarantees in deep learning are hard to achieve due to the interplay of flexible modeling schemes and complex tasks.
no code implementations • 6 Jun 2019 • Elliot Creager, David Madras, Jörn-Henrik Jacobsen, Marissa A. Weis, Kevin Swersky, Toniann Pitassi, Richard Zemel
We consider the problem of learning representations that achieve group and subgroup fairness with respect to multiple sensitive attributes.
4 code implementations • NeurIPS 2019 • Ricky T. Q. Chen, Jens Behrmann, David Duvenaud, Jörn-Henrik Jacobsen
Flow-based generative models parameterize probability distributions through an invertible transformation and can be trained by maximum likelihood.
Ranked #2 on Image Generation on MNIST
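To illustrate the change-of-variables idea behind flow-based models, here is a minimal, hypothetical example of maximum-likelihood training for a single affine flow on 1-D toy data; it is unrelated to the residual-flow architecture introduced in the paper.

```python
import math
import torch

# A single affine transform z = (x - b) * exp(-s) is invertible, and the
# change of variables formula gives log p(x) = log N(z; 0, 1) - s.
torch.manual_seed(0)
x = 3.0 + 0.5 * torch.randn(1000)            # toy 1-D data

s = torch.zeros(1, requires_grad=True)       # log-scale
b = torch.zeros(1, requires_grad=True)       # shift
opt = torch.optim.Adam([s, b], lr=0.05)

for _ in range(500):
    z = (x - b) * torch.exp(-s)              # invertible map x -> z
    log_det = -s                             # log |dz/dx|
    log_px = -0.5 * z**2 - 0.5 * math.log(2 * math.pi) + log_det
    loss = -log_px.mean()                    # negative log-likelihood
    opt.zero_grad(); loss.backward(); opt.step()

print(b.item(), s.exp().item())              # approx. the data mean and std
```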
no code implementations • ICLR 2020 • Ethan Fetaya, Jörn-Henrik Jacobsen, Will Grathwohl, Richard Zemel
Class-conditional generative models hold promise to overcome the shortcomings of their discriminative counterparts.
no code implementations • 25 Mar 2019 • Jörn-Henrik Jacobsen, Jens Behrmann, Nicholas Carlini, Florian Tramèr, Nicolas Papernot
Excessive invariance is not limited to models trained to be robust to perturbation-based $\ell_p$-norm adversaries.
5 code implementations • 2 Nov 2018 • Jens Behrmann, Will Grathwohl, Ricky T. Q. Chen, David Duvenaud, Jörn-Henrik Jacobsen
We show that standard ResNet architectures can be made invertible, allowing the same model to be used for classification, density estimation, and generation.
Ranked #5 on Image Generation on MNIST
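The sketch below illustrates the underlying principle in a hypothetical toy setting (the network, the scaling factor, and the use of spectral normalization are simplifications for illustration, not the paper's implementation): when the residual branch g is a contraction, x + g(x) is invertible and its inverse can be recovered by fixed-point iteration.

```python
import torch
import torch.nn as nn

# If Lip(g) < 1, then F(x) = x + g(x) is invertible and F^{-1}(y) is the
# fixed point of x = y - g(x).
torch.manual_seed(0)
g = nn.Sequential(
    nn.utils.spectral_norm(nn.Linear(2, 64)), nn.ELU(),
    nn.utils.spectral_norm(nn.Linear(64, 2)),
)
scale = 0.9                                  # keeps Lip(scale * g) < 1

# A few training-mode forward passes refine spectral_norm's power iteration,
# then freeze it so forward and inverse use identical weights.
with torch.no_grad():
    for _ in range(10):
        g(torch.randn(8, 2))
g.eval()

def forward(x):
    return x + scale * g(x)

def inverse(y, n_iter=200):
    x = y.clone()
    for _ in range(n_iter):                  # x_{k+1} = y - scale * g(x_k)
        x = y - scale * g(x)
    return x

with torch.no_grad():
    x = torch.randn(5, 2)
    x_rec = inverse(forward(x))
    print((x - x_rec).abs().max())           # reconstruction error is tiny
```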
no code implementations • ICLR 2019 • Jörn-Henrik Jacobsen, Jens Behrmann, Richard Zemel, Matthias Bethge
Despite their impressive performance, deep neural networks exhibit striking failures on out-of-distribution inputs.
2 code implementations • ICLR 2018 • Jörn-Henrik Jacobsen, Arnold Smeulders, Edouard Oyallon
An analysis of the representations learned by i-RevNets suggests an alternative explanation for the success of deep networks: a progressive contraction and linear separation with depth.
no code implementations • 2 Jun 2017 • Jörn-Henrik Jacobsen, Bert de Brabandere, Arnold W. M. Smeulders
Filters in convolutional networks are typically parameterized in a pixel basis, which does not take prior knowledge about the visual world into account.
no code implementations • 12 Mar 2017 • Jörn-Henrik Jacobsen, Edouard Oyallon, Stéphane Mallat, Arnold W. M. Smeulders
Multiscale hierarchical convolutional networks are structured deep convolutional networks where layers are indexed by progressively higher dimensional attributes, which are learned from training data.
3 code implementations • CVPR 2016 • Jörn-Henrik Jacobsen, Jan van Gemert, Zhongyu Lou, Arnold W. M. Smeulders
We combine these ideas into structured receptive field networks, a model which has a fixed filter basis and yet retains the flexibility of CNNs.
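As a rough illustration of the fixed-basis idea, the following sketch expresses a 2-D filter as a learned linear combination of Gaussian-derivative basis functions rather than as free per-pixel weights; the specific basis, its size, and the coefficients here are hypothetical and only mirror the general construction, not the paper's exact design.

```python
import numpy as np

# Build a small fixed basis of Gaussian derivatives and combine them with
# learnable coefficients to obtain the effective convolution filter.
def gaussian_derivative_basis(size=7, sigma=1.5):
    r = np.arange(size) - size // 2
    xx, yy = np.meshgrid(r, r)
    g = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    g /= g.sum()
    basis = [
        g,                                         # Gaussian
        -xx / sigma**2 * g,                        # d/dx
        -yy / sigma**2 * g,                        # d/dy
        (xx**2 / sigma**2 - 1) / sigma**2 * g,     # d^2/dx^2
        (yy**2 / sigma**2 - 1) / sigma**2 * g,     # d^2/dy^2
        xx * yy / sigma**4 * g,                    # d^2/dxdy
    ]
    return np.stack(basis)                         # (n_basis, size, size)

basis = gaussian_derivative_basis()
alpha = np.random.default_rng(0).normal(size=basis.shape[0])  # learnable coefficients
filt = np.tensordot(alpha, basis, axes=1)          # effective pixel-space filter
print(filt.shape)                                  # (7, 7)
```

Only the handful of coefficients alpha would be learned per filter, which is what gives the fixed-basis parameterization its built-in smoothness prior while retaining the flexibility of a standard CNN filter.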