Search Results for author: Jörn-Henrik Jacobsen

Found 19 papers, 11 papers with code

Environment Inference for Invariant Learning

1 code implementation · 14 Oct 2020 · Elliot Creager, Jörn-Henrik Jacobsen, Richard Zemel

Learning models that gracefully handle distribution shifts is central to research on domain generalization, robust optimization, and fairness.

Domain Generalization · Fairness

Understanding and Mitigating Exploding Inverses in Invertible Neural Networks

1 code implementation · 16 Jun 2020 · Jens Behrmann, Paul Vicol, Kuan-Chieh Wang, Roger Grosse, Jörn-Henrik Jacobsen

For problems where global invertibility is necessary, such as applying normalizing flows on OOD data, we show the importance of designing stable INN building blocks.

Shortcut Learning in Deep Neural Networks

2 code implementations · 16 Apr 2020 · Robert Geirhos, Jörn-Henrik Jacobsen, Claudio Michaelis, Richard Zemel, Wieland Brendel, Matthias Bethge, Felix A. Wichmann

Deep learning has triggered the current rise of artificial intelligence and is the workhorse of today's machine intelligence.

How to train your neural ODE: the world of Jacobian and kinetic regularization

2 code implementations · ICML 2020 · Chris Finlay, Jörn-Henrik Jacobsen, Levon Nurbekyan, Adam M. Oberman

Training neural ODEs on large datasets has not been tractable due to the necessity of allowing the adaptive numerical ODE solver to refine its step size to very small values.

Density Estimation · Image Generation

On the Invertibility of Invertible Neural Networks

no code implementations · 25 Sep 2019 · Jens Behrmann, Paul Vicol, Kuan-Chieh Wang, Roger B. Grosse, Jörn-Henrik Jacobsen

Guarantees in deep learning are hard to achieve due to the interplay of flexible modeling schemes and complex tasks.

Flexibly Fair Representation Learning by Disentanglement

no code implementations · 6 Jun 2019 · Elliot Creager, David Madras, Jörn-Henrik Jacobsen, Marissa A. Weis, Kevin Swersky, Toniann Pitassi, Richard Zemel

We consider the problem of learning representations that achieve group and subgroup fairness with respect to multiple sensitive attributes.

Fairness · General Classification · +1

Residual Flows for Invertible Generative Modeling

4 code implementations · NeurIPS 2019 · Ricky T. Q. Chen, Jens Behrmann, David Duvenaud, Jörn-Henrik Jacobsen

Flow-based generative models parameterize probability distributions through an invertible transformation and can be trained by maximum likelihood.

Density Estimation · Image Generation
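The snippet above summarizes the core mechanism of flow-based models: an invertible map and the change-of-variables formula give an exact log-likelihood to maximize. As a minimal illustration (not the Residual Flows model itself), the sketch below uses a fixed 1-D affine transformation with hypothetical parameters `mu` and `sigma`; real flows compose many learned invertible layers.

```python
import numpy as np

# Hypothetical parameters of a 1-D affine flow f(x) = (x - mu) / sigma
# mapping data to a standard-normal base distribution.
mu, sigma = 2.0, 0.5

def log_likelihood(x):
    z = (x - mu) / sigma                          # invertible transformation
    log_base = -0.5 * (z**2 + np.log(2 * np.pi))  # log N(z; 0, 1)
    log_det = -np.log(sigma)                      # log |df/dx| of the map
    return log_base + log_det                     # change-of-variables formula

x = np.array([1.5, 2.0, 2.5])
print(log_likelihood(x))
```

Training a flow means adjusting the transformation's parameters to maximize this quantity over data; Residual Flows make the log-determinant term tractable for deep residual transformations via an unbiased estimator.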

Invertible Residual Networks

4 code implementations · 2 Nov 2018 · Jens Behrmann, Will Grathwohl, Ricky T. Q. Chen, David Duvenaud, Jörn-Henrik Jacobsen

We show that standard ResNet architectures can be made invertible, allowing the same model to be used for classification, density estimation, and generation.

Density Estimation · General Classification · +1
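The key idea behind making a ResNet invertible is constraining the residual branch g to be contractive (Lipschitz constant below 1), so that y = x + g(x) can be inverted by fixed-point iteration. The sketch below illustrates this with a random linear branch rescaled to satisfy the constraint; it is a toy stand-in, not the paper's trained network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical residual branch g: a single linear map, rescaled so its
# spectral norm (hence Lipschitz constant) is 0.5 < 1.
W = rng.standard_normal((4, 4))
W *= 0.5 / np.linalg.norm(W, 2)

def g(x):
    return W @ x

def forward(x):
    return x + g(x)            # an invertible residual block y = x + g(x)

def inverse(y, iters=50):
    # Banach fixed-point iteration x <- y - g(x); converges because Lip(g) < 1.
    x = y.copy()
    for _ in range(iters):
        x = y - g(x)
    return x

x = rng.standard_normal(4)
x_rec = inverse(forward(x))
print(np.max(np.abs(x - x_rec)))  # reconstruction error is near zero
```

Because the same constrained network is invertible, it can serve simultaneously as a classifier backbone and, with a tractable log-determinant estimate, as a density model or generator.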

Excessive Invariance Causes Adversarial Vulnerability

no code implementations · ICLR 2019 · Jörn-Henrik Jacobsen, Jens Behrmann, Richard Zemel, Matthias Bethge

Despite their impressive performance, deep neural networks exhibit striking failures on out-of-distribution inputs.

i-RevNet: Deep Invertible Networks

2 code implementations · ICLR 2018 · Jörn-Henrik Jacobsen, Arnold Smeulders, Edouard Oyallon

An analysis of i-RevNet's learned representations suggests an alternative explanation for the success of deep networks: a progressive contraction and linear separation with depth.

Dynamic Steerable Blocks in Deep Residual Networks

no code implementations · 2 Jun 2017 · Jörn-Henrik Jacobsen, Bert de Brabandere, Arnold W. M. Smeulders

Filters in convolutional networks are typically parameterized in a pixel basis, which does not take prior knowledge about the visual world into account.

Contour Detection

Multiscale Hierarchical Convolutional Networks

no code implementations · 12 Mar 2017 · Jörn-Henrik Jacobsen, Edouard Oyallon, Stéphane Mallat, Arnold W. M. Smeulders

Multiscale hierarchical convolutional networks are structured deep convolutional networks where layers are indexed by progressively higher dimensional attributes, which are learned from training data.

Structured Receptive Fields in CNNs

3 code implementations · CVPR 2016 · Jörn-Henrik Jacobsen, Jan van Gemert, Zhongyu Lou, Arnold W. M. Smeulders

We combine these ideas into structured receptive field networks, a model which has a fixed filter basis and yet retains the flexibility of CNNs.
