Search Results for author: Bernhard Schölkopf

Found 232 papers, 76 papers with code

Learning soft interventions in complex equilibrium systems

no code implementations10 Dec 2021 Michel Besserve, Bernhard Schölkopf

Complex systems often contain feedback loops that can be described as cyclic causal models.

Towards Principled Disentanglement for Domain Generalization

1 code implementation 27 Nov 2021 Hanlin Zhang, Yi-Fan Zhang, Weiyang Liu, Adrian Weller, Bernhard Schölkopf, Eric P. Xing

To tackle this challenge, we first formalize the OOD generalization problem as constrained optimization, called Disentanglement-constrained Domain Generalization (DDG).

Domain Generalization

Group equivariant neural posterior estimation

no code implementations25 Nov 2021 Maximilian Dax, Stephen R. Green, Jonathan Gair, Michael Deistler, Bernhard Schölkopf, Jakob H. Macke

We here describe an alternative method to incorporate equivariances under joint transformations of parameters and data.

Cause-effect inference through spectral independence in linear dynamical systems: theoretical foundations

no code implementations29 Oct 2021 Michel Besserve, Naji Shajarisales, Dominik Janzing, Bernhard Schölkopf

A new perspective based on the principle of Independence of Causal Mechanisms (ICM) has led to the Spectral Independence Criterion (SIC), which postulates that the power spectral density (PSD) of the cause time series is uncorrelated with the squared modulus of the frequency response of the filter generating the effect.

Causal Discovery Causal Inference +1
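As a rough, hypothetical illustration of the SIC postulate above (not the authors' implementation), one can estimate the cause's PSD and the cause-to-effect transfer function with standard spectral estimators and use the absolute correlation across frequencies as a simple dependence proxy; the estimator and filter choices below (Welch/CSD estimates, an arbitrary FIR example filter) are assumptions.

import numpy as np
from scipy import signal

def sic_dependence(cause, effect, nperseg=256):
    # PSD of the putative cause and cross-spectral density cause -> effect.
    f, s_cc = signal.welch(cause, nperseg=nperseg)
    _, s_ce = signal.csd(cause, effect, nperseg=nperseg)
    h2 = np.abs(s_ce / s_cc) ** 2            # squared modulus of the estimated frequency response
    # Absolute correlation across frequencies as a crude dependence proxy.
    return np.abs(np.corrcoef(s_cc, h2)[0, 1])

rng = np.random.default_rng(0)
x = rng.standard_normal(5000)                # cause time series
b = signal.firwin(31, 0.3)                   # an FIR filter chosen independently of x
y = signal.lfilter(b, [1.0], x) + 0.1 * rng.standard_normal(5000)

# SIC suggests the dependence proxy should be smaller in the true causal direction.
print("X->Y:", sic_dependence(x, y), "Y->X:", sic_dependence(y, x))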

Resampling Base Distributions of Normalizing Flows

1 code implementation29 Oct 2021 Vincent Stimper, Bernhard Schölkopf, José Miguel Hernández-Lobato

Normalizing flows are a popular class of models for approximating probability distributions.

Ranked #34 on Image Generation on CIFAR-10 (bits/dimension metric)

Density Estimation Image Generation

GalilAI: Out-of-Task Distribution Detection using Causal Active Experimentation for Safe Transfer RL

no code implementations29 Oct 2021 Sumedh A Sontakke, Stephen Iota, Zizhao Hu, Arash Mehrjou, Laurent Itti, Bernhard Schölkopf

Extending the successes of supervised learning methods to the reinforcement learning (RL) setting, however, is difficult because of the data-generating process: RL agents actively query their environment for data, and the data are a function of the policy followed by the agent.

Iterative Teaching by Label Synthesis

no code implementations NeurIPS 2021 Weiyang Liu, Zhen Liu, Hanchen Wang, Liam Paull, Bernhard Schölkopf, Adrian Weller

In this paper, we consider the problem of iterative machine teaching, where a teacher provides examples sequentially based on the current iterative learner.

Distributional Robustness Regularized Scenario Optimization with Application to Model Predictive Control

no code implementations26 Oct 2021 Yassine Nemmour, Bernhard Schölkopf, Jia-Jie Zhu

We provide a functional view of distributional robustness motivated by robust statistics and functional analysis.

Action-Sufficient State Representation Learning for Control with Structural Constraints

no code implementations12 Oct 2021 Biwei Huang, Chaochao Lu, Liu Leqi, José Miguel Hernández-Lobato, Clark Glymour, Bernhard Schölkopf, Kun Zhang

Perceived signals in real-world scenarios are usually high-dimensional and noisy; finding and using a representation that contains the essential and sufficient information required by downstream decision-making tasks helps improve computational efficiency and generalization ability in those tasks.

Decision Making Representation Learning

You Mostly Walk Alone: Analyzing Feature Attribution in Trajectory Prediction

no code implementations11 Oct 2021 Osama Makansi, Julius von Kügelgen, Francesco Locatello, Peter Gehler, Dominik Janzing, Thomas Brox, Bernhard Schölkopf

Applying this procedure to state-of-the-art trajectory prediction methods on standard benchmark datasets shows that they are, in fact, unable to reason about interactions.

Trajectory Prediction

Causal Direction of Data Collection Matters: Implications of Causal and Anticausal Learning for NLP

1 code implementation EMNLP 2021 Zhijing Jin, Julius von Kügelgen, Jingwei Ni, Tejas Vaidhya, Ayush Kaushal, Mrinmaya Sachan, Bernhard Schölkopf

The principle of independent causal mechanisms (ICM) states that generative processes of real world data consist of independent modules which do not influence or inform each other.

Causal Inference Domain Adaptation

Spatial Context Awareness for Unsupervised Change Detection in Optical Satellite Images

no code implementations5 Oct 2021 Lukas Kondmann, Aysim Toker, Sudipan Saha, Bernhard Schölkopf, Laura Leal-Taixé, Xiao Xiang Zhu

It uses this model to analyze differences in the pixel and its spatial context-based predictions in subsequent time periods for change detection.

Change Detection

Direct Advantage Estimation

no code implementations13 Sep 2021 Hsiao-Ru Pan, Nico Gürtler, Alexander Neitz, Bernhard Schölkopf

The predominant approach is to assign credit based on the expected return.
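For context, "credit based on the expected return" refers to the standard return and advantage quantities below (textbook definitions, not the direct advantage estimator proposed in the paper): $G_t = \sum_{k=0}^{\infty} \gamma^k r_{t+k}$, $V^\pi(s) = \mathbb{E}_\pi[G_t \mid s_t = s]$, $Q^\pi(s,a) = \mathbb{E}_\pi[G_t \mid s_t = s, a_t = a]$, and the advantage $A^\pi(s,a) = Q^\pi(s,a) - V^\pi(s)$.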

Visual Representation Learning Does Not Generalize Strongly Within the Same Domain

1 code implementation17 Jul 2021 Lukas Schott, Julius von Kügelgen, Frederik Träuble, Peter Gehler, Chris Russell, Matthias Bethge, Bernhard Schölkopf, Francesco Locatello, Wieland Brendel

An important component for generalization in machine learning is to uncover underlying latent factors of variation as well as the mechanism through which each factor acts in the world.

Representation Learning

The Role of Pretrained Representations for the OOD Generalization of RL Agents

no code implementations12 Jul 2021 Andrea Dittadi, Frederik Träuble, Manuel Wüthrich, Felix Widmaier, Peter Gehler, Ole Winther, Francesco Locatello, Olivier Bachem, Bernhard Schölkopf, Stefan Bauer

By training 240 representations and over 10,000 reinforcement learning policies on a simulated robotic setup, we evaluate to what extent different properties of pretrained VAE-based representations affect the OOD generalization of downstream agents.

Representation Learning

Source-Free Adaptation to Measurement Shift via Bottom-Up Feature Restoration

1 code implementation12 Jul 2021 Cian Eastwood, Ian Mason, Christopher K. I. Williams, Bernhard Schölkopf

Existing methods for SFDA leverage entropy-minimization techniques which: (i) apply only to classification; (ii) destroy model calibration; and (iii) rely on the source model achieving a good level of feature-space class-separation in the target domain.

Domain Adaptation

Generalization and Robustness Implications in Object-Centric Learning

no code implementations1 Jul 2021 Andrea Dittadi, Samuele Papa, Michele De Vita, Bernhard Schölkopf, Ole Winther, Francesco Locatello

The idea behind object-centric representation learning is that natural scenes can better be modeled as compositions of objects and their relations as opposed to distributed representations.

Representation Learning Systematic Generalization

Interventional Assays for the Latent Space of Autoencoders

no code implementations30 Jun 2021 Felix Leeb, Stefan Bauer, Bernhard Schölkopf

The encoders and decoders of autoencoders effectively project the input onto learned manifolds in the latent space and data space respectively.

Shallow Representation is Deep: Learning Uncertainty-aware and Worst-case Random Feature Dynamics

no code implementations24 Jun 2021 Diego Agudelo-España, Yassine Nemmour, Bernhard Schölkopf, Jia-Jie Zhu

The random features method is a powerful universal function approximator that inherits the theoretical rigor of kernel methods and can scale up to modern learning tasks.
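As a minimal sketch of the random-features idea mentioned above (standard random Fourier features for an RBF kernel, after Rahimi & Recht; the bandwidth, feature count, and toy data are arbitrary assumptions, and this is not the paper's dynamics-learning code):

import numpy as np

def random_fourier_features(X, n_features=2000, sigma=1.0, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=1.0 / sigma, size=(X.shape[1], n_features))  # omega ~ N(0, sigma^-2 I)
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 5))
Z = random_fourier_features(X)

# Inner products of the features approximate the RBF kernel k(x, x') = exp(-||x - x'||^2 / (2 sigma^2)).
sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
print("max abs error:", np.abs(np.exp(-sq_dists / 2.0) - Z @ Z.T).max())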

Real-time gravitational-wave science with neural posterior estimation

no code implementations23 Jun 2021 Maximilian Dax, Stephen R. Green, Jonathan Gair, Jakob H. Macke, Alessandra Buonanno, Bernhard Schölkopf

We demonstrate unprecedented accuracy for rapid gravitational-wave parameter estimation with deep learning.

Algorithmic Recourse in Partially and Fully Confounded Settings Through Bounding Counterfactual Effects

no code implementations22 Jun 2021 Julius von Kügelgen, Nikita Agarwal, Jakob Zeitler, Afsaneh Mastouri, Bernhard Schölkopf

Algorithmic recourse aims to provide actionable recommendations to individuals to obtain a more favourable outcome from an automated decision-making system.

Decision Making

Towards Total Recall in Industrial Anomaly Detection

7 code implementations15 Jun 2021 Karsten Roth, Latha Pemula, Joaquin Zepeda, Bernhard Schölkopf, Thomas Brox, Peter Gehler

Being able to spot defective parts is a critical component in large-scale industrial manufacturing.

Ranked #2 on Anomaly Detection on MVTec AD (using extra training data)

Anomaly Detection Outlier Detection

Adversarial Robustness through the Lens of Causality

no code implementations11 Jun 2021 Yonggang Zhang, Mingming Gong, Tongliang Liu, Gang Niu, Xinmei Tian, Bo Han, Bernhard Schölkopf, Kun Zhang

The spurious correlation implies that the adversarial distribution is constructed by making the statistical conditional association between style information and labels drastically different from that in the natural distribution.

Adversarial Attack Adversarial Robustness

Instrument Space Selection for Kernel Maximum Moment Restriction

1 code implementation7 Jun 2021 Rui Zhang, Krikamol Muandet, Bernhard Schölkopf, Masaaki Imaizumi

Kernel maximum moment restriction (KMMR) has recently emerged as a popular framework for instrumental variable (IV) based conditional moment restriction (CMR) models with important applications in conditional moment (CM) testing and parameter estimation for IV regression and proximal causal learning.

The Inductive Bias of Quantum Kernels

1 code implementation NeurIPS 2021 Jonas M. Kübler, Simon Buchholz, Bernhard Schölkopf

Quantum computers offer the possibility to efficiently compute inner products of exponentially large density operators that are classically hard to compute.

Diffusion-Based Representation Learning

no code implementations29 May 2021 Korbinian Abstreiter, Stefan Bauer, Bernhard Schölkopf, Arash Mehrjou

In contrast, the introduced diffusion based representation learning relies on a new formulation of the denoising score-matching objective and thus encodes information needed for denoising.

Denoising Representation Learning +1
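For reference, the standard denoising score-matching objective that the entry above builds on (Vincent's formulation with a Gaussian corruption kernel; the paper's representation-conditioned variant additionally feeds an encoder output into the score network) can be written as $\mathcal{J}(\theta) = \mathbb{E}_{x \sim p_{\mathrm{data}}}\, \mathbb{E}_{\tilde{x} \sim \mathcal{N}(x, \sigma^2 I)} \big[ \tfrac{1}{2} \| s_\theta(\tilde{x}) - \nabla_{\tilde{x}} \log q_\sigma(\tilde{x} \mid x) \|^2 \big]$, where $\nabla_{\tilde{x}} \log q_\sigma(\tilde{x} \mid x) = (x - \tilde{x}) / \sigma^2$.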

DiBS: Differentiable Bayesian Structure Learning

1 code implementation NeurIPS 2021 Lars Lorch, Jonas Rothfuss, Bernhard Schölkopf, Andreas Krause

In this work, we propose a general, fully differentiable framework for Bayesian structure learning (DiBS) that operates in the continuous space of a latent probabilistic graph representation.

Causal Discovery Variational Inference

Fast and Slow Learning of Recurrent Independent Mechanisms

no code implementations18 May 2021 Kanika Madan, Nan Rosemary Ke, Anirudh Goyal, Bernhard Schölkopf, Yoshua Bengio

To study these ideas, we propose a particular training framework in which we assume that the pieces of knowledge an agent needs and its reward function are stationary and can be re-used across tasks.

Meta-Learning

Regret Bounds for Gaussian-Process Optimization in Large Domains

1 code implementation NeurIPS 2021 Manuel Wüthrich, Bernhard Schölkopf, Andreas Krause

These regret bounds illuminate the relationship between the number of evaluations, the domain size (i.e., cardinality of finite domains / Lipschitz constant of the covariance function in continuous domains), and the optimality of the retrieved function value.

Pyfectious: An individual-level simulator to discover optimal containment policies for epidemic diseases

1 code implementation24 Mar 2021 Arash Mehrjou, Ashkan Soleymani, Amin Abyaneh, Samir Bhatt, Bernhard Schölkopf, Stefan Bauer

Simulating the spread of infectious diseases in human communities is critical for predicting the trajectory of an epidemic and verifying various policies to control the devastating impacts of the outbreak.

A prior-based approximate latent Riemannian metric

no code implementations9 Mar 2021 Georgios Arvanitidis, Bogdan Georgiev, Bernhard Schölkopf

In this work we propose a surrogate conformal Riemannian metric in the latent space of a generative model that is simple, efficient and robust.

Learning with Hyperspherical Uniformity

1 code implementation2 Mar 2021 Weiyang Liu, Rongmei Lin, Zhen Liu, Li Xiong, Bernhard Schölkopf, Adrian Weller

Due to their over-parameterized nature, neural networks are a powerful tool for nonlinear function approximation.

L2 Regularization

Nonlinear Invariant Risk Minimization: A Causal Approach

no code implementations 24 Feb 2021 Chaochao Lu, Yuhuai Wu, José Miguel Hernández-Lobato, Bernhard Schölkopf

Finally, in the discussion, we further explore the aforementioned assumption and propose a more general hypothesis, called the Agnostic Hypothesis: there exists a set of hidden causal factors affecting both inputs and outcomes.

Representation Learning

Finding Stable Matchings in PhD Markets with Consistent Preferences and Cooperative Partners

no code implementations23 Feb 2021 Maximilian Mordig, Riccardo Della Vecchia, Nicolò Cesa-Bianchi, Bernhard Schölkopf

Our setting is motivated by a PhD market of students, advisors, and co-advisors, and can be generalized to supply chain networks viewed as $n$-sided markets.

Computer Science and Game Theory Theoretical Economics Combinatorics

Conditional Distributional Treatment Effect with Kernel Conditional Mean Embeddings and U-Statistic Regression

no code implementations16 Feb 2021 Junhyung Park, Uri Shalit, Bernhard Schölkopf, Krikamol Muandet

We propose to analyse the conditional distributional treatment effect (CoDiTE), which, in contrast to the more common conditional average treatment effect (CATE), is designed to encode a treatment's distributional aspects beyond the mean.

Adversarially Robust Kernel Smoothing

1 code implementation16 Feb 2021 Jia-Jie Zhu, Christina Kouridi, Yassine Nemmour, Bernhard Schölkopf

We propose the adversarially robust kernel smoothing (ARKS) algorithm, combining kernel smoothing, robust optimization, and adversarial training for robust learning.

Bayesian Quadrature on Riemannian Data Manifolds

1 code implementation12 Feb 2021 Christian Fröhlich, Alexandra Gessner, Philipp Hennig, Bernhard Schölkopf, Georgios Arvanitidis

Riemannian manifolds provide a principled way to model nonlinear geometric structure inherent in data.

A Witness Two-Sample Test

no code implementations10 Feb 2021 Jonas M. Kübler, Wittawat Jitkrittum, Bernhard Schölkopf, Krikamol Muandet

Its statistic is given by the difference in expectations of the witness function, a real-valued function defined as a weighted sum of kernel evaluations on a set of basis points.

Two-sample testing
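A minimal sketch of the statistic described above (hypothetical kernel, basis points, and weights; the actual test learns the weights on a held-out split and calibrates a rejection threshold, both omitted here):

import numpy as np

def gauss_kernel(A, B, sigma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(300, 2))                   # sample from P
Y = rng.normal(0.5, 1.0, size=(300, 2))                   # sample from Q
basis = np.vstack([X[:20], Y[:20]])                       # basis points
alpha = np.concatenate([np.ones(20), -np.ones(20)]) / 20  # illustrative weights

witness = lambda Z: gauss_kernel(Z, basis) @ alpha        # weighted sum of kernel evaluations
tau = witness(X).mean() - witness(Y).mean()               # difference in expectations of the witness
print("witness statistic:", tau)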

Dependency Structure Discovery from Interventions

no code implementations1 Jan 2021 Nan Rosemary Ke, Olexa Bilaniuk, Anirudh Goyal, Stefan Bauer, Bernhard Schölkopf, Michael Curtis Mozer, Hugo Larochelle, Christopher Pal, Yoshua Bengio

Promising results have driven a recent surge of interest in continuous optimization methods for Bayesian network structure learning from observational data.

Spatially Structured Recurrent Modules

no code implementations ICLR 2021 Nasim Rahaman, Anirudh Goyal, Muhammad Waleed Gondal, Manuel Wuthrich, Stefan Bauer, Yash Sharma, Yoshua Bengio, Bernhard Schölkopf

Capturing the structure of a data-generating process by means of appropriate inductive biases can help in learning models that generalise well and are robust to changes in the input distribution.

Video Prediction

Invariant Causal Representation Learning

no code implementations1 Jan 2021 Chaochao Lu, Yuhuai Wu, José Miguel Hernández-Lobato, Bernhard Schölkopf

As an alternative, we propose Invariant Causal Representation Learning (ICRL), a learning paradigm that enables out-of-distribution generalization in the nonlinear setting (i.e., nonlinear representations and nonlinear classifiers).

Representation Learning

Learning to interpret trajectories

no code implementations ICLR 2021 Alexander Neitz, Giambattista Parascandolo, Bernhard Schölkopf

By learning to predict trajectories of dynamical systems, model-based methods can make extensive use of all observations from past experience.

Learned residual Gerchberg-Saxton network for computer generated holography

no code implementations1 Jan 2021 Lennart Schlieder, Heiner Kremer, Valentin Volchkov, Kai Melde, Peer Fischer, Bernhard Schölkopf

Instead of an iterative optimization algorithm that converges to a (sub-)optimal solution, the inverse problem can be solved by training a neural network to directly estimate the inverse operator.

Assaying Large-scale Testing Models to Interpret COVID-19 Case Numbers

no code implementations3 Dec 2020 Michel Besserve, Simon Buchholz, Bernhard Schölkopf

Large-scale testing is considered key to assess the state of the current COVID-19 pandemic.

Applications Populations and Evolution

Causal analysis of Covid-19 Spread in Germany

no code implementations NeurIPS 2020 Atalanti Mastakouri, Bernhard Schölkopf

In this work, we study the causal relations among German regions in terms of the spread of Covid-19 since the beginning of the pandemic, taking into account the restriction policies that were applied by the different federal states.

Feature Selection Time Series

On the Transfer of Disentangled Representations in Realistic Settings

no code implementations ICLR 2021 Andrea Dittadi, Frederik Träuble, Francesco Locatello, Manuel Wüthrich, Vaibhav Agrawal, Ole Winther, Stefan Bauer, Bernhard Schölkopf

Learning meaningful representations that disentangle the underlying structure of the data generating process is considered to be of key importance in machine learning.

Representation Learning

A Sober Look at the Unsupervised Learning of Disentangled Representations and their Evaluation

no code implementations27 Oct 2020 Francesco Locatello, Stefan Bauer, Mario Lucic, Gunnar Rätsch, Sylvain Gelly, Bernhard Schölkopf, Olivier Bachem

The idea behind the unsupervised learning of disentangled representations is that real-world data is generated by a few explanatory factors of variation which can be recovered by unsupervised learning algorithms.

Predicting Infectiousness for Proactive Contact Tracing

1 code implementation ICLR 2021 Yoshua Bengio, Prateek Gupta, Tegan Maharaj, Nasim Rahaman, Martin Weiss, Tristan Deleu, Eilif Muller, Meng Qu, Victor Schmidt, Pierre-Luc St-Charles, Hannah Alsdurf, Olexa Bilaniuk, David Buckeridge, Gaétan Marceau Caron, Pierre-Luc Carrier, Joumana Ghosn, Satya Ortiz-Gagne, Chris Pal, Irina Rish, Bernhard Schölkopf, Abhinav Sharma, Jian Tang, Andrew Williams

Predictions are used to provide personalized recommendations to the individual via an app, as well as to send anonymized messages to the individual's contacts, who use this information to better predict their own infectiousness, an approach we call proactive contact tracing (PCT).

Maximum Moment Restriction for Instrumental Variable Regression

1 code implementation15 Oct 2020 Rui Zhang, Masaaki Imaizumi, Bernhard Schölkopf, Krikamol Muandet

We propose a simple framework for nonlinear instrumental variable (IV) regression based on a kernelized conditional moment restriction (CMR) known as a maximum moment restriction (MMR).
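For reference, the (population) maximum moment restriction objective and its plug-in estimate take the following form; notation follows the kernelized CMR literature, and regularization details specific to the paper are omitted: $R_k(f) = \mathbb{E}\big[(Y - f(X))(Y' - f(X'))\,k(Z, Z')\big]$, estimated on a sample $\{(x_i, y_i, z_i)\}_{i=1}^n$ by $\hat{R}_k(f) = \frac{1}{n^2} \sum_{i,j} (y_i - f(x_i))(y_j - f(x_j))\,k(z_i, z_j)$, where $Z$ denotes the instrument and $(X', Y', Z')$ is an independent copy.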

Function Contrastive Learning of Transferable Meta-Representations

no code implementations14 Oct 2020 Muhammad Waleed Gondal, Shruti Joshi, Nasim Rahaman, Stefan Bauer, Manuel Wüthrich, Bernhard Schölkopf

This meta-representation, which is computed from a few observed examples of the underlying function, is learned jointly with the predictive model.

Contrastive Learning Few-Shot Learning

Physically constrained causal noise models for high-contrast imaging of exoplanets

no code implementations12 Oct 2020 Timothy D. Gebhard, Markus J. Bonse, Sascha P. Quanz, Bernhard Schölkopf

The detection of exoplanets in high-contrast imaging (HCI) data hinges on post-processing methods to remove spurious light from the host star.

CausalWorld: A Robotic Manipulation Benchmark for Causal Structure and Transfer Learning

no code implementations ICLR 2021 Ossama Ahmed, Frederik Träuble, Anirudh Goyal, Alexander Neitz, Yoshua Bengio, Bernhard Schölkopf, Manuel Wüthrich, Stefan Bauer

To facilitate research addressing this problem, we propose CausalWorld, a benchmark for causal structure and transfer learning in a robotic manipulation environment.

Transfer Learning

A survey of algorithmic recourse: definitions, formulations, solutions, and prospects

no code implementations8 Oct 2020 Amir-Hossein Karimi, Gilles Barthe, Bernhard Schölkopf, Isabel Valera

Machine learning is increasingly used to inform decision-making in sensitive situations where decisions have consequential effects on individuals' lives.

Decision Making Fairness

Causal Curiosity: RL Agents Discovering Self-supervised Experiments for Causal Representation Learning

1 code implementation7 Oct 2020 Sumedh A. Sontakke, Arash Mehrjou, Laurent Itti, Bernhard Schölkopf

Inspired by this, we attempt to equip reinforcement learning agents with the ability to perform experiments that facilitate a categorization of the rolled-out trajectories, and to subsequently infer the causal factors of the environment in a hierarchical manner.

Representation Learning Zero-Shot Learning

Function Contrastive Learning of Transferable Representations

no code implementations28 Sep 2020 Muhammad Waleed Gondal, Shruti Joshi, Nasim Rahaman, Stefan Bauer, Manuel Wuthrich, Bernhard Schölkopf

Few-shot learning seeks to find models that are capable of fast adaptation to novel tasks that are not encountered during training.

Contrastive Learning Few-Shot Learning

Learning explanations that are hard to vary

3 code implementations ICLR 2021 Giambattista Parascandolo, Alexander Neitz, Antonio Orvieto, Luigi Gresele, Bernhard Schölkopf

In this paper, we investigate the principle that 'good explanations are hard to vary' in the context of deep learning.

Real-time Prediction of COVID-19 related Mortality using Electronic Health Records

no code implementations31 Aug 2020 Patrick Schwab, Arash Mehrjou, Sonali Parbhoo, Leo Anthony Celi, Jürgen Hetzel, Markus Hofer, Bernhard Schölkopf, Stefan Bauer

Coronavirus Disease 2019 (COVID-19) is an emerging respiratory disease caused by the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) with rapid human-to-human transmission and a high case fatality rate particularly in older patients.

Learning Dynamical Systems using Local Stability Priors

no code implementations23 Aug 2020 Arash Mehrjou, Andrea Iannelli, Bernhard Schölkopf

We propose a coupled computational approach that simultaneously learns a vector field and the region of attraction of an equilibrium point from generated trajectories of the system.

Geometrically Enriched Latent Spaces

no code implementations2 Aug 2020 Georgios Arvanitidis, Søren Hauberg, Bernhard Schölkopf

A common assumption in generative models is that the generator immerses the latent space into a Euclidean ambient space.

A Commentary on the Unsupervised Learning of Disentangled Representations

no code implementations28 Jul 2020 Francesco Locatello, Stefan Bauer, Mario Lucic, Gunnar Rätsch, Sylvain Gelly, Bernhard Schölkopf, Olivier Bachem

The goal of the unsupervised learning of disentangled representations is to separate the independent explanatory factors of variation in the data without access to supervision.

S2RMs: Spatially Structured Recurrent Modules

no code implementations13 Jul 2020 Nasim Rahaman, Anirudh Goyal, Muhammad Waleed Gondal, Manuel Wuthrich, Stefan Bauer, Yash Sharma, Yoshua Bengio, Bernhard Schölkopf

Capturing the structure of a data-generating process by means of appropriate inductive biases can help in learning models that generalize well and are robust to changes in the input distribution.

Video Prediction

Causal Feature Selection via Orthogonal Search

no code implementations6 Jul 2020 Anant Raj, Stefan Bauer, Ashkan Soleymani, Michel Besserve, Bernhard Schölkopf

The problem of inferring the direct causal parents of a response variable among a large set of explanatory variables is of high practical importance in many disciplines.

Causal Discovery Feature Selection

Metrizing Weak Convergence with Maximum Mean Discrepancies

no code implementations16 Jun 2020 Carl-Johann Simon-Gabriel, Alessandro Barp, Bernhard Schölkopf, Lester Mackey

More precisely, we prove that, on a locally compact, non-compact, Hausdorff space, the MMD of a bounded continuous Borel measurable kernel k, whose reproducing kernel Hilbert space (RKHS) functions vanish at infinity, metrizes the weak convergence of probability measures if and only if k is continuous and integrally strictly positive definite (i.s.p.d.).
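For readers unfamiliar with the quantity, the MMD in question is the RKHS distance between kernel mean embeddings (standard definition, not specific to this paper): $\mathrm{MMD}_k(P, Q) = \|\mu_P - \mu_Q\|_{\mathcal{H}_k}$ with $\mu_P = \mathbb{E}_{X \sim P}[k(X, \cdot)]$, and $\mathrm{MMD}_k^2(P, Q) = \mathbb{E}\,k(X, X') + \mathbb{E}\,k(Y, Y') - 2\,\mathbb{E}\,k(X, Y)$ for $X, X' \sim P$ and $Y, Y' \sim Q$ drawn independently.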

Kernel Distributionally Robust Optimization

2 code implementations12 Jun 2020 Jia-Jie Zhu, Wittawat Jitkrittum, Moritz Diehl, Bernhard Schölkopf

We prove a theorem that generalizes the classical duality in the mathematical problem of moments.

Stochastic Optimization

Algorithmic recourse under imperfect causal knowledge: a probabilistic approach

1 code implementation NeurIPS 2020 Amir-Hossein Karimi, Julius von Kügelgen, Bernhard Schölkopf, Isabel Valera

Recent work has discussed the limitations of counterfactual explanations to recommend actions for algorithmic recourse, and argued for the need of taking causal relationships between features into consideration.

Learning to Play Table Tennis From Scratch using Muscular Robots

no code implementations10 Jun 2020 Dieter Büchler, Simon Guist, Roberto Calandra, Vincent Berenz, Bernhard Schölkopf, Jan Peters

This work is the first to (a) learn a safety-critical dynamic task in a fail-safe manner using anthropomorphic robot arms, (b) learn a precision-demanding problem with a PAM-driven system despite the control challenges, and (c) train robots to play table tennis without real balls.

Neural Lyapunov Redesign

1 code implementation6 Jun 2020 Arash Mehrjou, Mohammad Ghavamzadeh, Bernhard Schölkopf

We provide theoretical results on the class of systems that can be treated with the proposed algorithm and empirically evaluate the effectiveness of our method using an exemplary dynamical system.

Learning Kernel Tests Without Data Splitting

1 code implementation NeurIPS 2020 Jonas M. Kübler, Wittawat Jitkrittum, Bernhard Schölkopf, Krikamol Muandet

Modern large-scale kernel-based tests such as maximum mean discrepancy (MMD) and kernelized Stein discrepancy (KSD) optimize kernel hyperparameters on a held-out sample via data splitting to obtain the most powerful test statistics.

A machine learning route between band mapping and band structure

1 code implementation20 May 2020 Rui Patrick Xian, Vincent Stimper, Marios Zacharias, Shuo Dong, Maciej Dendzik, Samuel Beaulieu, Bernhard Schölkopf, Martin Wolf, Laurenz Rettig, Christian Carbogno, Stefan Bauer, Ralph Ernstorfer

A common task in photoemission band mapping is to recover the underlying quasiparticle dispersion, which we call band structure reconstruction.

Data Analysis, Statistics and Probability Materials Science Computational Physics

Necessary and sufficient conditions for causal feature selection in time series with latent common causes

no code implementations18 May 2020 Atalanti A. Mastakouri, Bernhard Schölkopf, Dominik Janzing

We study the identification of direct and indirect causes on time series and provide conditions in the presence of latent variables, which we prove to be necessary and sufficient under some graph constraints.

Feature Selection Time Series

Simpson's paradox in Covid-19 case fatality rates: a mediation analysis of age-related causal effects

1 code implementation14 May 2020 Julius von Kügelgen, Luigi Gresele, Bernhard Schölkopf

We point out limitations and extensions for future work, and, finally, discuss the role of causal reasoning in the broader context of using AI to combat the Covid-19 pandemic.

Applications Methodology

Crackovid: Optimizing Group Testing

no code implementations13 May 2020 Louis Abraham, Gary Bécigneul, Bernhard Schölkopf

We study the problem usually referred to as group testing in the context of COVID-19.

Disentangling Factors of Variations Using Few Labels

no code implementations ICLR Workshop LLD 2019 Francesco Locatello, Michael Tschannen, Stefan Bauer, Gunnar Rätsch, Bernhard Schölkopf, Olivier Bachem

Recently, Locatello et al. (2019) demonstrated that unsupervised disentanglement learning without inductive biases is theoretically impossible and that existing inductive biases and unsupervised methods do not allow one to consistently learn disentangled representations.

Model Selection Representation Learning

Towards causal generative scene models via competition of experts

no code implementations27 Apr 2020 Julius von Kügelgen, Ivan Ustyuzhaninov, Peter Gehler, Matthias Bethge, Bernhard Schölkopf

Learning how to model complex scenes in a modular way with recombinable components is a pre-requisite for higher-order reasoning and acting in the physical world.

Quantifying the Effects of Contact Tracing, Testing, and Containment Measures in the Presence of Infection Hotspots

2 code implementations15 Apr 2020 Lars Lorch, Heiner Kremer, William Trouleau, Stratis Tsirtsis, Aron Szanto, Bernhard Schölkopf, Manuel Gomez-Rodriguez

Multiple lines of evidence strongly suggest that infection hotspots, where a single individual infects many others, play a key role in the transmission dynamics of COVID-19.

Point Processes

A theory of independent mechanisms for extrapolation in generative models

no code implementations1 Apr 2020 Michel Besserve, Rémy Sun, Dominik Janzing, Bernhard Schölkopf

Generative models can be trained to emulate complex empirical data, but are they useful to make predictions in the context of previously unobserved environments?

Worst-Case Risk Quantification under Distributional Ambiguity using Kernel Mean Embedding in Moment Problem

no code implementations31 Mar 2020 Jia-Jie Zhu, Wittawat Jitkrittum, Moritz Diehl, Bernhard Schölkopf

In order to anticipate rare and impactful events, we propose to quantify the worst-case risk under distributional ambiguity using a recent development in kernel methods -- the kernel mean embedding.

SLEIPNIR: Deterministic and Provably Accurate Feature Expansion for Gaussian Process Regression with Derivatives

1 code implementation5 Mar 2020 Emmanouil Angelis, Philippe Wenk, Bernhard Schölkopf, Stefan Bauer, Andreas Krause

Gaussian processes are an important regression tool with excellent analytic properties which allow for direct integration of derivative observations.

Gaussian Processes

MYND: Unsupervised Evaluation of Novel BCI Control Strategies on Consumer Hardware

1 code implementation26 Feb 2020 Matthias R. Hohmann, Lisa Konieczny, Michelle Hackl, Brian Wirth, Talha Zaman, Raffi Enficiaud, Moritz Grosse-Wentrup, Bernhard Schölkopf

We introduce MYND: A framework that couples consumer-grade recording hardware with an easy-to-use application for the unsupervised evaluation of BCI control strategies.

Human-Computer Interaction Neurons and Cognition 68U35 H.5.2

Testing Goodness of Fit of Conditional Density Models with Kernels

1 code implementation24 Feb 2020 Wittawat Jitkrittum, Heishiro Kanagawa, Bernhard Schölkopf

We propose two nonparametric statistical tests of goodness of fit for conditional distributions: given a conditional probability density function $p(y|x)$ and a joint sample, decide whether the sample is drawn from $p(y|x)r_x(x)$ for some density $r_x$.

Two-sample testing

Algorithmic Recourse: from Counterfactual Explanations to Interventions

1 code implementation14 Feb 2020 Amir-Hossein Karimi, Bernhard Schölkopf, Isabel Valera

As machine learning is increasingly used to inform consequential decision-making (e.g., pre-trial bail and loan approval), it becomes important to explain how the system arrived at its decision, and also suggest actions to achieve a favorable decision.

Decision Making

Weakly-Supervised Disentanglement Without Compromises

2 code implementations ICML 2020 Francesco Locatello, Ben Poole, Gunnar Rätsch, Bernhard Schölkopf, Olivier Bachem, Michael Tschannen

Third, we perform a large-scale empirical study and show that such pairs of observations are sufficient to reliably learn disentangled representations on several benchmark data sets.

Fairness

Selecting causal brain features with a single conditional independence test per feature

no code implementations NeurIPS 2019 Atalanti Mastakouri, Bernhard Schölkopf, Dominik Janzing

We propose a constraint-based causal feature selection method for identifying causes of a given target variable, selecting from a set of candidate variables, while there can also be hidden variables acting as common causes with the target.

Feature Selection

Perceiving the arrow of time in autoregressive motion

no code implementations NeurIPS 2019 Kristof Meding, Dominik Janzing, Bernhard Schölkopf, Felix A. Wichmann

We employ a so-called frozen noise paradigm enabling us to compare human performance with four different algorithms on a trial-by-trial basis: A causal inference algorithm exploiting the dependence structure of additive noise terms, a neurally inspired network, a Bayesian ideal observer model as well as a simple heuristic.

Causal Inference Time Series

Causality for Machine Learning

no code implementations24 Nov 2019 Bernhard Schölkopf

Graphical causal inference as pioneered by Judea Pearl arose from research on artificial intelligence (AI), and for a long time had little connection to the field of machine learning.

Causal Inference

Kernel-Guided Training of Implicit Generative Models with Stability Guarantees

no code implementations29 Oct 2019 Arash Mehrjou, Wittawat Jitkrittum, Krikamol Muandet, Bernhard Schölkopf

Modern implicit generative models such as generative adversarial networks (GANs) are generally known to suffer from issues such as instability, uninterpretability, and difficulty in assessing their performance.

Kernel Stein Tests for Multiple Model Comparison

3 code implementations NeurIPS 2019 Jen Ning Lim, Makoto Yamada, Bernhard Schölkopf, Wittawat Jitkrittum

The first test, building on the post selection inference framework, provably controls the number of best models that are wrongly declared worse (false positive rate).

Learning Neural Causal Models from Unknown Interventions

2 code implementations2 Oct 2019 Nan Rosemary Ke, Olexa Bilaniuk, Anirudh Goyal, Stefan Bauer, Hugo Larochelle, Bernhard Schölkopf, Michael C. Mozer, Chris Pal, Yoshua Bengio

Promising results have driven a recent surge of interest in continuous optimization methods for Bayesian network structure learning from observational data.

Meta-Learning

Recurrent Independent Mechanisms

4 code implementations ICLR 2021 Anirudh Goyal, Alex Lamb, Jordan Hoffmann, Shagun Sodhani, Sergey Levine, Yoshua Bengio, Bernhard Schölkopf

Learning modular structures which reflect the dynamics of the environment can lead to better generalization and robustness to changes which only affect a few of the underlying causes.

Multidimensional Contrast Limited Adaptive Histogram Equalization

1 code implementation26 Jun 2019 Vincent Stimper, Stefan Bauer, Ralph Ernstorfer, Bernhard Schölkopf, R. Patrick Xian

Contrast enhancement is an important preprocessing technique for improving the performance of downstream tasks in image processing and computer vision.

Disentangled State Space Representations

no code implementations7 Jun 2019 Đorđe Miladinović, Muhammad Waleed Gondal, Bernhard Schölkopf, Joachim M. Buhmann, Stefan Bauer

Sequential data often originates from diverse domains across which statistical regularities and domain specifics exist.

Transfer Learning

On the Fairness of Disentangled Representations

no code implementations NeurIPS 2019 Francesco Locatello, Gabriele Abbati, Tom Rainforth, Stefan Bauer, Bernhard Schölkopf, Olivier Bachem

Recently there has been a significant interest in learning disentangled representations, as they promise increased interpretability, generalization to unseen scenarios and faster learning on downstream tasks.

Fairness

Quantum Mean Embedding of Probability Distributions

no code implementations31 May 2019 Jonas M. Kübler, Krikamol Muandet, Bernhard Schölkopf

The kernel mean embedding of probability distributions is commonly used in machine learning as an injective mapping from distributions to functions in an infinite dimensional Hilbert space.

Semi-Supervised Learning, Causality and the Conditional Cluster Assumption

1 code implementation28 May 2019 Julius von Kügelgen, Alexander Mey, Marco Loog, Bernhard Schölkopf

While the success of semi-supervised learning (SSL) is still not fully understood, Schölkopf et al. (2012) have established a link to the principle of independent causal mechanisms.

Optimal Decision Making Under Strategic Behavior

1 code implementation22 May 2019 Stratis Tsirtsis, Behzad Tabibian, Moein Khajehnejad, Adish Singla, Bernhard Schölkopf, Manuel Gomez-Rodriguez

Using this characterization, we first show that, in general, we cannot expect to find optimal decision policies in polynomial time and there are cases in which deterministic policies are suboptimal.

Decision Making

The Incomplete Rosetta Stone Problem: Identifiability Results for Multi-View Nonlinear ICA

no code implementations16 May 2019 Luigi Gresele, Paul K. Rubenstein, Arash Mehrjou, Francesco Locatello, Bernhard Schölkopf

In contrast to known identifiability results for nonlinear ICA, we prove that independent latent sources with arbitrary mixing can be recovered as long as multiple, sufficiently different noisy views are available.

Kernel Mean Matching for Content Addressability of GANs

1 code implementation14 May 2019 Wittawat Jitkrittum, Patsorn Sangkloy, Muhammad Waleed Gondal, Amit Raj, James Hays, Bernhard Schölkopf

We propose a novel procedure which adds "content-addressability" to any given unconditional implicit model, e.g., a generative adversarial network (GAN).

Image Generation

Consequential Ranking Algorithms and Long-term Welfare

no code implementations13 May 2019 Behzad Tabibian, Vicenç Gómez, Abir De, Bernhard Schölkopf, Manuel Gomez Rodriguez

Can we design ranking models that understand the consequences of their proposed rankings and, more importantly, are able to avoid the undesirable ones?

Misinformation

Disentangling Factors of Variation Using Few Labels

no code implementations3 May 2019 Francesco Locatello, Michael Tschannen, Stefan Bauer, Gunnar Rätsch, Bernhard Schölkopf, Olivier Bachem

Recently, Locatello et al. (2019) demonstrated that unsupervised disentanglement learning without inductive biases is theoretically impossible and that existing inductive biases and unsupervised methods do not allow one to consistently learn disentangled representations.

Model Selection Representation Learning

Adversarial Vulnerability of Neural Networks Increases with Input Dimension

no code implementations ICLR 2019 Carl-Johann Simon-Gabriel, Yann Ollivier, Léon Bottou, Bernhard Schölkopf, David Lopez-Paz

Over the past four years, neural networks have been proven vulnerable to adversarial images: targeted but imperceptible image perturbations lead to drastically different predictions.

Convolutional neural networks: a magic bullet for gravitational-wave detection?

2 code implementations18 Apr 2019 Timothy D. Gebhard, Niki Kilbertus, Ian Harry, Bernhard Schölkopf

In the last few years, machine learning techniques, in particular convolutional neural networks, have been investigated as a method to replace or complement traditional matched filtering techniques that are used to detect the gravitational-wave signature of merging black holes.

Gravitational Wave Detection

From Variational to Deterministic Autoencoders

3 code implementations ICLR 2020 Partha Ghosh, Mehdi S. M. Sajjadi, Antonio Vergari, Michael Black, Bernhard Schölkopf

Variational Autoencoders (VAEs) provide a theoretically-backed and popular framework for deep generative models.

Density Estimation

Learning from Samples of Variable Quality

no code implementations ICLR Workshop LLD 2019 Mostafa Dehghani, Arash Mehrjou, Stephan Gouws, Jaap Kamps, Bernhard Schölkopf

Training labels are expensive to obtain and may be of varying quality, as some may be from trusted expert labelers while others might be from heuristics or other sources of weak supervision such as crowd-sourcing.

Orthogonal Structure Search for Efficient Causal Discovery from Observational Data

no code implementations6 Mar 2019 Anant Raj, Luigi Gresele, Michel Besserve, Bernhard Schölkopf, Stefan Bauer

The problem of inferring the direct causal parents of a response variable among a large set of explanatory variables is of high practical importance in many disciplines.

Causal Discovery

Causal Discovery from Heterogeneous/Nonstationary Data with Independent Changes

no code implementations5 Mar 2019 Biwei Huang, Kun Zhang, Jiji Zhang, Joseph Ramsey, Ruben Sanchez-Romero, Clark Glymour, Bernhard Schölkopf

In this paper, we develop a framework for causal discovery from such data, called Constraint-based causal Discovery from heterogeneous/NOnstationary Data (CD-NOD), to find causal skeleton and directions and estimate the properties of mechanism changes.

Causal Discovery

ODIN: ODE-Informed Regression for Parameter and State Inference in Time-Continuous Dynamical Systems

2 code implementations17 Feb 2019 Philippe Wenk, Gabriele Abbati, Michael A. Osborne, Bernhard Schölkopf, Andreas Krause, Stefan Bauer

Parameter inference in ordinary differential equations is an important problem in many applied sciences and in engineering, especially in a data-scarce setting.

Gaussian Processes Model Selection

Bayesian Online Prediction of Change Points

1 code implementation12 Feb 2019 Diego Agudelo-España, Sebastian Gomez-Gonzalez, Stefan Bauer, Bernhard Schölkopf, Jan Peters

Online detection of instantaneous changes in the generative process of a data sequence generally focuses on retrospective inference of such change points without considering their future occurrences.

Bayesian Inference Change Point Detection

Fair Decisions Despite Imperfect Predictions

1 code implementation8 Feb 2019 Niki Kilbertus, Manuel Gomez-Rodriguez, Bernhard Schölkopf, Krikamol Muandet, Isabel Valera

In this paper, we show that in this selective labels setting, learning a predictor directly only from available labeled data is suboptimal in terms of both fairness and utility.

Causal Inference Decision Making +1

GeNet: Deep Representations for Metagenomics

2 code implementations30 Jan 2019 Mateo Rojas-Carulla, Ilya Tolstikhin, Guillermo Luque, Nicholas Youngblut, Ruth Ley, Bernhard Schölkopf

We introduce GeNet, a method for shotgun metagenomic classification from raw DNA sequences that exploits the known hierarchical structure between labels for training.

General Classification

Kernel-Guided Training of Implicit Generative Models with Stability Guarantees

no code implementations26 Jan 2019 Arash Mehrjou, Wittawat Jitkrittum, Krikamol Muandet, Bernhard Schölkopf

Modern implicit generative models such as generative adversarial networks (GANs) are generally known to suffer from issues such as instability, uninterpretability, and difficulty in assessing their performance.

Deconfounding Reinforcement Learning in Observational Settings

1 code implementation26 Dec 2018 Chaochao Lu, Bernhard Schölkopf, José Miguel Hernández-Lobato

Using this benchmark, we demonstrate that the proposed algorithms are superior to traditional RL methods in confounded environments with observational data.

OpenAI Gym

Counterfactuals uncover the modular structure of deep generative models

no code implementations ICLR 2020 Michel Besserve, Arash Mehrjou, Rémy Sun, Bernhard Schölkopf

Deep generative models can emulate the perceptual properties of complex image datasets, providing a latent representation of the data.

Style Transfer

Generalization in anti-causal learning

no code implementations3 Dec 2018 Niki Kilbertus, Giambattista Parascandolo, Bernhard Schölkopf

Anti-causal models are used to drive this search, but a causal model is required for validation.

Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations

6 code implementations ICML 2019 Francesco Locatello, Stefan Bauer, Mario Lucic, Gunnar Rätsch, Sylvain Gelly, Bernhard Schölkopf, Olivier Bachem

The key idea behind the unsupervised learning of disentangled representations is that real-world data is generated by a few explanatory factors of variation which can be recovered by unsupervised learning algorithms.

Representation Learning

Deep Nonlinear Non-Gaussian Filtering for Dynamical Systems

no code implementations14 Nov 2018 Arash Mehrjou, Bernhard Schölkopf

Filtering is a general name for inferring the states of a dynamical system given observations.

Robustly Disentangled Causal Mechanisms: Validating Deep Representations for Interventional Robustness

no code implementations31 Oct 2018 Raphael Suter, Đorđe Miladinović, Bernhard Schölkopf, Stefan Bauer

The ability to learn disentangled representations that split underlying sources of variation in high dimensional, unstructured data is important for data efficient and robust use of neural networks.

Representation Learning

Informative Features for Model Comparison

3 code implementations NeurIPS 2018 Wittawat Jitkrittum, Heishiro Kanagawa, Patsorn Sangkloy, James Hays, Bernhard Schölkopf, Arthur Gretton

Given two candidate models, and a set of target observations, we address the problem of measuring the relative goodness of fit of the two models.

Adaptation and Robust Learning of Probabilistic Movement Primitives

1 code implementation31 Aug 2018 Sebastian Gomez-Gonzalez, Gerhard Neumann, Bernhard Schölkopf, Jan Peters

However, to be able to capture variability and correlations between different joints, a probabilistic movement primitive requires the estimation of a larger number of parameters compared to its deterministic counterparts, which focus on modeling only the mean behavior.

The Unreasonable Effectiveness of Texture Transfer for Single Image Super-resolution

1 code implementation31 Jul 2018 Muhammad Waleed Gondal, Bernhard Schölkopf, Michael Hirsch

Moreover, we show that a texture representation of those deep features captures the perceptual quality of an image better than the original deep features do.

General Classification Image Reconstruction +1

Perceptual Video Super Resolution with Enhanced Temporal Consistency

no code implementations20 Jul 2018 Eduardo Pérez-Pellitero, Mehdi S. M. Sajjadi, Michael Hirsch, Bernhard Schölkopf

Together with a video discriminator, we also propose additional loss functions to further reinforce temporal consistency in the generated sequences.

Image Super-Resolution Video Super-Resolution

Robustifying Independent Component Analysis by Adjusting for Group-Wise Stationary Noise

3 code implementations4 Jun 2018 Niklas Pfister, Sebastian Weichwald, Peter Bühlmann, Bernhard Schölkopf

We introduce coroICA, confounding-robust independent component analysis, a novel ICA algorithm which decomposes linearly mixed multivariate observations into independent components that are corrupted (and rendered dependent) by hidden group-wise stationary confounding.

Causal Inference EEG

A Local Information Criterion for Dynamical Systems

no code implementations27 May 2018 Arash Mehrjou, Friedrich Solowjow, Sebastian Trimpe, Bernhard Schölkopf

Apart from its application for encoding a sequence of observations, we propose to use the compression achieved by this encoding as a criterion for model selection.

Model Selection

Deep Energy Estimator Networks

1 code implementation21 May 2018 Saeed Saremi, Arash Mehrjou, Bernhard Schölkopf, Aapo Hyvärinen

We present the utility of DEEN in learning the energy and the score function, and in single-step denoising experiments for synthetic and high-dimensional data.

Denoising Density Estimation

Automatic Estimation of Modulation Transfer Functions

1 code implementation4 May 2018 Matthias Bauer, Valentin Volchkov, Michael Hirsch, Bernhard Schölkopf

The modulation transfer function (MTF) is widely used to characterise the performance of optical systems.

Competitive Training of Mixtures of Independent Deep Generative Models

no code implementations30 Apr 2018 Francesco Locatello, Damien Vincent, Ilya Tolstikhin, Gunnar Rätsch, Sylvain Gelly, Bernhard Schölkopf

A common assumption in causal modeling posits that the data is generated by a set of independent mechanisms, and algorithms should aim to recover this structure.

On Matching Pursuit and Coordinate Descent

no code implementations ICML 2018 Francesco Locatello, Anant Raj, Sai Praneeth Karimireddy, Gunnar Rätsch, Bernhard Schölkopf, Sebastian U. Stich, Martin Jaggi

Exploiting the connection between the two algorithms, we present a unified analysis of both, providing affine invariant sublinear $\mathcal{O}(1/t)$ rates on smooth objectives and linear convergence on strongly convex objectives.
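A minimal sketch contrasting the two update rules on a least-squares objective (illustrative only; the dictionary, sparsity pattern, and iteration count are arbitrary assumptions, and the paper's analysis covers a more general setting):

import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 100))
A /= np.linalg.norm(A, axis=0)              # unit-norm atoms / coordinates
x_true = np.zeros(100)
x_true[[3, 17, 42]] = [1.0, -2.0, 0.5]
b = A @ x_true

x = np.zeros(100)
for t in range(200):
    grad = A.T @ (A @ x - b)                # gradient of 0.5 * ||A x - b||^2
    i = int(np.argmax(np.abs(grad)))        # matching pursuit: greedy (steepest) coordinate
    # coordinate descent would instead pick i cyclically or uniformly at random
    x[i] -= grad[i]                         # exact minimization along coordinate i (atoms have unit norm)
print("residual norm:", np.linalg.norm(A @ x - b))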

Coordinating users of shared facilities via data-driven predictive assistants and game theory

no code implementations16 Mar 2018 Philipp Geiger, Michel Besserve, Justus Winkelmann, Claudius Proissl, Bernhard Schölkopf

We study data-driven assistants that provide congestion forecasts to users of shared facilities (roads, cafeterias, etc.).

Time Series

Adversarial Extreme Multi-label Classification

1 code implementation5 Mar 2018 Rohit Babbar, Bernhard Schölkopf

The goal in extreme multi-label classification is to learn a classifier which can assign a small subset of relevant labels to an instance from an extremely large set of target labels.

Extreme Multi-Label Classification General Classification +1

Analysis of cause-effect inference by comparing regression errors

no code implementations19 Feb 2018 Patrick Blöbaum, Dominik Janzing, Takashi Washio, Shohei Shimizu, Bernhard Schölkopf

We address the problem of inferring the causal direction between two variables by comparing the least-squares errors of the predictions in both possible directions.

Causal Inference

Tempered Adversarial Networks

no code implementations ICML 2018 Mehdi S. M. Sajjadi, Giambattista Parascandolo, Arash Mehrjou, Bernhard Schölkopf

A possible explanation for training instabilities is the inherent imbalance between the networks: While the discriminator is trained directly on both real and fake samples, the generator only has control over the fake samples it produces since the real data distribution is fixed by the choice of a given dataset.

First-order Adversarial Vulnerability of Neural Networks and Input Dimension

1 code implementation ICLR 2019 Carl-Johann Simon-Gabriel, Yann Ollivier, Léon Bottou, Bernhard Schölkopf, David Lopez-Paz

Over the past few years, neural networks were proven vulnerable to adversarial images: targeted but imperceptible image perturbations lead to drastically different predictions.

Learning Independent Causal Mechanisms

1 code implementation ICML 2018 Giambattista Parascandolo, Niki Kilbertus, Mateo Rojas-Carulla, Bernhard Schölkopf

The approach is unsupervised and based on a set of experts that compete for data generated by the mechanisms, driving specialization.

Transfer Learning

Fidelity-Weighted Learning

no code implementations ICLR 2018 Mostafa Dehghani, Arash Mehrjou, Stephan Gouws, Jaap Kamps, Bernhard Schölkopf

To this end, we propose "fidelity-weighted learning" (FWL), a semi-supervised student-teacher approach for training deep neural networks using weakly-labeled data.

Ad-Hoc Information Retrieval Information Retrieval

Differentially Private Database Release via Kernel Mean Embeddings

1 code implementation ICML 2018 Matej Balog, Ilya Tolstikhin, Bernhard Schölkopf

First, releasing (an estimate of) the kernel mean embedding of the data generating random variable instead of the database itself still allows third-parties to construct consistent estimators of a wide class of population statistics.

Learning Blind Motion Deblurring

1 code implementation ICCV 2017 Patrick Wieschollek, Michael Hirsch, Bernhard Schölkopf, Hendrik P. A. Lensch

As handheld video cameras are now commonplace and available in every smartphone, images and videos can be recorded almost everywhere at anytime.

Deblurring

Avoiding Discrimination through Causal Reasoning

no code implementations NeurIPS 2017 Niki Kilbertus, Mateo Rojas-Carulla, Giambattista Parascandolo, Moritz Hardt, Dominik Janzing, Bernhard Schölkopf

Going beyond observational criteria, we frame the problem of discrimination based on protected attributes in the language of causal reasoning.

Fairness

Annealed Generative Adversarial Networks

no code implementations21 May 2017 Arash Mehrjou, Bernhard Schölkopf, Saeed Saremi

We introduce a novel framework for adversarial training where the target distribution is annealed between the uniform distribution and the data distribution.

Group invariance principles for causal generative models

no code implementations5 May 2017 Michel Besserve, Naji Shajarisales, Bernhard Schölkopf, Dominik Janzing

The postulate of independence of cause and mechanism (ICM) has recently led to several new causal discovery algorithms.

Causal Discovery

Discriminative Transfer Learning for General Image Restoration

no code implementations27 Mar 2017 Lei Xiao, Felix Heide, Wolfgang Heidrich, Bernhard Schölkopf, Michael Hirsch

Recently, several discriminative learning approaches have been proposed for effective image restoration, achieving a convincing trade-off between image quality and computational efficiency.

Deblurring Demosaicking +2

AdaGAN: Boosting Generative Models

1 code implementation NeurIPS 2017 Ilya Tolstikhin, Sylvain Gelly, Olivier Bousquet, Carl-Johann Simon-Gabriel, Bernhard Schölkopf

Generative Adversarial Networks (GAN) (Goodfellow et al., 2014) are an effective method for training generative models of complex data such as natural images.

Local Group Invariant Representations via Orbit Embeddings

no code implementations6 Dec 2016 Anant Raj, Abhishek Kumar, Youssef Mroueh, P. Thomas Fletcher, Bernhard Schölkopf

We consider transformations that form a group and propose an approach based on kernel methods to derive local group invariant representations.

Rotated MNIST

Minimax Estimation of Maximum Mean Discrepancy with Radial Kernels

no code implementations NeurIPS 2016 Ilya O. Tolstikhin, Bharath K. Sriperumbudur, Bernhard Schölkopf

Maximum Mean Discrepancy (MMD) is a distance on the space of probability measures which has found numerous applications in machine learning and nonparametric testing.

Distilling Information Reliability and Source Trustworthiness from Digital Traces

no code implementations24 Oct 2016 Behzad Tabibian, Isabel Valera, Mehrdad Farajtabar, Le Song, Bernhard Schölkopf, Manuel Gomez-Rodriguez

Then, we propose a temporal point process modeling framework that links these temporal traces to robust, unbiased and interpretable notions of information reliability and source trustworthiness.

Screening Rules for Convex Problems

no code implementations23 Sep 2016 Anant Raj, Jakob Olbrich, Bernd Gärtner, Bernhard Schölkopf, Martin Jaggi

We propose a new framework for deriving screening rules for convex optimization problems.

Depth Estimation Through a Generative Model of Light Field Synthesis

no code implementations6 Sep 2016 Mehdi S. M. Sajjadi, Rolf Köhler, Bernhard Schölkopf, Michael Hirsch

Light field photography captures rich structural information that may facilitate a number of traditional image processing and computer vision tasks.

Depth Estimation

Experimental and causal view on information integration in autonomous agents

no code implementations14 Jun 2016 Philipp Geiger, Katja Hofmann, Bernhard Schölkopf

The amount of digitally available but heterogeneous information about the world is remarkable, and new technologies such as self-driving cars, smart homes, or the internet of things may further increase it.

Decision Making Self-Driving Cars +1

Kernel Mean Embedding of Distributions: A Review and Beyond

no code implementations31 May 2016 Krikamol Muandet, Kenji Fukumizu, Bharath Sriperumbudur, Bernhard Schölkopf

Next, we discuss the Hilbert space embedding for conditional distributions, give theoretical insights, and review some applications.

Causal Discovery Two-sample testing
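As a small illustration of the conditional embedding machinery surveyed in the review above, the following sketch estimates a conditional expectation E[g(Y) | X = x] with the standard regularized conditional mean embedding estimator (a textbook construction; the kernel, bandwidth, regularizer, and toy data are arbitrary assumptions):

import numpy as np

def gauss_kernel(A, B, sigma=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

rng = np.random.default_rng(0)
n = 500
x = rng.uniform(-3.0, 3.0, size=(n, 1))
y = np.sin(x) + 0.1 * rng.standard_normal((n, 1))       # Y = sin(X) + noise

lam = 1e-3
K = gauss_kernel(x, x)
x_query = np.array([[1.0]])
# Weights beta(x*) = (K + n*lam*I)^{-1} k_X(x*); then E[g(Y) | X = x*] ~ sum_i beta_i g(y_i).
beta = np.linalg.solve(K + n * lam * np.eye(n), gauss_kernel(x, x_query))
estimate = (y * beta).sum()                             # here g is the identity, i.e. E[Y | X = 1]
print("estimate:", estimate, "ground truth:", np.sin(1.0))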