3 code implementations • ICLR 2021 • Giambattista Parascandolo, Alexander Neitz, Antonio Orvieto, Luigi Gresele, Bernhard Schölkopf
In this paper, we investigate the principle that "good explanations are hard to vary" in the context of deep learning.
18 code implementations • CVPR 2022 • Karsten Roth, Latha Pemula, Joaquin Zepeda, Bernhard Schölkopf, Thomas Brox, Peter Gehler
Being able to spot defective parts is a critical component in large-scale industrial manufacturing.
1 code implementation • 10 Nov 2023 • Weiyang Liu, Zeju Qiu, Yao Feng, Yuliang Xiu, Yuxuan Xue, Longhui Yu, Haiwen Feng, Zhen Liu, Juyeon Heo, Songyou Peng, Yandong Wen, Michael J. Black, Adrian Weller, Bernhard Schölkopf
We apply this parameterization to OFT, creating a novel parameter-efficient finetuning method, called Orthogonal Butterfly (BOFT).
1 code implementation • 27 Jan 2023 • Flavio Schneider, Ojasv Kamal, Zhijing Jin, Bernhard Schölkopf
Recent years have seen the rapid development of large generative models for text; however, much less research has explored the connection between text and another "language" of communication -- music.
4 code implementations • ICLR 2020 • Partha Ghosh, Mehdi S. M. Sajjadi, Antonio Vergari, Michael Black, Bernhard Schölkopf
Variational Autoencoders (VAEs) provide a theoretically-backed and popular framework for deep generative models.
8 code implementations • ICML 2019 • Francesco Locatello, Stefan Bauer, Mario Lucic, Gunnar Rätsch, Sylvain Gelly, Bernhard Schölkopf, Olivier Bachem
The key idea behind the unsupervised learning of disentangled representations is that real-world data is generated by a few explanatory factors of variation which can be recovered by unsupervised learning algorithms.
3 code implementations • ICML 2020 • Francesco Locatello, Ben Poole, Gunnar Rätsch, Bernhard Schölkopf, Olivier Bachem, Michael Tschannen
Third, we perform a large-scale empirical study and show that such pairs of observations are sufficient to reliably learn disentangled representations on several benchmark data sets.
2 code implementations • 20 Jul 2022 • Cian Eastwood, Alexander Robey, Shashank Singh, Julius von Kügelgen, Hamed Hassani, George J. Pappas, Bernhard Schölkopf
By minimizing the $\alpha$-quantile of a predictor's risk distribution over domains, QRM seeks predictors that perform well with probability $\alpha$.
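The entry above states the QRM objective compactly; a minimal pure-Python sketch of the idea (hypothetical names and a nearest-rank quantile, not the paper's implementation) might look like:

```python
# Illustrative sketch of Quantile Risk Minimization (QRM): instead of the
# average or worst-case risk across training domains, score each predictor
# by the alpha-quantile of its per-domain risk distribution.

def alpha_quantile(values, alpha):
    """Empirical alpha-quantile of a list of risks (nearest-rank rule)."""
    ordered = sorted(values)
    idx = min(int(alpha * len(ordered)), len(ordered) - 1)
    return ordered[idx]

def qrm_select(per_domain_risks, alpha=0.9):
    """Pick the predictor whose alpha-quantile risk over domains is smallest.

    per_domain_risks: dict mapping predictor name -> list of risks, one per domain.
    """
    return min(per_domain_risks,
               key=lambda p: alpha_quantile(per_domain_risks[p], alpha))

risks = {
    "robust": [0.20, 0.22, 0.21, 0.23],   # consistent across domains
    "average": [0.05, 0.10, 0.08, 0.60],  # great on average, poor tail
}
print(qrm_select(risks, alpha=0.9))  # -> "robust": its 0.9-quantile risk is lower
```

At $\alpha = 0.9$ the tail-consistent predictor wins even though its mean risk is higher, which is the behavioral difference from empirical risk minimization.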
3 code implementations • 3 Aug 2022 • Laurence Illing Midgley, Vincent Stimper, Gregor N. C. Simm, Bernhard Schölkopf, José Miguel Hernández-Lobato
Normalizing flows are tractable density models that can approximate complicated target distributions, e.g., Boltzmann distributions of physical systems.
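To make the target concrete: a Boltzmann distribution assigns unnormalized density $\exp(-E(x)/T)$ to a state $x$ with energy $E(x)$. A tiny illustrative sketch (the double-well energy and function names are made up for illustration, not taken from the paper):

```python
import math

# A Boltzmann distribution has unnormalized density exp(-E(x)/T); flow-based
# samplers are trained so their tractable density matches such a target.

def boltzmann_unnorm(energy, x, temperature=1.0):
    """Unnormalized Boltzmann density of state x under the given energy."""
    return math.exp(-energy(x) / temperature)

def double_well(x):
    # simple 1-D double-well energy with minima at x = +/-1 (illustrative)
    return (x**2 - 1.0) ** 2

print(boltzmann_unnorm(double_well, 1.0))  # 1.0: a well bottom has maximal density
print(boltzmann_unnorm(double_well, 0.0))  # exp(-1) ~ 0.368: the barrier is less likely
```

The multimodality of such targets (two separated wells) is exactly what makes them hard for standard flow training, which motivates the annealed-importance-style objectives studied in this line of work.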
1 code implementation • 26 Jan 2023 • Vincent Stimper, David Liu, Andrew Campbell, Vincent Berenz, Lukas Ryll, Bernhard Schölkopf, José Miguel Hernández-Lobato
It allows building normalizing flow models from a suite of base distributions, flow layers, and neural networks.
2 code implementations • ICCV 2023 • Yandong Wen, Weiyang Liu, Yao Feng, Bhiksha Raj, Rita Singh, Adrian Weller, Michael J. Black, Bernhard Schölkopf
In this paper, we focus on a general yet important learning problem, pairwise similarity learning (PSL).
1 code implementation • ICLR 2021 • Ossama Ahmed, Frederik Träuble, Anirudh Goyal, Alexander Neitz, Yoshua Bengio, Bernhard Schölkopf, Manuel Wüthrich, Stefan Bauer
To facilitate research addressing this problem, we propose CausalWorld, a benchmark for causal structure and transfer learning in a robotic manipulation environment.
4 code implementations • ICCV 2017 • Mehdi S. M. Sajjadi, Bernhard Schölkopf, Michael Hirsch
Single image super-resolution is the task of inferring a high-resolution image from a single low-resolution input.
3 code implementations • ICLR 2021 • Anirudh Goyal, Alex Lamb, Jordan Hoffmann, Shagun Sodhani, Sergey Levine, Yoshua Bengio, Bernhard Schölkopf
Learning modular structures which reflect the dynamics of the environment can lead to better generalization and robustness to changes which only affect a few of the underlying causes.
1 code implementation • ICCV 2017 • Patrick Wieschollek, Michael Hirsch, Bernhard Schölkopf, Hendrik P. A. Lensch
As handheld video cameras are now commonplace and available in every smartphone, images and videos can be recorded almost everywhere at anytime.
2 code implementations • 2 Oct 2019 • Nan Rosemary Ke, Olexa Bilaniuk, Anirudh Goyal, Stefan Bauer, Hugo Larochelle, Bernhard Schölkopf, Michael C. Mozer, Chris Pal, Yoshua Bengio
Promising results have driven a recent surge of interest in continuous optimization methods for Bayesian network structure learning from observational data.
1 code implementation • 6 Sep 2021 • Nino Scherrer, Olexa Bilaniuk, Yashas Annadani, Anirudh Goyal, Patrick Schwab, Bernhard Schölkopf, Michael C. Mozer, Yoshua Bengio, Stefan Bauer, Nan Rosemary Ke
Discovering causal structures from data is a challenging inference problem of fundamental importance in all areas of science.
2 code implementations • 14 Feb 2020 • Amir-Hossein Karimi, Bernhard Schölkopf, Isabel Valera
As machine learning is increasingly used to inform consequential decision-making (e.g., pre-trial bail and loan approval), it becomes important to explain how the system arrived at its decision, and also suggest actions to achieve a favorable decision.
1 code implementation • NeurIPS 2023 • Zhijing Jin, Yuen Chen, Felix Leeb, Luigi Gresele, Ojasv Kamal, Zhiheng Lyu, Kevin Blin, Fernando Gonzalez Adauto, Max Kleiman-Weiner, Mrinmaya Sachan, Bernhard Schölkopf
Much of the existing work in natural language processing (NLP) focuses on evaluating commonsense causal reasoning in LLMs, thus failing to assess whether a model can perform causal inference in accordance with a set of well-defined formal rules.
1 code implementation • NeurIPS 2017 • Ilya Tolstikhin, Sylvain Gelly, Olivier Bousquet, Carl-Johann Simon-Gabriel, Bernhard Schölkopf
Generative Adversarial Networks (GAN) (Goodfellow et al., 2014) are an effective method for training generative models of complex data such as natural images.
2 code implementations • 15 Apr 2020 • Lars Lorch, Heiner Kremer, William Trouleau, Stratis Tsirtsis, Aron Szanto, Bernhard Schölkopf, Manuel Gomez-Rodriguez
Multiple lines of evidence strongly suggest that infection hotspots, where a single individual infects many others, play a key role in the transmission dynamics of COVID-19.
1 code implementation • 9 Jun 2023 • Zhijing Jin, Jiarui Liu, Zhiheng Lyu, Spencer Poff, Mrinmaya Sachan, Rada Mihalcea, Mona Diab, Bernhard Schölkopf
In this work, we propose the first benchmark dataset to test the pure causal inference skills of large language models (LLMs).
4 code implementations • NeurIPS 2019 • Muhammad Waleed Gondal, Manuel Wüthrich, Đorđe Miladinović, Francesco Locatello, Martin Breidt, Valentin Volchkov, Joel Akpo, Olivier Bachem, Bernhard Schölkopf, Stefan Bauer
Learning meaningful and compact representations with disentangled semantic aspects is considered to be of key importance in representation learning.
1 code implementation • 1 Jul 2021 • Andrea Dittadi, Samuele Papa, Michele De Vita, Bernhard Schölkopf, Ole Winther, Francesco Locatello
The idea behind object-centric representation learning is that natural scenes can be better modeled as compositions of objects and their relations than as distributed representations.
2 code implementations • 28 Feb 2022 • Zhijing Jin, Abhinav Lalwani, Tejas Vaidhya, Xiaoyu Shen, Yiwen Ding, Zhiheng Lyu, Mrinmaya Sachan, Rada Mihalcea, Bernhard Schölkopf
In this paper, we propose the task of logical fallacy detection, and provide a new dataset (Logic) of logical fallacies generally found in text, together with an additional challenge set for detecting logical fallacies in climate change claims (LogicClimate).
3 code implementations • 29 Sep 2022 • Maximilian Seitzer, Max Horn, Andrii Zadaianchuk, Dominik Zietlow, Tianjun Xiao, Carl-Johann Simon-Gabriel, Tong He, Zheng Zhang, Bernhard Schölkopf, Thomas Brox, Francesco Locatello
Humans naturally decompose their environment into entities at the appropriate level of abstraction to act in the world.
2 code implementations • 14 Jun 2020 • Frederik Träuble, Elliot Creager, Niki Kilbertus, Francesco Locatello, Andrea Dittadi, Anirudh Goyal, Bernhard Schölkopf, Stefan Bauer
The focus of disentanglement approaches has been on identifying independent factors of variation in data.
1 code implementation • CVPR 2022 • Hanlin Zhang, Yi-Fan Zhang, Weiyang Liu, Adrian Weller, Bernhard Schölkopf, Eric P. Xing
To tackle this challenge, we first formalize the OOD generalization problem as constrained optimization, called Disentanglement-constrained Domain Generalization (DDG).
1 code implementation • 11 Nov 2015 • David Lopez-Paz, Léon Bottou, Bernhard Schölkopf, Vladimir Vapnik
Distillation (Hinton et al., 2015) and privileged information (Vapnik & Izmailov, 2015) are two techniques that enable machines to learn from other machines.
1 code implementation • 26 Dec 2018 • Chaochao Lu, Bernhard Schölkopf, José Miguel Hernández-Lobato
Using this benchmark, we demonstrate that the proposed algorithms are superior to traditional RL methods in confounded environments with observational data.
1 code implementation • 26 Jun 2019 • Vincent Stimper, Stefan Bauer, Ralph Ernstorfer, Bernhard Schölkopf, R. Patrick Xian
Contrast enhancement is an important preprocessing technique for improving the performance of downstream tasks in image processing and computer vision.
1 code implementation • 25 May 2022 • Lars Lorch, Scott Sussex, Jonas Rothfuss, Andreas Krause, Bernhard Schölkopf
Rather than searching over structures, we train a variational inference model to directly predict the causal structure from observational or interventional data.
1 code implementation • 21 May 2018 • Saeed Saremi, Arash Mehrjou, Bernhard Schölkopf, Aapo Hyvärinen
We present the utility of DEEN in learning the energy, the score function, and in single-step denoising experiments for synthetic and high-dimensional data.
2 code implementations • NeurIPS 2021 • Lars Lorch, Jonas Rothfuss, Bernhard Schölkopf, Andreas Krause
In this work, we propose a general, fully differentiable framework for Bayesian structure learning (DiBS) that operates in the continuous space of a latent probabilistic graph representation.
1 code implementation • NeurIPS 2021 • Julius von Kügelgen, Yash Sharma, Luigi Gresele, Wieland Brendel, Bernhard Schölkopf, Michel Besserve, Francesco Locatello
A common practice is to perform data augmentation via hand-crafted transformations intended to leave the semantics of the data invariant.
1 code implementation • 23 Jun 2021 • Maximilian Dax, Stephen R. Green, Jonathan Gair, Jakob H. Macke, Alessandra Buonanno, Bernhard Schölkopf
We demonstrate unprecedented accuracy for rapid gravitational-wave parameter estimation with deep learning.
1 code implementation • ICLR 2022 • Maximilian Dax, Stephen R. Green, Jonathan Gair, Michael Deistler, Bernhard Schölkopf, Jakob H. Macke
We here describe an alternative method to incorporate equivariances under joint transformations of parameters and data.
1 code implementation • 11 Oct 2022 • Maximilian Dax, Stephen R. Green, Jonathan Gair, Michael Pürrer, Jonas Wildberger, Jakob H. Macke, Alessandra Buonanno, Bernhard Schölkopf
This shows a median sample efficiency of $\approx 10\%$ (two orders-of-magnitude better than standard samplers) as well as a ten-fold reduction in the statistical uncertainty in the log evidence.
1 code implementation • ICLR 2022 • Yonggang Zhang, Mingming Gong, Tongliang Liu, Gang Niu, Xinmei Tian, Bo Han, Bernhard Schölkopf, Kun Zhang
The adversarial vulnerability of deep neural networks has attracted significant attention in machine learning.
2 code implementations • 8 Aug 2020 • Manuel Wüthrich, Felix Widmaier, Felix Grimminger, Joel Akpo, Shruti Joshi, Vaibhav Agrawal, Bilal Hammoud, Majid Khadiv, Miroslav Bogdanovic, Vincent Berenz, Julian Viereck, Maximilien Naveau, Ludovic Righetti, Bernhard Schölkopf, Stefan Bauer
Dexterous object manipulation remains an open problem in robotics, despite the rapid progress in machine learning during the past decade.
1 code implementation • 29 Oct 2021 • Vincent Stimper, Bernhard Schölkopf, José Miguel Hernández-Lobato
Normalizing flows are a popular class of models for approximating probability distributions.
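The density a flow assigns follows from the change-of-variables rule: for an invertible map $x = g(u)$ with base density $p_u$, the model density is $p_x(x) = p_u(g^{-1}(x))\,\lvert \mathrm{d}g^{-1}/\mathrm{d}x \rvert$. A minimal sketch with a one-dimensional affine map (names are illustrative, not from any particular library):

```python
import math

# Change-of-variables behind normalizing flows, in the simplest case:
# g(u) = scale * u + shift pushes a standard normal base forward to
# a Gaussian with the given mean and standard deviation.

def std_normal_pdf(u):
    return math.exp(-0.5 * u * u) / math.sqrt(2 * math.pi)

def affine_flow_pdf(x, shift=1.0, scale=2.0):
    u = (x - shift) / scale           # inverse map g^{-1}(x)
    return std_normal_pdf(u) / scale  # times |d g^{-1}/dx| = 1/scale

# matches N(mean=1, std=2) evaluated at its mean
print(affine_flow_pdf(1.0) == std_normal_pdf(0.0) / 2.0)  # True
```

Expressive flows replace the affine map with a deep invertible network, but the density formula they evaluate is exactly this one, with a learned Jacobian term.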
2 code implementations • NeurIPS 2021 • Maximilian Seitzer, Bernhard Schölkopf, Georg Martius
Many reinforcement learning (RL) environments consist of independent entities that interact sparsely.
1 code implementation • 4 Oct 2022 • Zhijing Jin, Sydney Levine, Fernando Gonzalez, Ojasv Kamal, Maarten Sap, Mrinmaya Sachan, Rada Mihalcea, Josh Tenenbaum, Bernhard Schölkopf
Using a state-of-the-art large language model (LLM) as a basis, we propose a novel moral chain of thought (MORALCOT) prompting strategy that combines the strengths of LLMs with theories of moral reasoning developed in cognitive science to predict human moral judgments.
1 code implementation • 7 Oct 2020 • Sumedh A. Sontakke, Arash Mehrjou, Laurent Itti, Bernhard Schölkopf
Inspired by this, we attempt to equip reinforcement learning agents with the ability to perform experiments that facilitate a categorization of the rolled-out trajectories, and to subsequently infer the causal factors of the environment in a hierarchical manner.
1 code implementation • 31 Jul 2018 • Muhammad Waleed Gondal, Bernhard Schölkopf, Michael Hirsch
Moreover, we show that a texture representation of those deep features better captures the perceptual quality of an image than the original deep features.
1 code implementation • NeurIPS 2020 • Amir-Hossein Karimi, Julius von Kügelgen, Bernhard Schölkopf, Isabel Valera
Recent work has discussed the limitations of counterfactual explanations to recommend actions for algorithmic recourse, and argued for the need of taking causal relationships between features into consideration.
1 code implementation • 13 Oct 2020 • Julius von Kügelgen, Amir-Hossein Karimi, Umang Bhatt, Isabel Valera, Adrian Weller, Bernhard Schölkopf
Algorithmic fairness is typically studied from the perspective of predictions.
1 code implementation • ICLR 2022 • Lukas Schott, Julius von Kügelgen, Frederik Träuble, Peter Gehler, Chris Russell, Matthias Bethge, Bernhard Schölkopf, Francesco Locatello, Wieland Brendel
An important component for generalization in machine learning is to uncover underlying latent factors of variation as well as the mechanism through which each factor acts in the world.
2 code implementations • 18 Apr 2019 • Timothy D. Gebhard, Niki Kilbertus, Ian Harry, Bernhard Schölkopf
In the last few years, machine learning techniques, in particular convolutional neural networks, have been investigated as a method to replace or complement traditional matched filtering techniques that are used to detect the gravitational-wave signature of merging black holes.
1 code implementation • 19 Jul 2015 • Mateo Rojas-Carulla, Bernhard Schölkopf, Richard Turner, Jonas Peters
We focus on the problem of Domain Generalization, in which no examples from the test task are observed.
1 code implementation • 9 Feb 2015 • David Lopez-Paz, Krikamol Muandet, Bernhard Schölkopf, Ilya Tolstikhin
We pose causal inference as the problem of learning to classify probability distributions.
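The "classify distributions" framing means each cause-effect sample set $\{(x_i, y_i)\}$ is first mapped to a fixed-length feature vector that a standard binary classifier can label as "X causes Y" or "Y causes X". A hedged sketch of that featurization step (crude moment features standing in for the paper's kernel mean embeddings; all names illustrative):

```python
# Map a paired sample to a fixed-length summary so that sample sets of any
# size become ordinary classifier inputs. Moments are a stand-in here for
# kernel mean embedding features.

def moments(values):
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / n
    return mean, var

def featurize(xs, ys):
    mx, vx = moments(xs)
    my, vy = moments(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)
    return [mx, vx, my, vy, cov]

xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]   # y = 2x + 1, noiseless
print(featurize(xs, ys))    # [1.5, 1.25, 4.0, 5.0, 2.5]
```

A classifier trained on many such vectors, each labeled with the known causal direction, can then predict the direction for new pairs; richer featurizations simply expand this vector.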
1 code implementation • 12 Jan 2023 • Yuejiang Liu, Alexandre Alahi, Chris Russell, Max Horn, Dominik Zietlow, Bernhard Schölkopf, Francesco Locatello
Recent years have seen a surge of interest in learning high-level causal representations from low-level image pairs under interventions.
1 code implementation • 22 May 2019 • Stratis Tsirtsis, Behzad Tabibian, Moein Khajehnejad, Adish Singla, Bernhard Schölkopf, Manuel Gomez-Rodriguez
Using this characterization, we first show that, in general, we cannot expect to find optimal decision policies in polynomial time and there are cases in which deterministic policies are suboptimal.
1 code implementation • NeurIPS 2021 • Luigi Gresele, Julius von Kügelgen, Vincent Stimper, Bernhard Schölkopf, Michel Besserve
Specifically, our approach is motivated by thinking of each source as independently influencing the mixing process.
2 code implementations • NeurIPS 2018 • Alexander Neitz, Giambattista Parascandolo, Stefan Bauer, Bernhard Schölkopf
We introduce a method which enables a recurrent dynamics model to be temporally abstract.
1 code implementation • 12 Feb 2019 • Diego Agudelo-España, Sebastian Gomez-Gonzalez, Stefan Bauer, Bernhard Schölkopf, Jan Peters
Online detection of instantaneous changes in the generative process of a data sequence generally focuses on retrospective inference of such change points without considering their future occurrences.
1 code implementation • 14 May 2019 • Wittawat Jitkrittum, Patsorn Sangkloy, Muhammad Waleed Gondal, Amit Raj, James Hays, Bernhard Schölkopf
We propose a novel procedure which adds "content-addressability" to any given unconditional implicit model, e.g., a generative adversarial network (GAN).
1 code implementation • 6 Jun 2022 • Patrik Reizinger, Luigi Gresele, Jack Brady, Julius von Kügelgen, Dominik Zietlow, Bernhard Schölkopf, Georg Martius, Wieland Brendel, Michel Besserve
Leveraging self-consistency, we show that the ELBO converges to a regularized log-likelihood.
1 code implementation • NeurIPS 2020 • Luigi Gresele, Giancarlo Fissore, Adrián Javaloy, Bernhard Schölkopf, Aapo Hyvärinen
Learning expressive probabilistic models correctly describing the data is a ubiquitous problem in machine learning.
1 code implementation • 3 Mar 2022 • Panagiotis Tigas, Yashas Annadani, Andrew Jesson, Bernhard Schölkopf, Yarin Gal, Stefan Bauer
Existing methods in experimental design for causal discovery from limited data either rely on linear assumptions for the SCM or select only the intervention target.
1 code implementation • 29 May 2023 • Justus Mattern, FatemehSadat Mireshghallah, Zhijing Jin, Bernhard Schölkopf, Mrinmaya Sachan, Taylor Berg-Kirkpatrick
To investigate whether this fragility provides a layer of safety, we propose and evaluate neighbourhood attacks, which compare model scores for a given sample to scores of synthetically generated neighbour texts and therefore eliminate the need for access to the training data distribution.
1 code implementation • NeurIPS 2023 • Laurence I. Midgley, Vincent Stimper, Javier Antorán, Emile Mathieu, Bernhard Schölkopf, José Miguel Hernández-Lobato
Coupling normalizing flows allow for fast sampling and density evaluation, making them the tool of choice for probabilistic modeling of physical systems.
3 code implementations • 17 Jun 2022 • Jonas M. Kübler, Vincent Stimper, Simon Buchholz, Krikamol Muandet, Bernhard Schölkopf
Two-sample tests are important in statistics and machine learning, both as tools for scientific discovery as well as to detect distribution shifts.
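The standard kernel statistic behind such tests is the maximum mean discrepancy (MMD); a minimal pure-Python sketch of the (biased) $\mathrm{MMD}^2$ estimator with a Gaussian kernel (a generic textbook estimator, not this paper's learned-kernel method):

```python
import math

# Biased MMD^2 between samples X ~ P and Y ~ Q with a Gaussian kernel:
# MMD^2 = mean k(x, x') + mean k(y, y') - 2 * mean k(x, y).
# Larger values suggest P != Q; the biased estimator is always >= 0.

def gauss(a, b, bandwidth=1.0):
    return math.exp(-((a - b) ** 2) / (2 * bandwidth**2))

def mmd2(xs, ys, bandwidth=1.0):
    k = lambda a, b: gauss(a, b, bandwidth)
    xx = sum(k(a, b) for a in xs for b in xs) / len(xs) ** 2
    yy = sum(k(a, b) for a in ys for b in ys) / len(ys) ** 2
    xy = sum(k(a, b) for a in xs for b in ys) / (len(xs) * len(ys))
    return xx + yy - 2 * xy

same = mmd2([0.0, 0.1, -0.1], [0.05, -0.05, 0.0])   # near zero
diff = mmd2([0.0, 0.1, -0.1], [5.0, 5.1, 4.9])      # large
print(same < diff)  # True
```

A test then compares the statistic to a null distribution (e.g., via permutations); the paper's contribution concerns how to choose or learn the kernel without sacrificing test power.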
3 code implementations • NeurIPS 2018 • Wittawat Jitkrittum, Heishiro Kanagawa, Patsorn Sangkloy, James Hays, Bernhard Schölkopf, Arthur Gretton
Given two candidate models, and a set of target observations, we address the problem of measuring the relative goodness of fit of the two models.
1 code implementation • 24 Nov 2019 • Bernhard Schölkopf
Graphical causal inference as pioneered by Judea Pearl arose from research on artificial intelligence (AI), and for a long time had little connection to the field of machine learning.
1 code implementation • ICLR 2019 • Carl-Johann Simon-Gabriel, Yann Ollivier, Léon Bottou, Bernhard Schölkopf, David Lopez-Paz
Over the past few years, neural networks were proven vulnerable to adversarial images: targeted but imperceptible image perturbations lead to drastically different predictions.
1 code implementation • ICLR 2022 • Cian Eastwood, Ian Mason, Christopher K. I. Williams, Bernhard Schölkopf
Existing methods for SFDA leverage entropy-minimization techniques which: (i) apply only to classification; (ii) destroy model calibration; and (iii) rely on the source model achieving a good level of feature-space class-separation in the target domain.
1 code implementation • NeurIPS 2023 • Liang Wendong, Armin Kekić, Julius von Kügelgen, Simon Buchholz, Michel Besserve, Luigi Gresele, Bernhard Schölkopf
As a corollary, this interventional perspective also leads to new identifiability results for nonlinear ICA -- a special case of CauCA with an empty graph -- requiring strictly fewer datasets than previous results.
2 code implementations • 28 Jul 2023 • Nico Gürtler, Sebastian Blaes, Pavel Kolev, Felix Widmaier, Manuel Wüthrich, Stefan Bauer, Bernhard Schölkopf, Georg Martius
To coordinate the efforts of the research community toward tackling this problem, we propose a benchmark including: i) a large collection of data for offline learning from a dexterous manipulation platform on two tasks, obtained with capable RL agents trained in simulation; ii) the option to execute learned policies on a real-world robotic system and a simulation for efficient debugging.
1 code implementation • NeurIPS 2023 • Maximilian Dax, Jonas Wildberger, Simon Buchholz, Stephen R. Green, Jakob H. Macke, Bernhard Schölkopf
Neural posterior estimation methods based on discrete normalizing flows have become established tools for simulation-based inference (SBI), but scaling them to high-dimensional problems can be challenging.
1 code implementation • 4 May 2018 • Matthias Bauer, Valentin Volchkov, Michael Hirsch, Bernhard Schölkopf
The modulation transfer function (MTF) is widely used to characterise the performance of optical systems.
2 code implementations • CVPR 2017 • David Lopez-Paz, Robert Nishihara, Soumith Chintala, Bernhard Schölkopf, Léon Bottou
Our experiments demonstrate the existence of a relation between the direction of causality and the difference between objects and their contexts, and by the same token, the existence of observable signals that reveal the causal dispositions of objects.
1 code implementation • ICLR 2021 • Yoshua Bengio, Prateek Gupta, Tegan Maharaj, Nasim Rahaman, Martin Weiss, Tristan Deleu, Eilif Muller, Meng Qu, Victor Schmidt, Pierre-Luc St-Charles, Hannah Alsdurf, Olexa Bilaniuk, David Buckeridge, Gaétan Marceau Caron, Pierre-Luc Carrier, Joumana Ghosn, Satya Ortiz-Gagne, Chris Pal, Irina Rish, Bernhard Schölkopf, Abhinav Sharma, Jian Tang, Andrew Williams
Predictions are used to provide personalized recommendations to the individual via an app, as well as to send anonymized messages to the individual's contacts, who use this information to better predict their own infectiousness, an approach we call proactive contact tracing (PCT).
1 code implementation • 21 Oct 2022 • Alessandro Stolfo, Zhijing Jin, Kumar Shridhar, Bernhard Schölkopf, Mrinmaya Sachan
By grounding the behavioral analysis in a causal graph describing an intuitive reasoning process, we study the behavior of language models in terms of robustness and sensitivity to direct interventions in the input space.
1 code implementation • 22 Feb 2019 • Gabriele Abbati, Philippe Wenk, Michael A. Osborne, Andreas Krause, Bernhard Schölkopf, Stefan Bauer
Stochastic differential equations are an important modeling class in many disciplines.
2 code implementations • 12 Jun 2020 • Jia-Jie Zhu, Wittawat Jitkrittum, Moritz Diehl, Bernhard Schölkopf
We prove a theorem that generalizes the classical duality in the mathematical problem of moments.
1 code implementation • 14 May 2020 • Julius von Kügelgen, Luigi Gresele, Bernhard Schölkopf
We point out limitations and extensions for future work, and, finally, discuss the role of causal reasoning in the broader context of using AI to combat the Covid-19 pandemic.
1 code implementation • 20 May 2020 • Rui Patrick Xian, Vincent Stimper, Marios Zacharias, Shuo Dong, Maciej Dendzik, Samuel Beaulieu, Bernhard Schölkopf, Martin Wolf, Laurenz Rettig, Christian Carbogno, Stefan Bauer, Ralph Ernstorfer
Electronic band structure (BS) and crystal structure are the two complementary identifiers of solid state materials.
3 code implementations • 4 Jun 2018 • Niklas Pfister, Sebastian Weichwald, Peter Bühlmann, Bernhard Schölkopf
We introduce coroICA, confounding-robust independent component analysis, a novel ICA algorithm which decomposes linearly mixed multivariate observations into independent components that are corrupted (and rendered dependent) by hidden group-wise stationary confounding.
1 code implementation • ICML 2018 • Matej Balog, Ilya Tolstikhin, Bernhard Schölkopf
First, releasing (an estimate of) the kernel mean embedding of the data generating random variable instead of the database itself still allows third-parties to construct consistent estimators of a wide class of population statistics.
1 code implementation • ICML 2018 • Giambattista Parascandolo, Niki Kilbertus, Mateo Rojas-Carulla, Bernhard Schölkopf
The approach is unsupervised and based on a set of experts that compete for data generated by the mechanisms, driving specialization.
1 code implementation • 24 Feb 2020 • Wittawat Jitkrittum, Heishiro Kanagawa, Bernhard Schölkopf
We propose two nonparametric statistical tests of goodness of fit for conditional distributions: given a conditional probability density function $p(y|x)$ and a joint sample, decide whether the sample is drawn from $p(y|x)r_x(x)$ for some density $r_x$.
1 code implementation • 19 Jul 2022 • Florian Wenzel, Andrea Dittadi, Peter Vincent Gehler, Carl-Johann Simon-Gabriel, Max Horn, Dominik Zietlow, David Kernert, Chris Russell, Thomas Brox, Bernt Schiele, Bernhard Schölkopf, Francesco Locatello
Since out-of-distribution generalization is a generally ill-posed problem, various proxy targets (e.g., calibration, adversarial robustness, algorithmic corruptions, invariance across shifts) were studied across different research programs resulting in different recommendations.
1 code implementation • 25 Jul 2022 • Hamza Keurti, Hsiao-Ru Pan, Michel Besserve, Benjamin F. Grewe, Bernhard Schölkopf
How agents can learn internal models that veridically represent their interactions with the real world is a largely open question.
1 code implementation • 5 Mar 2018 • Rohit Babbar, Bernhard Schölkopf
The goal in extreme multi-label classification is to learn a classifier which can assign a small subset of relevant labels to an instance from an extremely large set of target labels.
1 code implementation • 2 May 2016 • Sebastian Weichwald, Arthur Gretton, Bernhard Schölkopf, Moritz Grosse-Wentrup
Causal inference concerns the identification of cause-effect relationships between variables.
2 code implementations • 17 Feb 2019 • Philippe Wenk, Gabriele Abbati, Michael A. Osborne, Bernhard Schölkopf, Andreas Krause, Stefan Bauer
Parameter inference in ordinary differential equations is an important problem in many applied sciences and in engineering, especially in a data-scarce setting.
1 code implementation • 26 Feb 2020 • Matthias R. Hohmann, Lisa Konieczny, Michelle Hackl, Brian Wirth, Talha Zaman, Raffi Enficiaud, Moritz Grosse-Wentrup, Bernhard Schölkopf
We introduce MYND: A framework that couples consumer-grade recording hardware with an easy-to-use application for the unsupervised evaluation of BCI control strategies.
1 code implementation • 12 Feb 2021 • Christian Fröhlich, Alexandra Gessner, Philipp Hennig, Bernhard Schölkopf, Georgios Arvanitidis
Riemannian manifolds provide a principled way to model nonlinear geometric structure inherent in data.
1 code implementation • 2 Mar 2021 • Weiyang Liu, Rongmei Lin, Zhen Liu, Li Xiong, Bernhard Schölkopf, Adrian Weller
Due to the over-parameterization nature, neural networks are a powerful tool for nonlinear function approximation.
1 code implementation • 24 Mar 2021 • Arash Mehrjou, Ashkan Soleymani, Amin Abyaneh, Samir Bhatt, Bernhard Schölkopf, Stefan Bauer
Simulating the spread of infectious diseases in human communities is critical for predicting the trajectory of an epidemic and verifying various policies to control the devastating impacts of the outbreak.
1 code implementation • 3 Jun 2022 • Alexander Hägele, Jonas Rothfuss, Lars Lorch, Vignesh Ram Somnath, Bernhard Schölkopf, Andreas Krause
Inferring causal structures from experimentation is a central task in many domains.
1 code implementation • 14 Jun 2023 • Aaron Spieler, Nasim Rahaman, Georg Martius, Bernhard Schölkopf, Anna Levina
Biological cortical neurons are remarkably sophisticated computational devices, temporally integrating their vast synaptic input over an intricate dendritic tree, subject to complex, nonlinearly interacting internal biological processes.
1 code implementation • 8 Feb 2019 • Niki Kilbertus, Manuel Gomez-Rodriguez, Bernhard Schölkopf, Krikamol Muandet, Isabel Valera
In this paper, we show that in this selective labels setting, learning a predictor directly only from available labeled data is suboptimal in terms of both fairness and utility.
1 code implementation • 28 May 2019 • Julius von Kügelgen, Alexander Mey, Marco Loog, Bernhard Schölkopf
While the success of semi-supervised learning (SSL) is still not fully understood, Schölkopf et al. (2012) have established a link to the principle of independent causal mechanisms.
3 code implementations • NeurIPS 2019 • Jen Ning Lim, Makoto Yamada, Bernhard Schölkopf, Wittawat Jitkrittum
The first test, building on the post selection inference framework, provably controls the number of best models that are wrongly declared worse (false positive rate).
2 code implementations • 15 Oct 2020 • Rui Zhang, Masaaki Imaizumi, Bernhard Schölkopf, Krikamol Muandet
We investigate a simple objective for nonlinear instrumental variable (IV) regression based on a kernelized conditional moment restriction (CMR) known as a maximum moment restriction (MMR).
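The MMR objective encodes the instrumental-variable condition that residuals $y - f(x)$ be uncorrelated with every (kernel-representable) function of the instrument $z$, aggregated as a kernelized V-statistic. A hedged one-dimensional sketch (illustrative names and data, not the paper's estimator code):

```python
import math

# MMR objective for IV regression: (1/n^2) * sum_ij (y_i - f(x_i)) k(z_i, z_j) (y_j - f(x_j)).
# It is zero when residuals carry no signal detectable through the instrument kernel.

def rbf(a, b, bandwidth=1.0):
    return math.exp(-((a - b) ** 2) / (2 * bandwidth**2))

def mmr_objective(f, xs, ys, zs):
    n = len(xs)
    res = [y - f(x) for x, y in zip(xs, ys)]
    return sum(res[i] * rbf(zs[i], zs[j]) * res[j]
               for i in range(n) for j in range(n)) / n**2

xs = [1.0, 2.0, 3.0]
zs = [0.5, 1.0, 1.5]
ys = [2.0, 4.0, 6.0]  # y = 2x, noiseless

print(mmr_objective(lambda x: 2 * x, xs, ys, zs))      # 0.0 at the true function
print(mmr_objective(lambda x: x, xs, ys, zs) > 0.0)    # True: misfit is penalized
```

Minimizing this objective over a function class $f$ yields the IV estimator; the paper's analysis concerns when this simple single-objective formulation matches more elaborate minimax CMR approaches.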
1 code implementation • EMNLP 2021 • Zhijing Jin, Julius von Kügelgen, Jingwei Ni, Tejas Vaidhya, Ayush Kaushal, Mrinmaya Sachan, Bernhard Schölkopf
The principle of independent causal mechanisms (ICM) states that generative processes of real world data consist of independent modules which do not influence or inform each other.
1 code implementation • 18 May 2023 • Heiner Kremer, Yassine Nemmour, Bernhard Schölkopf, Jia-Jie Zhu
We provide a variant of our estimator for conditional moment restrictions and show that it is asymptotically first-order optimal for such problems.
1 code implementation • 5 Nov 2023 • Ishan Kumar, Zhijing Jin, Ehsan Mokhtarian, Siyuan Guo, Yuen Chen, Mrinmaya Sachan, Bernhard Schölkopf
Evaluating the significance of a paper is pivotal yet challenging for the scientific community.
5 code implementations • 30 Jan 2019 • Mateo Rojas-Carulla, Ilya Tolstikhin, Guillermo Luque, Nicholas Youngblut, Ruth Ley, Bernhard Schölkopf
We introduce GeNet, a method for shotgun metagenomic classification from raw DNA sequences that exploits the known hierarchical structure between labels for training.
1 code implementation • 6 Jun 2020 • Arash Mehrjou, Mohammad Ghavamzadeh, Bernhard Schölkopf
We provide theoretical results on the class of systems that can be treated with the proposed algorithm and empirically evaluate the effectiveness of our method using an exemplary dynamical system.
1 code implementation • 16 Feb 2021 • Jia-Jie Zhu, Christina Kouridi, Yassine Nemmour, Bernhard Schölkopf
We propose a scalable robust learning algorithm combining kernel smoothing and robust optimization.
1 code implementation • 21 Dec 2021 • Ricardo Dominguez-Olmedo, Amir-Hossein Karimi, Bernhard Schölkopf
Algorithmic recourse seeks to provide actionable recommendations for individuals to overcome unfavorable classification outcomes from automated decision-making systems.
1 code implementation • 31 Aug 2018 • Sebastian Gomez-Gonzalez, Gerhard Neumann, Bernhard Schölkopf, Jan Peters
However, to capture variability and correlations between different joints, a probabilistic movement primitive requires estimating many more parameters than its deterministic counterparts, which model only the mean behavior.
1 code implementation • NeurIPS 2020 • Jonas M. Kübler, Wittawat Jitkrittum, Bernhard Schölkopf, Krikamol Muandet
Modern large-scale kernel-based tests such as maximum mean discrepancy (MMD) and kernelized Stein discrepancy (KSD) optimize kernel hyperparameters on a held-out sample via data splitting to obtain the most powerful test statistics.
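The data-splitting recipe described above can be illustrated with a minimal numpy sketch (not the authors' implementation; the sample sizes, mean shift, and median-heuristic bandwidth choice are all illustrative assumptions): half of each sample fixes the kernel bandwidth, and the held-out half computes an unbiased quadratic-time MMD² statistic.

```python
import numpy as np

def gaussian_kernel(X, Y, bandwidth):
    # Pairwise Gaussian (RBF) kernel matrix between the rows of X and Y.
    sq_dists = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq_dists / (2 * bandwidth ** 2))

def mmd2_unbiased(X, Y, bandwidth):
    # Unbiased quadratic-time estimate of the squared MMD.
    m, n = len(X), len(Y)
    Kxx = gaussian_kernel(X, X, bandwidth)
    Kyy = gaussian_kernel(Y, Y, bandwidth)
    Kxy = gaussian_kernel(X, Y, bandwidth)
    return ((Kxx.sum() - np.trace(Kxx)) / (m * (m - 1))
            + (Kyy.sum() - np.trace(Kyy)) / (n * (n - 1))
            - 2 * Kxy.mean())

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(400, 2))   # sample from P
Y = rng.normal(1.0, 1.0, size=(400, 2))   # sample from Q (shifted mean)

# Data splitting: the first halves select the kernel hyperparameter
# (median heuristic); the second halves compute the test statistic.
pooled = np.vstack([X[:200], Y[:200]])
dists = np.sqrt(((pooled[:, None] - pooled[None, :]) ** 2).sum(-1))
bandwidth = np.median(dists[dists > 0])
stat = mmd2_unbiased(X[200:], Y[200:], bandwidth)
```

Since the statistic is computed on data disjoint from the data used to pick the bandwidth, the usual null distribution of the test remains valid.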
1 code implementation • 30 Jun 2021 • Felix Leeb, Stefan Bauer, Michel Besserve, Bernhard Schölkopf
Autoencoders exhibit impressive abilities to embed the data manifold into a low-dimensional latent space, making them a staple of representation learning methods.
1 code implementation • 13 Sep 2021 • Hsiao-Ru Pan, Nico Gürtler, Alexander Neitz, Bernhard Schölkopf
The predominant approach in reinforcement learning is to assign credit to actions based on the expected return.
1 code implementation • 1 Oct 2022 • Cian Eastwood, Andrei Liviu Nicolicioiu, Julius von Kügelgen, Armin Kekić, Frederik Träuble, Andrea Dittadi, Bernhard Schölkopf
In representation learning, a common approach is to seek representations which disentangle the underlying factors of variation.
1 code implementation • 31 Oct 2022 • Zeju Qiu, Weiyang Liu, Tim Z. Xiao, Zhen Liu, Umang Bhatt, Yucen Luo, Adrian Weller, Bernhard Schölkopf
We consider the problem of iterative machine teaching, where a teacher sequentially provides examples based on the status of a learner under a discrete input space (i.e., a pool of finite samples), which greatly limits the teacher's capability.
1 code implementation • 12 May 2015 • Bernhard Schölkopf, David W. Hogg, Dun Wang, Daniel Foreman-Mackey, Dominik Janzing, Carl-Johann Simon-Gabriel, Jonas Peters
We describe a method for removing the effect of confounders in order to reconstruct a latent quantity of interest.
1 code implementation • NAACL 2022 • Jingwei Ni, Zhijing Jin, Markus Freitag, Mrinmaya Sachan, Bernhard Schölkopf
We show that these two factors have a large causal effect on the MT performance, in addition to the test-model direction mismatch highlighted by existing work on the impact of translationese.
1 code implementation • Proceedings of Machine Learning Research 2013 • Krikamol Muandet, David Balduzzi, Bernhard Schölkopf
This paper investigates domain generalization: How to take knowledge acquired from an arbitrary number of related domains and apply it to previously unseen domains?
1 code implementation • 13 Oct 2022 • Alexander Immer, Christoph Schultheiss, Julia E. Vogt, Bernhard Schölkopf, Peter Bühlmann, Alexander Marx
We study the class of location-scale or heteroscedastic noise models (LSNMs), in which the effect $Y$ can be written as a function of the cause $X$ and a noise source $N$ independent of $X$, which may be scaled by a positive function $g$ over the cause, i.e., $Y = f(X) + g(X)N$.
1 code implementation • 11 Mar 2023 • Weiyang Liu, Longhui Yu, Adrian Weller, Bernhard Schölkopf
We then use hyperspherical uniformity (which characterizes the degree of uniformity on the unit hypersphere) as a unified framework to quantify these two objectives.
1 code implementation • 9 May 2023 • Fernando Gonzalez, Zhijing Jin, Bernhard Schölkopf, Tom Hope, Mrinmaya Sachan, Rada Mihalcea
Using state-of-the-art NLP models, we address each of these tasks and use them on the entire ACL Anthology, resulting in a visualization workspace that gives researchers a comprehensive overview of the field of NLP4SG.
1 code implementation • 6 Jun 2023 • Alexander Immer, Tycho F. A. van der Ouderaa, Mark van der Wilk, Gunnar Rätsch, Bernhard Schölkopf
Recent work shows that Bayesian model selection with Laplace approximations makes it possible to optimize such hyperparameters just like standard neural network parameters, using gradients on the training data.
1 code implementation • 25 Nov 2019 • Jia-Jie Zhu, Krikamol Muandet, Moritz Diehl, Bernhard Schölkopf
This work presents the concept of kernel mean embedding and kernel probabilistic programming in the context of stochastic systems.
1 code implementation • L4DC 2020 • Jia-Jie Zhu, Moritz Diehl, Bernhard Schölkopf
We apply kernel mean embedding methods to sample-based stochastic optimization and control.
1 code implementation • NeurIPS 2021 • Jonas M. Kübler, Simon Buchholz, Bernhard Schölkopf
Quantum computers offer the possibility to efficiently compute inner products of exponentially large density operators that are classically hard to compute.
1 code implementation • 7 Apr 2022 • Timothy D. Gebhard, Markus J. Bonse, Sascha P. Quanz, Bernhard Schölkopf
Our HSR-based method provides an alternative, flexible and promising approach to the challenge of modeling and subtracting the stellar PSF and systematic noise in exoplanet imaging data.
3 code implementations • 7 Nov 2022 • Amin Abyaneh, Nino Scherrer, Patrick Schwab, Stefan Bauer, Bernhard Schölkopf, Arash Mehrjou
We propose FedCDI, a federated framework for inferring causal structures from distributed data containing interventional samples.
1 code implementation • 23 May 2023 • Yuxin Ren, Qipeng Guo, Zhijing Jin, Shauli Ravfogel, Mrinmaya Sachan, Bernhard Schölkopf, Ryan Cotterell
Transformer models have brought rapid advances across NLP tasks, prompting a large body of interpretability research on the models' learned representations.
1 code implementation • 28 Mar 2024 • Felix Leeb, Bernhard Schölkopf
Babel Briefings is a novel dataset featuring 4.7 million news headlines from August 2020 to November 2021, across 30 languages and 54 locations worldwide, with English translations of all articles included.
1 code implementation • 5 Mar 2020 • Emmanouil Angelis, Philippe Wenk, Bernhard Schölkopf, Stefan Bauer, Andreas Krause
Gaussian processes are an important regression tool with excellent analytic properties which allow for direct integration of derivative observations.
1 code implementation • 13 Oct 2021 • Matthias Tangemann, Steffen Schneider, Julius von Kügelgen, Francesco Locatello, Peter Gehler, Thomas Brox, Matthias Kümmerer, Matthias Bethge, Bernhard Schölkopf
Learning generative object models from unlabelled videos is a long-standing problem and is required for causal scene modeling.
1 code implementation • 2 Feb 2022 • Luigi Gresele, Julius von Kügelgen, Jonas M. Kübler, Elke Kirschbaum, Bernhard Schölkopf, Dominik Janzing
We introduce an approach to counterfactual inference based on merging information from multiple datasets.
1 code implementation • 11 Jul 2022 • Heiner Kremer, Jia-Jie Zhu, Krikamol Muandet, Bernhard Schölkopf
Important problems in causal inference, economics, and, more generally, robust machine learning can be expressed as conditional moment restrictions, but estimation becomes challenging as it requires solving a continuum of unconditional moment restrictions.
1 code implementation • 14 Dec 2022 • Armin Kekić, Jonas Dehning, Luigi Gresele, Julius von Kügelgen, Viola Priesemann, Bernhard Schölkopf
Early on during a pandemic, vaccine availability is limited, requiring prioritisation of different population groups.
1 code implementation • 6 Sep 2023 • Timothy D. Gebhard, Daniel Angerhausen, Björn S. Konrad, Eleonora Alei, Sascha P. Quanz, Bernhard Schölkopf
When training and evaluating our method on two publicly available datasets of self-consistent PT profiles, we find that our method achieves, on average, better fit quality than existing baseline methods, despite using fewer parameters.
1 code implementation • 26 Oct 2023 • Lars Lorch, Andreas Krause, Bernhard Schölkopf
We develop a novel approach towards causal inference.
1 code implementation • 22 Jul 2022 • Frederik Träuble, Anirudh Goyal, Nasim Rahaman, Michael Mozer, Kenji Kawaguchi, Yoshua Bengio, Bernhard Schölkopf
Deep neural networks perform well on classification tasks where data streams are i.i.d.
2 code implementations • 7 Feb 2023 • Ahmad-Reza Ehyaei, Amir-Hossein Karimi, Bernhard Schölkopf, Setareh Maghsudi
Algorithmic recourse aims to disclose the inner workings of the black-box decision process in situations where decisions have significant consequences, by providing recommendations to empower beneficiaries to achieve a more favorable outcome.
1 code implementation • 18 Feb 2024 • Francesco Ortu, Zhijing Jin, Diego Doimo, Mrinmaya Sachan, Alberto Cazzaniga, Bernhard Schölkopf
Interpretability research aims to bridge the gap between the empirical success and our scientific understanding of the inner workings of large language models (LLMs).
no code implementations • ICML 2018 • Mehdi S. M. Sajjadi, Giambattista Parascandolo, Arash Mehrjou, Bernhard Schölkopf
A possible explanation for training instabilities is the inherent imbalance between the networks: While the discriminator is trained directly on both real and fake samples, the generator only has control over the fake samples it produces since the real data distribution is fixed by the choice of a given dataset.
no code implementations • ICML 2018 • Francesco Locatello, Anant Raj, Sai Praneeth Karimireddy, Gunnar Rätsch, Bernhard Schölkopf, Sebastian U. Stich, Martin Jaggi
Exploiting the connection between the two algorithms, we present a unified analysis of both, providing affine invariant sublinear $\mathcal{O}(1/t)$ rates on smooth objectives and linear convergence on strongly convex objectives.
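For readers unfamiliar with the Frank-Wolfe template that the analysis above unifies with Matching Pursuit, here is a hedged, self-contained sketch (not the paper's algorithm or its rates): plain Frank-Wolfe with exact line search on a quadratic over the probability simplex, where the linear minimization oracle reduces to picking the vertex with the smallest gradient coordinate. The problem dimensions and data are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(30, 10))
w_star = np.full(10, 0.1)        # interior point of the probability simplex
b = A @ w_star                   # so the optimal value of f is zero

def f(w):
    r = A @ w - b
    return 0.5 * r @ r

w = np.zeros(10)
w[0] = 1.0                       # start at a vertex of the simplex
history = [f(w)]
for t in range(300):
    grad = A.T @ (A @ w - b)
    s = np.zeros(10)
    s[np.argmin(grad)] = 1.0     # linear minimization oracle over the simplex
    d = s - w
    Ad = A @ d
    if Ad @ Ad < 1e-12:
        break
    gamma = np.clip(-(grad @ d) / (Ad @ Ad), 0.0, 1.0)  # exact line search
    w = w + gamma * d
    history.append(f(w))
```

With exact line search the objective is non-increasing, and the classical analysis gives the sublinear $\mathcal{O}(1/t)$ rate mentioned in the abstract.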
no code implementations • 30 Apr 2018 • Francesco Locatello, Damien Vincent, Ilya Tolstikhin, Gunnar Rätsch, Sylvain Gelly, Bernhard Schölkopf
A common assumption in causal modeling posits that the data is generated by a set of independent mechanisms, and algorithms should aim to recover this structure.
no code implementations • 27 May 2018 • Arash Mehrjou, Friedrich Solowjow, Sebastian Trimpe, Bernhard Schölkopf
Apart from its application for encoding a sequence of observations, we propose to use the compression achieved by this encoding as a criterion for model selection.
no code implementations • ICLR 2018 • Mostafa Dehghani, Arash Mehrjou, Stephan Gouws, Jaap Kamps, Bernhard Schölkopf
To this end, we propose "fidelity-weighted learning" (FWL), a semi-supervised student-teacher approach for training deep neural networks using weakly-labeled data.
no code implementations • 16 Mar 2018 • Philipp Geiger, Michel Besserve, Justus Winkelmann, Claudius Proissl, Bernhard Schölkopf
We study data-driven assistants that provide congestion forecasts to users of shared facilities (roads, cafeterias, etc.).
no code implementations • 14 Jun 2016 • Philipp Geiger, Katja Hofmann, Bernhard Schölkopf
The amount of digitally available but heterogeneous information about the world is remarkable, and new technologies such as self-driving cars, smart homes, or the internet of things may further increase it.
no code implementations • 19 Feb 2018 • Patrick Blöbaum, Dominik Janzing, Takashi Washio, Shohei Shimizu, Bernhard Schölkopf
We address the problem of inferring the causal direction between two variables by comparing the least-squares errors of the predictions in both possible directions.
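The idea of comparing least-squares errors in both directions can be sketched in a few lines of numpy (an illustrative toy, not the authors' estimator; the cubic mechanism, noise level, and polynomial regressor are assumptions): the direction with the smaller standardized regression error is taken as the causal one.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, 2000)                 # cause
y = x ** 3 + 0.05 * rng.normal(size=2000)    # effect via a nonlinear mechanism

def regression_error(cause, effect, degree=3):
    # Standardize both variables, fit a polynomial by least squares,
    # and return the mean squared residual.
    c = (cause - cause.mean()) / cause.std()
    e = (effect - effect.mean()) / effect.std()
    coeffs = np.polyfit(c, e, degree)
    return np.mean((e - np.polyval(coeffs, c)) ** 2)

err_xy = regression_error(x, y)   # error when predicting Y from X
err_yx = regression_error(y, x)   # error when predicting X from Y
inferred = "X -> Y" if err_xy < err_yx else "Y -> X"
```

Here the anti-causal fit is poor because the inverse mechanism (a cube root) is badly approximated by a low-degree polynomial, so the forward direction wins.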
no code implementations • NeurIPS 2017 • Niki Kilbertus, Mateo Rojas-Carulla, Giambattista Parascandolo, Moritz Hardt, Dominik Janzing, Bernhard Schölkopf
Going beyond observational criteria, we frame the problem of discrimination based on protected attributes in the language of causal reasoning.
no code implementations • ICLR 2018 • Matthias Bauer, Mateo Rojas-Carulla, Jakub Bartłomiej Świątkowski, Bernhard Schölkopf, Richard E. Turner
The goal is to generalise from an initial large-scale classification task to a separate task comprising new classes and small numbers of examples.
no code implementations • 4 Jul 2017 • Paul K. Rubenstein, Sebastian Weichwald, Stephan Bongers, Joris M. Mooij, Dominik Janzing, Moritz Grosse-Wentrup, Bernhard Schölkopf
Complex systems can be modelled at various levels of detail.
no code implementations • NeurIPS 2017 • Shixiang Gu, Timothy Lillicrap, Zoubin Ghahramani, Richard E. Turner, Bernhard Schölkopf, Sergey Levine
Off-policy model-free deep reinforcement learning methods using previously collected data can improve sample efficiency over on-policy policy gradient techniques.
no code implementations • 6 Dec 2016 • Anant Raj, Abhishek Kumar, Youssef Mroueh, P. Thomas Fletcher, Bernhard Schölkopf
We consider transformations that form a \emph{group} and propose an approach based on kernel methods to derive local group invariant representations.
no code implementations • 21 May 2017 • Arash Mehrjou, Bernhard Schölkopf, Saeed Saremi
We introduce a novel framework for adversarial training where the target distribution is annealed between the uniform distribution and the data distribution.
no code implementations • 5 May 2017 • Michel Besserve, Naji Shajarisales, Bernhard Schölkopf, Dominik Janzing
The postulate of independence of cause and mechanism (ICM) has recently led to several new causal discovery algorithms.
no code implementations • ICCV 2017 • Tae Hyun Kim, Kyoung Mu Lee, Bernhard Schölkopf, Michael Hirsch
We show the superiority of the proposed method in an extensive experimental evaluation.
no code implementations • 23 May 2016 • Sebastian Weichwald, Tatiana Fomina, Bernhard Schölkopf, Moritz Grosse-Wentrup
While the channel capacity reflects a theoretical upper bound on the achievable information transmission rate in the limit of infinitely many bits, it does not characterise the information transfer of a given encoding routine with finitely many bits.
no code implementations • 24 Oct 2016 • Behzad Tabibian, Isabel Valera, Mehrdad Farajtabar, Le Song, Bernhard Schölkopf, Manuel Gomez-Rodriguez
Then, we propose a temporal point process modeling framework that links these temporal traces to robust, unbiased and interpretable notions of information reliability and source trustworthiness.
no code implementations • 27 Mar 2017 • Lei Xiao, Felix Heide, Wolfgang Heidrich, Bernhard Schölkopf, Michael Hirsch
Recently, several discriminative learning approaches have been proposed for effective image restoration, achieving convincing trade-off between image quality and computational efficiency.
no code implementations • 31 May 2016 • Krikamol Muandet, Kenji Fukumizu, Bharath Sriperumbudur, Bernhard Schölkopf
Next, we discuss the Hilbert space embedding for conditional distributions, give theoretical insights, and review some applications.
no code implementations • 1 Mar 2016 • Niklas Pfister, Peter Bühlmann, Bernhard Schölkopf, Jonas Peters
Based on an empirical estimate of dHSIC, we define three different non-parametric hypothesis tests: a permutation test, a bootstrap test and a test based on a Gamma approximation.
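The permutation-test variant can be sketched for two variables, where dHSIC reduces to the (biased) empirical HSIC. The following is a minimal numpy illustration, not the authors' package; the bandwidths, sample size, and number of permutations are arbitrary assumptions.

```python
import numpy as np

def rbf(v, bandwidth=1.0):
    # Gaussian kernel matrix for a one-dimensional sample.
    return np.exp(-((v[:, None] - v[None, :]) ** 2) / (2 * bandwidth ** 2))

def dhsic(kernel_mats):
    # Empirical dHSIC computed from one kernel matrix per variable.
    stacked = np.stack(kernel_mats)
    term1 = np.prod(stacked, axis=0).mean()
    term2 = np.prod([K.mean() for K in kernel_mats])
    term3 = np.prod(np.stack([K.mean(axis=1) for K in kernel_mats]), axis=0).mean()
    return term1 + term2 - 2 * term3

rng = np.random.default_rng(2)
n = 200
x = rng.normal(size=n)
y = x + 0.3 * rng.normal(size=n)   # y depends on x
Kx, Ky = rbf(x), rbf(y)
observed = dhsic([Kx, Ky])

# Permutation test: shuffling one sample breaks any joint dependence,
# giving draws from the null distribution of the statistic.
null_stats = []
for _ in range(200):
    perm = rng.permutation(n)
    null_stats.append(dhsic([Kx, Ky[np.ix_(perm, perm)]]))
p_value = (np.sum(np.array(null_stats) >= observed) + 1) / (len(null_stats) + 1)
```

For jointly dependent data the observed statistic should sit far in the tail of the permutation distribution, yielding a small p-value.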
no code implementations • NeurIPS 2016 • Carl-Johann Simon-Gabriel, Adam Ścibior, Ilya Tolstikhin, Bernhard Schölkopf
We provide a theoretical foundation for non-parametric estimation of functions of random variables using kernel mean embeddings.
no code implementations • 23 Sep 2016 • Anant Raj, Jakob Olbrich, Bernd Gärtner, Bernhard Schölkopf, Martin Jaggi
We propose a new framework for deriving screening rules for convex optimization problems.
no code implementations • 15 Jul 2016 • Patrick Wieschollek, Bernhard Schölkopf, Hendrik P. A. Lensch, Michael Hirsch
We present a neural network model approach for multi-frame blind deconvolution.
no code implementations • 6 Sep 2016 • Mehdi S. M. Sajjadi, Rolf Köhler, Bernhard Schölkopf, Michael Hirsch
Light field photography captures rich structural information that may facilitate a number of traditional image processing and computer vision tasks.
no code implementations • 27 Sep 2015 • Kun Zhang, Biwei Huang, Jiji Zhang, Bernhard Schölkopf, Clark Glymour
Third, we develop a method for visualizing the nonstationarity of causal modules.
no code implementations • 18 Apr 2016 • Carl-Johann Simon-Gabriel, Bernhard Schölkopf
The RKHS distance of two mapped measures is a semi-metric $d_k$ over $M$.
no code implementations • 21 May 2014 • Krikamol Muandet, Bharath Sriperumbudur, Kenji Fukumizu, Arthur Gretton, Bernhard Schölkopf
A mean function in a reproducing kernel Hilbert space (RKHS), or a kernel mean, is central to kernel methods in that it is used by many classical algorithms such as kernel principal component analysis, and it also forms the core inference step of modern kernel methods that rely on embedding probability distributions in RKHSs.
no code implementations • 11 Dec 2014 • Joris M. Mooij, Jonas Peters, Dominik Janzing, Jakob Zscheischler, Bernhard Schölkopf
We evaluate the performance of several bivariate causal discovery methods on these real-world benchmark data and in addition on artificially simulated data.
no code implementations • 14 Nov 2014 • Philipp Geiger, Kun Zhang, Mingming Gong, Dominik Janzing, Bernhard Schölkopf
A widely applied approach to causal inference from a non-experimental time series $X$, often referred to as "(linear) Granger causal analysis", is to regress present on past and interpret the regression matrix $\hat{B}$ causally.
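The regression step referred to above can be sketched for a bivariate VAR(1) in numpy (an illustrative simulation; the coefficient matrix and noise scale are our assumptions): regressing present on past by ordinary least squares recovers a matrix $\hat{B}$ whose off-diagonal entries indicate which component drives which.

```python
import numpy as np

rng = np.random.default_rng(3)
# Ground-truth VAR(1): the second component drives the first (B[0, 1] != 0),
# but not the other way around (B[1, 0] = 0).
B = np.array([[0.5, 0.4],
              [0.0, 0.5]])
T = 5000
X = np.zeros((T, 2))
for t in range(1, T):
    X[t] = B @ X[t - 1] + 0.1 * rng.normal(size=2)

# Granger causal analysis: regress present on past by ordinary least
# squares and interpret the fitted coefficient matrix causally.
past, present = X[:-1], X[1:]
W, *_ = np.linalg.lstsq(past, present, rcond=None)
B_hat = W.T   # so that present_t is approximately B_hat @ past_t
```

In this fully observed, noise-free-confounding setting the estimate $\hat{B}$ is close to the true $B$; the paper's point is precisely about when such a causal reading is (and is not) justified.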
no code implementations • 15 Dec 2015 • Sebastian Weichwald, Bernhard Schölkopf, Tonio Ball, Moritz Grosse-Wentrup
Pattern recognition in neuroimaging distinguishes between two types of models: encoding- and decoding models.
no code implementations • 14 Dec 2015 • Sebastian Weichwald, Timm Meyer, Bernhard Schölkopf, Tonio Ball, Moritz Grosse-Wentrup
While invasively recorded brain activity is known to provide detailed information on motor commands, it is an open question at what level of detail information about positions of body parts can be decoded from non-invasively acquired signals.
no code implementations • 15 Nov 2015 • Sebastian Weichwald, Timm Meyer, Ozan Özdenizci, Bernhard Schölkopf, Tonio Ball, Moritz Grosse-Wentrup
Causal terminology is often introduced in the interpretation of encoding and decoding models trained on neuroimaging data.
no code implementations • 22 Apr 2015 • Kun Zhang, Jiji Zhang, Bernhard Schölkopf
Recent developments in structural equation modeling have produced several methods that can usually distinguish cause from effect in the two-variable case.
no code implementations • 27 Jan 2015 • Bernhard Schölkopf, Krikamol Muandet, Kenji Fukumizu, Jonas Peters
We describe a method to perform functional operations on probability distributions of random variables.
no code implementations • NeurIPS 2014 • Krikamol Muandet, Bharath Sriperumbudur, Bernhard Schölkopf
The problem of estimating the kernel mean in a reproducing kernel Hilbert space (RKHS) is central to kernel methods in that it is used by classical approaches (e.g., when centering a kernel PCA matrix), and it also forms the core inference step of modern kernel methods (e.g., kernel-based non-parametric tests) that rely on embedding probability distributions in RKHSs.
no code implementations • 28 Jun 2014 • Christian J. Schuler, Michael Hirsch, Stefan Harmeling, Bernhard Schölkopf
We describe a learning-based approach to blind image deconvolution.
no code implementations • 1 Feb 2014 • David Lopez-Paz, Suvrit Sra, Alex Smola, Zoubin Ghahramani, Bernhard Schölkopf
Although nonlinear variants of PCA and CCA have been proposed, these are computationally prohibitive in the large scale.
no code implementations • 26 Sep 2013 • Jonas Peters, Joris Mooij, Dominik Janzing, Bernhard Schölkopf
We consider the problem of learning causal directed acyclic graphs from an observational joint distribution.
no code implementations • 11 Feb 2014 • Dominik Janzing, Bastian Steudel, Naji Shajarisales, Bernhard Schölkopf
Information Geometric Causal Inference (IGCI) is a new approach to distinguish between cause and effect for two variables.
no code implementations • 19 Dec 2013 • Samory Kpotufe, Eleni Sgouritsa, Dominik Janzing, Bernhard Schölkopf
We analyze a family of methods for statistical causal inference from sample under the so-called Additive Noise Model.
no code implementations • 31 Oct 2013 • Mikhail Langovoy, Michael Habeck, Bernhard Schölkopf
We specifically address the problem of detection of multiple objects of unknown shapes in the case of nonparametric noise.
no code implementations • 4 Jun 2013 • Krikamol Muandet, Kenji Fukumizu, Bharath Sriperumbudur, Arthur Gretton, Bernhard Schölkopf
A mean function in reproducing kernel Hilbert space, or a kernel mean, is an important part of many applications ranging from kernel principal component analysis to Hilbert-space embedding of distributions.
no code implementations • NeurIPS 2013 • David Lopez-Paz, Philipp Hennig, Bernhard Schölkopf
We introduce the Randomized Dependence Coefficient (RDC), a measure of non-linear dependence between random variables of arbitrary dimension based on the Hirschfeld-Gebelein-Rényi Maximum Correlation Coefficient.
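A compact numpy sketch of the RDC recipe follows the paper's three steps: an empirical copula transform, random sine features, and the largest canonical correlation between the two feature sets. The feature count, projection scale, and ridge regularization below are our illustrative assumptions, not the reference implementation's defaults.

```python
import numpy as np

def rdc(x, y, k=10, s=3.0, ridge=1e-3, seed=0):
    # Randomized Dependence Coefficient (sketch): copula transform,
    # random sine features, then the largest canonical correlation.
    rng = np.random.default_rng(seed)
    def features(v):
        u = np.argsort(np.argsort(v)) / (len(v) - 1.0)   # empirical copula
        proj = np.column_stack([u, np.ones_like(u)]) @ rng.normal(scale=s, size=(2, k))
        f = np.sin(proj)
        return f - f.mean(axis=0)
    fx, fy = features(x), features(y)
    n = len(x)
    Cxx = fx.T @ fx / n + ridge * np.eye(k)   # ridge term for stability
    Cyy = fy.T @ fy / n + ridge * np.eye(k)
    Cxy = fx.T @ fy / n
    M = np.linalg.solve(Cxx, Cxy) @ np.linalg.solve(Cyy, Cxy.T)
    return float(np.sqrt(np.clip(np.linalg.eigvals(M).real.max(), 0.0, 1.0)))

rng = np.random.default_rng(4)
x = rng.uniform(-1, 1, 1000)
dep = rdc(x, x ** 2 + 0.05 * rng.normal(size=1000))  # nonlinear dependence
ind = rdc(x, rng.normal(size=1000))                  # independent noise
```

Note that $x$ and $x^2$ are linearly uncorrelated here, so a Pearson correlation would miss this dependence while the RDC picks it up.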
no code implementations • 1 Mar 2013 • Krikamol Muandet, Bernhard Schölkopf
We propose one-class support measure machines (OCSMMs) for group anomaly detection which aims at recognizing anomalous aggregate behaviors of data points.
no code implementations • 30 Apr 2013 • Joris M. Mooij, Dominik Janzing, Bernhard Schölkopf
We show how, and under which conditions, the equilibrium states of a first-order Ordinary Differential Equation (ODE) system can be described with a deterministic Structural Causal Model (SCM).
no code implementations • 30 Jul 2009 • Bharath K. Sriperumbudur, Arthur Gretton, Kenji Fukumizu, Bernhard Schölkopf, Gert R. G. Lanckriet
First, we consider the question of determining the conditions on the kernel $k$ for which $\gamma_k$ is a metric: such $k$ are denoted characteristic kernels.
no code implementations • 3 May 2011 • Manuel Gomez Rodriguez, David Balduzzi, Bernhard Schölkopf
Time plays an essential role in the diffusion of information, influence and disease over networks.
no code implementations • 20 Jul 2018 • Eduardo Pérez-Pellitero, Mehdi S. M. Sajjadi, Michael Hirsch, Bernhard Schölkopf
Together with a video discriminator, we also propose additional loss functions to further reinforce temporal consistency in the generated sequences.
no code implementations • 18 Jan 2009 • Bharath K. Sriperumbudur, Kenji Fukumizu, Arthur Gretton, Bernhard Schölkopf, Gert R. G. Lanckriet
First, to understand the relation between IPMs and $\phi$-divergences, the necessary and sufficient conditions under which these classes intersect are derived: the total variation distance is shown to be the only non-trivial $\phi$-divergence that is also an IPM.
Information Theory
no code implementations • 31 Oct 2018 • Raphael Suter, Đorđe Miladinović, Bernhard Schölkopf, Stefan Bauer
The ability to learn disentangled representations that split underlying sources of variation in high dimensional, unstructured data is important for data efficient and robust use of neural networks.
no code implementations • 14 Nov 2018 • Arash Mehrjou, Bernhard Schölkopf
Filtering is a general name for inferring the states of a dynamical system given observations.
no code implementations • 3 Dec 2018 • Niki Kilbertus, Giambattista Parascandolo, Bernhard Schölkopf
Anti-causal models are used to drive this search, but a causal model is required for validation.
no code implementations • ICLR 2020 • Michel Besserve, Arash Mehrjou, Rémy Sun, Bernhard Schölkopf
Deep generative models can emulate the perceptual properties of complex image datasets, providing a latent representation of the data.
no code implementations • NeurIPS 2016 • Ilya O. Tolstikhin, Bharath K. Sriperumbudur, Bernhard Schölkopf
Maximum Mean Discrepancy (MMD) is a distance on the space of probability measures which has found numerous applications in machine learning and nonparametric testing.
no code implementations • NeurIPS 2013 • Jonas Peters, Dominik Janzing, Bernhard Schölkopf
We study a class of restricted Structural Equation Models for time series that we call Time Series Models with Independent Noise (TiMINo).
no code implementations • NeurIPS 2013 • Michel Besserve, Nikos K. Logothetis, Bernhard Schölkopf
This framework enables us to develop an independence test between time series as well as a similarity measure to compare different types of coupling.
no code implementations • NeurIPS 2012 • Krikamol Muandet, Kenji Fukumizu, Francesco Dinuzzo, Bernhard Schölkopf
This paper presents a kernel-based discriminative learning framework on probability measures.
no code implementations • NeurIPS 2012 • Francesco Dinuzzo, Bernhard Schölkopf
In particular, the main result of this paper implies that, for a sufficiently large family of regularization functionals, radial nondecreasing functions are the only lower semicontinuous regularization terms that guarantee existence of a representer theorem for any choice of the data.
no code implementations • NeurIPS 2012 • David Lopez-Paz, Jose M. Hernández-Lobato, Bernhard Schölkopf
A new framework based on the theory of copulas is proposed to address semi-supervised domain adaptation problems.
no code implementations • NeurIPS 2011 • Joris M. Mooij, Dominik Janzing, Tom Heskes, Bernhard Schölkopf
We study a particular class of cyclic causal models, where each variable is a (possibly nonlinear) function of its parents and additive noise.
no code implementations • NeurIPS 2011 • Carsten Rother, Martin Kiefel, Lumin Zhang, Bernhard Schölkopf, Peter V. Gehler
We address the challenging task of decoupling material properties from lighting properties given a single image.
no code implementations • NeurIPS 2010 • Mauricio Alvarez, Jan R. Peters, Neil D. Lawrence, Bernhard Schölkopf
Latent force models encode the interaction between multiple related dynamical systems in the form of a kernel or covariance function.
no code implementations • NeurIPS 2010 • Stefan Harmeling, Michael Hirsch, Bernhard Schölkopf
Modelling camera shake as a space-invariant convolution simplifies the problem of removing it, but often fails to capture actual motion blur, such as blur caused by camera rotation, by movement outside the sensor plane, or by objects at different distances from the camera.
no code implementations • NeurIPS 2010 • Oliver Stegle, Dominik Janzing, Kun Zhang, Joris M. Mooij, Bernhard Schölkopf
To this end, we consider the hypothetical effect variable to be a function of the hypothetical cause variable and an independent noise term (not necessarily additive).