Search Results for author: Bernhard Schölkopf

Found 342 papers, 142 papers with code

Learning explanations that are hard to vary

3 code implementations ICLR 2021 Giambattista Parascandolo, Alexander Neitz, Antonio Orvieto, Luigi Gresele, Bernhard Schölkopf

In this paper, we investigate the principle that 'good explanations are hard to vary' in the context of deep learning.

Memorization

Parameter-Efficient Orthogonal Finetuning via Butterfly Factorization

1 code implementation 10 Nov 2023 Weiyang Liu, Zeju Qiu, Yao Feng, Yuliang Xiu, Yuxuan Xue, Longhui Yu, Haiwen Feng, Zhen Liu, Juyeon Heo, Songyou Peng, Yandong Wen, Michael J. Black, Adrian Weller, Bernhard Schölkopf

We apply this parameterization to OFT, creating a novel parameter-efficient finetuning method, called Orthogonal Butterfly (BOFT).

Moûsai: Text-to-Music Generation with Long-Context Latent Diffusion

1 code implementation 27 Jan 2023 Flavio Schneider, Ojasv Kamal, Zhijing Jin, Bernhard Schölkopf

Recent years have seen the rapid development of large generative models for text; however, much less research has explored the connection between text and another "language" of communication -- music.

Image Generation Music Generation +1

From Variational to Deterministic Autoencoders

4 code implementations ICLR 2020 Partha Ghosh, Mehdi S. M. Sajjadi, Antonio Vergari, Michael Black, Bernhard Schölkopf

Variational Autoencoders (VAEs) provide a theoretically-backed and popular framework for deep generative models.

Density Estimation

Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations

8 code implementations ICML 2019 Francesco Locatello, Stefan Bauer, Mario Lucic, Gunnar Rätsch, Sylvain Gelly, Bernhard Schölkopf, Olivier Bachem

The key idea behind the unsupervised learning of disentangled representations is that real-world data is generated by a few explanatory factors of variation which can be recovered by unsupervised learning algorithms.

Disentanglement

Weakly-Supervised Disentanglement Without Compromises

3 code implementations ICML 2020 Francesco Locatello, Ben Poole, Gunnar Rätsch, Bernhard Schölkopf, Olivier Bachem, Michael Tschannen

Third, we perform a large-scale empirical study and show that such pairs of observations are sufficient to reliably learn disentangled representations on several benchmark data sets.

Disentanglement Fairness

Probable Domain Generalization via Quantile Risk Minimization

2 code implementations 20 Jul 2022 Cian Eastwood, Alexander Robey, Shashank Singh, Julius von Kügelgen, Hamed Hassani, George J. Pappas, Bernhard Schölkopf

By minimizing the $\alpha$-quantile of the predictor's risk distribution over domains, QRM seeks predictors that perform well with probability $\alpha$.

Domain Generalization
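The quantile criterion in the snippet above can be made concrete with a toy sketch. This is an illustration of the $\alpha$-quantile-of-risks idea only, not the authors' QRM implementation; the function name and the example risk values are invented for the example.

```python
import numpy as np

def quantile_risk(per_domain_risks, alpha=0.9):
    """Empirical alpha-quantile of a predictor's risk across domains.

    Minimizing this over predictors (instead of the average risk) asks
    for performance that holds with probability ~alpha over domains.
    """
    return float(np.quantile(per_domain_risks, alpha))

# A predictor with the lower *average* risk can still have a worse tail,
# which the alpha-quantile criterion penalizes.
risks_a = np.array([0.01, 0.01, 0.01, 0.90])  # lower mean, bad worst domain
risks_b = np.array([0.25, 0.25, 0.25, 0.25])  # higher mean, uniform risk
print(quantile_risk(risks_a), quantile_risk(risks_b))
```

Here predictor A wins on average risk but loses on the 0.9-quantile, which is the behavior the quantile objective is designed to flag.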

Flow Annealed Importance Sampling Bootstrap

3 code implementations 3 Aug 2022 Laurence Illing Midgley, Vincent Stimper, Gregor N. C. Simm, Bernhard Schölkopf, José Miguel Hernández-Lobato

Normalizing flows are tractable density models that can approximate complicated target distributions, e.g. Boltzmann distributions of physical systems.
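"Tractable density" here means the change-of-variables formula gives an exact log-density. A minimal sketch with a single affine layer over a standard-normal base (purely illustrative; this is not the paper's flow architecture, and the parameter values are arbitrary):

```python
import numpy as np

def affine_flow_logpdf(x, shift=1.0, log_scale=0.5):
    """Log-density under a one-layer affine flow pushed through N(0, 1).

    With x = exp(log_scale) * z + shift and z ~ N(0, 1), the change of
    variables gives log p(x) = log N(z; 0, 1) - log_scale.
    """
    z = (x - shift) / np.exp(log_scale)
    return -0.5 * (z ** 2 + np.log(2.0 * np.pi)) - log_scale

# The resulting density is exactly normalized; check by numerical integration.
xs = np.linspace(-12.0, 14.0, 20001)
dx = xs[1] - xs[0]
integral = np.exp(affine_flow_logpdf(xs)).sum() * dx
print(integral)  # close to 1.0
```

Stacking many such invertible layers (with the log-determinant terms accumulated) is what lets flows match complicated targets while keeping the density exact.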

Recurrent Independent Mechanisms

3 code implementations ICLR 2021 Anirudh Goyal, Alex Lamb, Jordan Hoffmann, Shagun Sodhani, Sergey Levine, Yoshua Bengio, Bernhard Schölkopf

Learning modular structures which reflect the dynamics of the environment can lead to better generalization and robustness to changes which only affect a few of the underlying causes.

Learning Blind Motion Deblurring

1 code implementation ICCV 2017 Patrick Wieschollek, Michael Hirsch, Bernhard Schölkopf, Hendrik P. A. Lensch

As handheld video cameras are now commonplace and available in every smartphone, images and videos can be recorded almost everywhere at anytime.

Deblurring

Learning Neural Causal Models from Unknown Interventions

2 code implementations 2 Oct 2019 Nan Rosemary Ke, Olexa Bilaniuk, Anirudh Goyal, Stefan Bauer, Hugo Larochelle, Bernhard Schölkopf, Michael C. Mozer, Chris Pal, Yoshua Bengio

Promising results have driven a recent surge of interest in continuous optimization methods for Bayesian network structure learning from observational data.

Meta-Learning

Algorithmic Recourse: from Counterfactual Explanations to Interventions

2 code implementations 14 Feb 2020 Amir-Hossein Karimi, Bernhard Schölkopf, Isabel Valera

As machine learning is increasingly used to inform consequential decision-making (e.g., pre-trial bail and loan approval), it becomes important to explain how the system arrived at its decision, and also suggest actions to achieve a favorable decision.

counterfactual Decision Making

CLadder: Assessing Causal Reasoning in Language Models

1 code implementation NeurIPS 2023 Zhijing Jin, Yuen Chen, Felix Leeb, Luigi Gresele, Ojasv Kamal, Zhiheng Lyu, Kevin Blin, Fernando Gonzalez Adauto, Max Kleiman-Weiner, Mrinmaya Sachan, Bernhard Schölkopf

Much of the existing work in natural language processing (NLP) focuses on evaluating commonsense causal reasoning in LLMs, thus failing to assess whether a model can perform causal inference in accordance with a set of well-defined formal rules.

Causal Inference Commonsense Causal Reasoning +1

AdaGAN: Boosting Generative Models

1 code implementation NeurIPS 2017 Ilya Tolstikhin, Sylvain Gelly, Olivier Bousquet, Carl-Johann Simon-Gabriel, Bernhard Schölkopf

Generative Adversarial Networks (GAN) (Goodfellow et al., 2014) are an effective method for training generative models of complex data such as natural images.

Quantifying the Effects of Contact Tracing, Testing, and Containment Measures in the Presence of Infection Hotspots

2 code implementations 15 Apr 2020 Lars Lorch, Heiner Kremer, William Trouleau, Stratis Tsirtsis, Aron Szanto, Bernhard Schölkopf, Manuel Gomez-Rodriguez

Multiple lines of evidence strongly suggest that infection hotspots, where a single individual infects many others, play a key role in the transmission dynamics of COVID-19.

Bayesian Optimization Point Processes

Can Large Language Models Infer Causation from Correlation?

1 code implementation 9 Jun 2023 Zhijing Jin, Jiarui Liu, Zhiheng Lyu, Spencer Poff, Mrinmaya Sachan, Rada Mihalcea, Mona Diab, Bernhard Schölkopf

In this work, we propose the first benchmark dataset to test the pure causal inference skills of large language models (LLMs).

Causal Inference

Generalization and Robustness Implications in Object-Centric Learning

1 code implementation 1 Jul 2021 Andrea Dittadi, Samuele Papa, Michele De Vita, Bernhard Schölkopf, Ole Winther, Francesco Locatello

The idea behind object-centric representation learning is that natural scenes can better be modeled as compositions of objects and their relations as opposed to distributed representations.

Inductive Bias Object +3

Logical Fallacy Detection

2 code implementations 28 Feb 2022 Zhijing Jin, Abhinav Lalwani, Tejas Vaidhya, Xiaoyu Shen, Yiwen Ding, Zhiheng Lyu, Mrinmaya Sachan, Rada Mihalcea, Bernhard Schölkopf

In this paper, we propose the task of logical fallacy detection, and provide a new dataset (Logic) of logical fallacies generally found in text, together with an additional challenge set for detecting logical fallacies in climate change claims (LogicClimate).

Language Modelling Logical Fallacies +2

Towards Principled Disentanglement for Domain Generalization

1 code implementation CVPR 2022 Hanlin Zhang, Yi-Fan Zhang, Weiyang Liu, Adrian Weller, Bernhard Schölkopf, Eric P. Xing

To tackle this challenge, we first formalize the OOD generalization problem as constrained optimization, called Disentanglement-constrained Domain Generalization (DDG).

Disentanglement Domain Generalization

Unifying distillation and privileged information

1 code implementation 11 Nov 2015 David Lopez-Paz, Léon Bottou, Bernhard Schölkopf, Vladimir Vapnik

Distillation (Hinton et al., 2015) and privileged information (Vapnik & Izmailov, 2015) are two techniques that enable machines to learn from other machines.

Deconfounding Reinforcement Learning in Observational Settings

1 code implementation 26 Dec 2018 Chaochao Lu, Bernhard Schölkopf, José Miguel Hernández-Lobato

Using this benchmark, we demonstrate that the proposed algorithms are superior to traditional RL methods in confounded environments with observational data.

OpenAI Gym reinforcement-learning +1

Multidimensional Contrast Limited Adaptive Histogram Equalization

1 code implementation 26 Jun 2019 Vincent Stimper, Stefan Bauer, Ralph Ernstorfer, Bernhard Schölkopf, R. Patrick Xian

Contrast enhancement is an important preprocessing technique for improving the performance of downstream tasks in image processing and computer vision.

Amortized Inference for Causal Structure Learning

1 code implementation 25 May 2022 Lars Lorch, Scott Sussex, Jonas Rothfuss, Andreas Krause, Bernhard Schölkopf

Rather than searching over structures, we train a variational inference model to directly predict the causal structure from observational or interventional data.

Causal Discovery Inductive Bias +1

Deep Energy Estimator Networks

1 code implementation 21 May 2018 Saeed Saremi, Arash Mehrjou, Bernhard Schölkopf, Aapo Hyvärinen

We present the utility of DEEN in learning the energy, the score function, and in single-step denoising experiments for synthetic and high-dimensional data.

Denoising Density Estimation

DiBS: Differentiable Bayesian Structure Learning

2 code implementations NeurIPS 2021 Lars Lorch, Jonas Rothfuss, Bernhard Schölkopf, Andreas Krause

In this work, we propose a general, fully differentiable framework for Bayesian structure learning (DiBS) that operates in the continuous space of a latent probabilistic graph representation.

Causal Discovery Variational Inference

Real-time gravitational-wave science with neural posterior estimation

1 code implementation 23 Jun 2021 Maximilian Dax, Stephen R. Green, Jonathan Gair, Jakob H. Macke, Alessandra Buonanno, Bernhard Schölkopf

We demonstrate unprecedented accuracy for rapid gravitational-wave parameter estimation with deep learning.

Group equivariant neural posterior estimation

1 code implementation ICLR 2022 Maximilian Dax, Stephen R. Green, Jonathan Gair, Michael Deistler, Bernhard Schölkopf, Jakob H. Macke

We here describe an alternative method to incorporate equivariances under joint transformations of parameters and data.

Neural Importance Sampling for Rapid and Reliable Gravitational-Wave Inference

1 code implementation 11 Oct 2022 Maximilian Dax, Stephen R. Green, Jonathan Gair, Michael Pürrer, Jonas Wildberger, Jakob H. Macke, Alessandra Buonanno, Bernhard Schölkopf

This shows a median sample efficiency of $\approx 10\%$ (two orders of magnitude better than standard samplers) as well as a ten-fold reduction in the statistical uncertainty in the log evidence.
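The "sample efficiency" quoted above is conventionally the effective sample size (ESS) of the importance weights divided by the number of samples. A generic sketch of that statistic (standard definition, not the paper's gravitational-wave inference pipeline; the function name is invented):

```python
import numpy as np

def sample_efficiency(log_weights):
    """Effective-sample-size efficiency of importance weights.

    ESS = (sum w)^2 / sum(w^2), computed stably from log-weights.
    An efficiency ESS/n of ~0.1 means the weighted sample is worth
    about a tenth of its nominal size.
    """
    lw = np.asarray(log_weights, dtype=float)
    w = np.exp(lw - lw.max())        # subtract max for numerical stability
    return float(w.sum() ** 2 / (w ** 2).sum() / w.size)

print(sample_efficiency(np.zeros(100)))   # equal weights -> 1.0
print(sample_efficiency([0.0, -100.0]))   # one dominant weight -> ~0.5
```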

Resampling Base Distributions of Normalizing Flows

1 code implementation 29 Oct 2021 Vincent Stimper, Bernhard Schölkopf, José Miguel Hernández-Lobato

Normalizing flows are a popular class of models for approximating probability distributions.

Ranked #47 on Image Generation on CIFAR-10 (bits/dimension metric)

Density Estimation Image Generation

When to Make Exceptions: Exploring Language Models as Accounts of Human Moral Judgment

1 code implementation 4 Oct 2022 Zhijing Jin, Sydney Levine, Fernando Gonzalez, Ojasv Kamal, Maarten Sap, Mrinmaya Sachan, Rada Mihalcea, Josh Tenenbaum, Bernhard Schölkopf

Using a state-of-the-art large language model (LLM) as a basis, we propose a novel moral chain of thought (MORALCOT) prompting strategy that combines the strengths of LLMs with theories of moral reasoning developed in cognitive science to predict human moral judgments.

Language Modelling Large Language Model +1

Causal Curiosity: RL Agents Discovering Self-supervised Experiments for Causal Representation Learning

1 code implementation 7 Oct 2020 Sumedh A. Sontakke, Arash Mehrjou, Laurent Itti, Bernhard Schölkopf

Inspired by this, we attempt to equip reinforcement learning agents with the ability to perform experiments that facilitate a categorization of the rolled-out trajectories, and to subsequently infer the causal factors of the environment in a hierarchical manner.

Representation Learning Zero-Shot Learning

The Unreasonable Effectiveness of Texture Transfer for Single Image Super-resolution

1 code implementation 31 Jul 2018 Muhammad Waleed Gondal, Bernhard Schölkopf, Michael Hirsch

Moreover, we show that a texture representation of those deep features better capture the perceptual quality of an image than the original deep features.

General Classification Image Reconstruction +1

Algorithmic recourse under imperfect causal knowledge: a probabilistic approach

1 code implementation NeurIPS 2020 Amir-Hossein Karimi, Julius von Kügelgen, Bernhard Schölkopf, Isabel Valera

Recent work has discussed the limitations of counterfactual explanations to recommend actions for algorithmic recourse, and argued for the need of taking causal relationships between features into consideration.

counterfactual

Visual Representation Learning Does Not Generalize Strongly Within the Same Domain

1 code implementation ICLR 2022 Lukas Schott, Julius von Kügelgen, Frederik Träuble, Peter Gehler, Chris Russell, Matthias Bethge, Bernhard Schölkopf, Francesco Locatello, Wieland Brendel

An important component for generalization in machine learning is to uncover underlying latent factors of variation as well as the mechanism through which each factor acts in the world.

Representation Learning

Convolutional neural networks: a magic bullet for gravitational-wave detection?

2 code implementations 18 Apr 2019 Timothy D. Gebhard, Niki Kilbertus, Ian Harry, Bernhard Schölkopf

In the last few years, machine learning techniques, in particular convolutional neural networks, have been investigated as a method to replace or complement traditional matched filtering techniques that are used to detect the gravitational-wave signature of merging black holes.

Astronomy BIG-bench Machine Learning +1

Invariant Models for Causal Transfer Learning

1 code implementation 19 Jul 2015 Mateo Rojas-Carulla, Bernhard Schölkopf, Richard Turner, Jonas Peters

We focus on the problem of Domain Generalization, in which no examples from the test task are observed.

Domain Generalization Transfer Learning

Optimal Decision Making Under Strategic Behavior

1 code implementation 22 May 2019 Stratis Tsirtsis, Behzad Tabibian, Moein Khajehnejad, Adish Singla, Bernhard Schölkopf, Manuel Gomez-Rodriguez

Using this characterization, we first show that, in general, we cannot expect to find optimal decision policies in polynomial time and there are cases in which deterministic policies are suboptimal.

Decision Making

Bayesian Online Prediction of Change Points

1 code implementation 12 Feb 2019 Diego Agudelo-España, Sebastian Gomez-Gonzalez, Stefan Bauer, Bernhard Schölkopf, Jan Peters

Online detection of instantaneous changes in the generative process of a data sequence generally focuses on retrospective inference of such change points without considering their future occurrences.

Bayesian Inference Change Point Detection

Kernel Mean Matching for Content Addressability of GANs

1 code implementation 14 May 2019 Wittawat Jitkrittum, Patsorn Sangkloy, Muhammad Waleed Gondal, Amit Raj, James Hays, Bernhard Schölkopf

We propose a novel procedure which adds "content-addressability" to any given unconditional implicit model, e.g., a generative adversarial network (GAN).

Generative Adversarial Network Image Generation

Interventions, Where and How? Experimental Design for Causal Models at Scale

1 code implementation 3 Mar 2022 Panagiotis Tigas, Yashas Annadani, Andrew Jesson, Bernhard Schölkopf, Yarin Gal, Stefan Bauer

Existing methods in experimental design for causal discovery from limited data either rely on linear assumptions for the SCM or select only the intervention target.

Causal Discovery Experimental Design

Membership Inference Attacks against Language Models via Neighbourhood Comparison

1 code implementation 29 May 2023 Justus Mattern, FatemehSadat Mireshghallah, Zhijing Jin, Bernhard Schölkopf, Mrinmaya Sachan, Taylor Berg-Kirkpatrick

To investigate whether this fragility provides a layer of safety, we propose and evaluate neighbourhood attacks, which compare model scores for a given sample to scores of synthetically generated neighbour texts and therefore eliminate the need for access to the training data distribution.

SE(3) Equivariant Augmented Coupling Flows

1 code implementation NeurIPS 2023 Laurence I. Midgley, Vincent Stimper, Javier Antorán, Emile Mathieu, Bernhard Schölkopf, José Miguel Hernández-Lobato

Coupling normalizing flows allow for fast sampling and density evaluation, making them the tool of choice for probabilistic modeling of physical systems.

AutoML Two-Sample Test

3 code implementations 17 Jun 2022 Jonas M. Kübler, Vincent Stimper, Simon Buchholz, Krikamol Muandet, Bernhard Schölkopf

Two-sample tests are important in statistics and machine learning, both as tools for scientific discovery as well as to detect distribution shifts.

AutoML Two-sample testing +1

Informative Features for Model Comparison

3 code implementations NeurIPS 2018 Wittawat Jitkrittum, Heishiro Kanagawa, Patsorn Sangkloy, James Hays, Bernhard Schölkopf, Arthur Gretton

Given two candidate models, and a set of target observations, we address the problem of measuring the relative goodness of fit of the two models.

Causality for Machine Learning

1 code implementation 24 Nov 2019 Bernhard Schölkopf

Graphical causal inference as pioneered by Judea Pearl arose from research on artificial intelligence (AI), and for a long time had little connection to the field of machine learning.

BIG-bench Machine Learning Causal Inference

First-order Adversarial Vulnerability of Neural Networks and Input Dimension

1 code implementation ICLR 2019 Carl-Johann Simon-Gabriel, Yann Ollivier, Léon Bottou, Bernhard Schölkopf, David Lopez-Paz

Over the past few years, neural networks were proven vulnerable to adversarial images: targeted but imperceptible image perturbations lead to drastically different predictions.

Source-Free Adaptation to Measurement Shift via Bottom-Up Feature Restoration

1 code implementation ICLR 2022 Cian Eastwood, Ian Mason, Christopher K. I. Williams, Bernhard Schölkopf

Existing methods for SFDA leverage entropy-minimization techniques which: (i) apply only to classification; (ii) destroy model calibration; and (iii) rely on the source model achieving a good level of feature-space class-separation in the target domain.

Source-Free Domain Adaptation

Causal Component Analysis

1 code implementation NeurIPS 2023 Liang Wendong, Armin Kekić, Julius von Kügelgen, Simon Buchholz, Michel Besserve, Luigi Gresele, Bernhard Schölkopf

As a corollary, this interventional perspective also leads to new identifiability results for nonlinear ICA -- a special case of CauCA with an empty graph -- requiring strictly fewer datasets than previous results.

Representation Learning

Benchmarking Offline Reinforcement Learning on Real-Robot Hardware

2 code implementations 28 Jul 2023 Nico Gürtler, Sebastian Blaes, Pavel Kolev, Felix Widmaier, Manuel Wüthrich, Stefan Bauer, Bernhard Schölkopf, Georg Martius

To coordinate the efforts of the research community toward tackling this problem, we propose a benchmark including: i) a large collection of data for offline learning from a dexterous manipulation platform on two tasks, obtained with capable RL agents trained in simulation; ii) the option to execute learned policies on a real-world robotic system and a simulation for efficient debugging.

Benchmarking reinforcement-learning

Flow Matching for Scalable Simulation-Based Inference

1 code implementation NeurIPS 2023 Maximilian Dax, Jonas Wildberger, Simon Buchholz, Stephen R. Green, Jakob H. Macke, Bernhard Schölkopf

Neural posterior estimation methods based on discrete normalizing flows have become established tools for simulation-based inference (SBI), but scaling them to high-dimensional problems can be challenging.

Automatic Estimation of Modulation Transfer Functions

1 code implementation 4 May 2018 Matthias Bauer, Valentin Volchkov, Michael Hirsch, Bernhard Schölkopf

The modulation transfer function (MTF) is widely used to characterise the performance of optical systems.

Discovering Causal Signals in Images

2 code implementations CVPR 2017 David Lopez-Paz, Robert Nishihara, Soumith Chintala, Bernhard Schölkopf, Léon Bottou

Our experiments demonstrate the existence of a relation between the direction of causality and the difference between objects and their contexts, and by the same token, the existence of observable signals that reveal the causal dispositions of objects.

Causal Discovery

Predicting Infectiousness for Proactive Contact Tracing

1 code implementation ICLR 2021 Yoshua Bengio, Prateek Gupta, Tegan Maharaj, Nasim Rahaman, Martin Weiss, Tristan Deleu, Eilif Muller, Meng Qu, Victor Schmidt, Pierre-Luc St-Charles, Hannah Alsdurf, Olexa Bilaniuk, David Buckeridge, Gáetan Marceau Caron, Pierre-Luc Carrier, Joumana Ghosn, Satya Ortiz-Gagne, Chris Pal, Irina Rish, Bernhard Schölkopf, Abhinav Sharma, Jian Tang, Andrew Williams

Predictions are used to provide personalized recommendations to the individual via an app, as well as to send anonymized messages to the individual's contacts, who use this information to better predict their own infectiousness, an approach we call proactive contact tracing (PCT).

A Causal Framework to Quantify the Robustness of Mathematical Reasoning with Language Models

1 code implementation 21 Oct 2022 Alessandro Stolfo, Zhijing Jin, Kumar Shridhar, Bernhard Schölkopf, Mrinmaya Sachan

By grounding the behavioral analysis in a causal graph describing an intuitive reasoning process, we study the behavior of language models in terms of robustness and sensitivity to direct interventions in the input space.

Math Mathematical Reasoning

Kernel Distributionally Robust Optimization

2 code implementations 12 Jun 2020 Jia-Jie Zhu, Wittawat Jitkrittum, Moritz Diehl, Bernhard Schölkopf

We prove a theorem that generalizes the classical duality in the mathematical problem of moments.

Stochastic Optimization

Simpson's paradox in Covid-19 case fatality rates: a mediation analysis of age-related causal effects

1 code implementation 14 May 2020 Julius von Kügelgen, Luigi Gresele, Bernhard Schölkopf

We point out limitations and extensions for future work, and, finally, discuss the role of causal reasoning in the broader context of using AI to combat the Covid-19 pandemic.

Applications Methodology

A machine learning route between band mapping and band structure

1 code implementation 20 May 2020 Rui Patrick Xian, Vincent Stimper, Marios Zacharias, Shuo Dong, Maciej Dendzik, Samuel Beaulieu, Bernhard Schölkopf, Martin Wolf, Laurenz Rettig, Christian Carbogno, Stefan Bauer, Ralph Ernstorfer

Electronic band structure (BS) and crystal structure are the two complementary identifiers of solid state materials.

Data Analysis, Statistics and Probability Materials Science Computational Physics

Robustifying Independent Component Analysis by Adjusting for Group-Wise Stationary Noise

3 code implementations 4 Jun 2018 Niklas Pfister, Sebastian Weichwald, Peter Bühlmann, Bernhard Schölkopf

We introduce coroICA, confounding-robust independent component analysis, a novel ICA algorithm which decomposes linearly mixed multivariate observations into independent components that are corrupted (and rendered dependent) by hidden group-wise stationary confounding.

Causal Inference EEG

Differentially Private Database Release via Kernel Mean Embeddings

1 code implementation ICML 2018 Matej Balog, Ilya Tolstikhin, Bernhard Schölkopf

First, releasing (an estimate of) the kernel mean embedding of the data generating random variable instead of the database itself still allows third-parties to construct consistent estimators of a wide class of population statistics.
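The object released in the snippet above, the empirical kernel mean embedding, is simply the average of kernel functions centered at the data points, μ̂(·) = (1/n) Σᵢ k(xᵢ, ·). A small sketch with a Gaussian kernel (illustration of the embedding only; the private-release mechanism of the paper is not shown, and the function names are invented):

```python
import numpy as np

def rbf_kernel(x, y, gamma=1.0):
    """Gaussian kernel k(x, y) = exp(-gamma * ||x - y||^2)."""
    return np.exp(-gamma * np.sum((np.asarray(x) - np.asarray(y)) ** 2))

def empirical_mean_embedding(data, gamma=1.0):
    """mu_hat(.) = (1/n) * sum_i k(x_i, .), returned as a callable.

    Releasing mu_hat (or a perturbed version of it) lets third parties
    estimate population expectations of functions in the kernel's RKHS
    without access to the raw database.
    """
    return lambda t: float(np.mean([rbf_kernel(x, t, gamma) for x in data]))

data = [[0.0], [1.0], [2.0]]
mu_hat = empirical_mean_embedding(data)
print(mu_hat([1.0]))  # (k(0,1) + k(1,1) + k(2,1)) / 3
```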

Learning Independent Causal Mechanisms

1 code implementation ICML 2018 Giambattista Parascandolo, Niki Kilbertus, Mateo Rojas-Carulla, Bernhard Schölkopf

The approach is unsupervised and based on a set of experts that compete for data generated by the mechanisms, driving specialization.

Transfer Learning

Testing Goodness of Fit of Conditional Density Models with Kernels

1 code implementation 24 Feb 2020 Wittawat Jitkrittum, Heishiro Kanagawa, Bernhard Schölkopf

We propose two nonparametric statistical tests of goodness of fit for conditional distributions: given a conditional probability density function $p(y|x)$ and a joint sample, decide whether the sample is drawn from $p(y|x)r_x(x)$ for some density $r_x$.

Two-sample testing

Assaying Out-Of-Distribution Generalization in Transfer Learning

1 code implementation 19 Jul 2022 Florian Wenzel, Andrea Dittadi, Peter Vincent Gehler, Carl-Johann Simon-Gabriel, Max Horn, Dominik Zietlow, David Kernert, Chris Russell, Thomas Brox, Bernt Schiele, Bernhard Schölkopf, Francesco Locatello

Since out-of-distribution generalization is a generally ill-posed problem, various proxy targets (e. g., calibration, adversarial robustness, algorithmic corruptions, invariance across shifts) were studied across different research programs resulting in different recommendations.

Adversarial Robustness Out-of-Distribution Generalization +1

Adversarial Extreme Multi-label Classification

1 code implementation 5 Mar 2018 Rohit Babbar, Bernhard Schölkopf

The goal in extreme multi-label classification is to learn a classifier which can assign a small subset of relevant labels to an instance from an extremely large set of target labels.

Classification Extreme Multi-Label Classification +1

ODIN: ODE-Informed Regression for Parameter and State Inference in Time-Continuous Dynamical Systems

2 code implementations 17 Feb 2019 Philippe Wenk, Gabriele Abbati, Michael A. Osborne, Bernhard Schölkopf, Andreas Krause, Stefan Bauer

Parameter inference in ordinary differential equations is an important problem in many applied sciences and in engineering, especially in a data-scarce setting.

Gaussian Processes Model Selection +1

MYND: Unsupervised Evaluation of Novel BCI Control Strategies on Consumer Hardware

1 code implementation 26 Feb 2020 Matthias R. Hohmann, Lisa Konieczny, Michelle Hackl, Brian Wirth, Talha Zaman, Raffi Enficiaud, Moritz Grosse-Wentrup, Bernhard Schölkopf

We introduce MYND: A framework that couples consumer-grade recording hardware with an easy-to-use application for the unsupervised evaluation of BCI control strategies.

Human-Computer Interaction Neurons and Cognition 68U35 H.5.2

Bayesian Quadrature on Riemannian Data Manifolds

1 code implementation 12 Feb 2021 Christian Fröhlich, Alexandra Gessner, Philipp Hennig, Bernhard Schölkopf, Georgios Arvanitidis

Riemannian manifolds provide a principled way to model nonlinear geometric structure inherent in data.

Learning with Hyperspherical Uniformity

1 code implementation 2 Mar 2021 Weiyang Liu, Rongmei Lin, Zhen Liu, Li Xiong, Bernhard Schölkopf, Adrian Weller

Due to the over-parameterization nature, neural networks are a powerful tool for nonlinear function approximation.

Inductive Bias L2 Regularization

Pyfectious: An individual-level simulator to discover optimal containment polices for epidemic diseases

1 code implementation 24 Mar 2021 Arash Mehrjou, Ashkan Soleymani, Amin Abyaneh, Samir Bhatt, Bernhard Schölkopf, Stefan Bauer

Simulating the spread of infectious diseases in human communities is critical for predicting the trajectory of an epidemic and verifying various policies to control the devastating impacts of the outbreak.

The Expressive Leaky Memory Neuron: an Efficient and Expressive Phenomenological Neuron Model Can Solve Long-Horizon Tasks

1 code implementation 14 Jun 2023 Aaron Spieler, Nasim Rahaman, Georg Martius, Bernhard Schölkopf, Anna Levina

Biological cortical neurons are remarkably sophisticated computational devices, temporally integrating their vast synaptic input over an intricate dendritic tree, subject to complex, nonlinearly interacting internal biological processes.

16k Classification +4

Fair Decisions Despite Imperfect Predictions

1 code implementation 8 Feb 2019 Niki Kilbertus, Manuel Gomez-Rodriguez, Bernhard Schölkopf, Krikamol Muandet, Isabel Valera

In this paper, we show that in this selective labels setting, learning a predictor directly only from available labeled data is suboptimal in terms of both fairness and utility.

Causal Inference Decision Making +1

Semi-Supervised Learning, Causality and the Conditional Cluster Assumption

1 code implementation 28 May 2019 Julius von Kügelgen, Alexander Mey, Marco Loog, Bernhard Schölkopf

While the success of semi-supervised learning (SSL) is still not fully understood, Schölkopf et al. (2012) have established a link to the principle of independent causal mechanisms.

Kernel Stein Tests for Multiple Model Comparison

3 code implementations NeurIPS 2019 Jen Ning Lim, Makoto Yamada, Bernhard Schölkopf, Wittawat Jitkrittum

The first test, building on the post selection inference framework, provably controls the number of best models that are wrongly declared worse (false positive rate).

Instrumental Variable Regression via Kernel Maximum Moment Loss

2 code implementations 15 Oct 2020 Rui Zhang, Masaaki Imaizumi, Bernhard Schölkopf, Krikamol Muandet

We investigate a simple objective for nonlinear instrumental variable (IV) regression based on a kernelized conditional moment restriction (CMR) known as a maximum moment restriction (MMR).

regression

Causal Direction of Data Collection Matters: Implications of Causal and Anticausal Learning for NLP

1 code implementation EMNLP 2021 Zhijing Jin, Julius von Kügelgen, Jingwei Ni, Tejas Vaidhya, Ayush Kaushal, Mrinmaya Sachan, Bernhard Schölkopf

The principle of independent causal mechanisms (ICM) states that generative processes of real world data consist of independent modules which do not influence or inform each other.

Causal Inference Domain Adaptation

Estimation Beyond Data Reweighting: Kernel Method of Moments

1 code implementation 18 May 2023 Heiner Kremer, Yassine Nemmour, Bernhard Schölkopf, Jia-Jie Zhu

We provide a variant of our estimator for conditional moment restrictions and show that it is asymptotically first-order optimal for such problems.

Causal Inference

GeNet: Deep Representations for Metagenomics

5 code implementations 30 Jan 2019 Mateo Rojas-Carulla, Ilya Tolstikhin, Guillermo Luque, Nicholas Youngblut, Ruth Ley, Bernhard Schölkopf

We introduce GeNet, a method for shotgun metagenomic classification from raw DNA sequences that exploits the known hierarchical structure between labels for training.

General Classification

Neural Lyapunov Redesign

1 code implementation 6 Jun 2020 Arash Mehrjou, Mohammad Ghavamzadeh, Bernhard Schölkopf

We provide theoretical results on the class of systems that can be treated with the proposed algorithm and empirically evaluate the effectiveness of our method using an exemplary dynamical system.

Adversarially Robust Kernel Smoothing

1 code implementation 16 Feb 2021 Jia-Jie Zhu, Christina Kouridi, Yassine Nemmour, Bernhard Schölkopf

We propose a scalable robust learning algorithm combining kernel smoothing and robust optimization.

BIG-bench Machine Learning

On the Adversarial Robustness of Causal Algorithmic Recourse

1 code implementation 21 Dec 2021 Ricardo Dominguez-Olmedo, Amir-Hossein Karimi, Bernhard Schölkopf

Algorithmic recourse seeks to provide actionable recommendations for individuals to overcome unfavorable classification outcomes from automated decision-making systems.

Adversarial Robustness Decision Making

Adaptation and Robust Learning of Probabilistic Movement Primitives

1 code implementation 31 Aug 2018 Sebastian Gomez-Gonzalez, Gerhard Neumann, Bernhard Schölkopf, Jan Peters

However, to be able to capture variability and correlations between different joints, a probabilistic movement primitive requires the estimation of a larger number of parameters compared to their deterministic counterparts, which focus on modeling only the mean behavior.

Learning Kernel Tests Without Data Splitting

1 code implementation NeurIPS 2020 Jonas M. Kübler, Wittawat Jitkrittum, Bernhard Schölkopf, Krikamol Muandet

Modern large-scale kernel-based tests such as maximum mean discrepancy (MMD) and kernelized Stein discrepancy (KSD) optimize kernel hyperparameters on a held-out sample via data splitting to obtain the most powerful test statistics.
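
For context, the MMD statistic these tests optimize can be computed directly. Below is a minimal numpy sketch of the unbiased squared-MMD estimator with a Gaussian kernel; the bandwidth is fixed by hand here, whereas the paper is precisely about selecting kernel hyperparameters without holding out data (that selection step is not shown).

```python
import numpy as np

def rbf_kernel(X, Y, bandwidth=1.0):
    """Gaussian kernel matrix: k(x, y) = exp(-||x - y||^2 / (2 * bandwidth^2))."""
    sq = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2 * X @ Y.T
    return np.exp(-sq / (2 * bandwidth**2))

def mmd2_unbiased(X, Y, bandwidth=1.0):
    """Unbiased estimate of squared MMD (diagonal terms excluded)."""
    m, n = len(X), len(Y)
    Kxx = rbf_kernel(X, X, bandwidth)
    Kyy = rbf_kernel(Y, Y, bandwidth)
    Kxy = rbf_kernel(X, Y, bandwidth)
    return ((Kxx.sum() - np.trace(Kxx)) / (m * (m - 1))
            + (Kyy.sum() - np.trace(Kyy)) / (n * (n - 1))
            - 2 * Kxy.mean())

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(200, 1))
Y_shift = rng.normal(0.5, 1.0, size=(200, 1))  # mean-shifted sample
Y_same = rng.normal(0.0, 1.0, size=(200, 1))   # drawn from the same law as X

mmd_shift = mmd2_unbiased(X, Y_shift)
mmd_same = mmd2_unbiased(X, Y_same)
print(mmd_shift, mmd_same)  # the shifted pair gives the larger statistic
```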

Exploring the Latent Space of Autoencoders with Interventional Assays

1 code implementation 30 Jun 2021 Felix Leeb, Stefan Bauer, Michel Besserve, Bernhard Schölkopf

Autoencoders exhibit impressive abilities to embed the data manifold into a low-dimensional latent space, making them a staple of representation learning methods.

Disentanglement

Direct Advantage Estimation

1 code implementation 13 Sep 2021 Hsiao-Ru Pan, Nico Gürtler, Alexander Neitz, Bernhard Schölkopf

The predominant approach in reinforcement learning is to assign credit to actions based on the expected return.

Iterative Teaching by Data Hallucination

1 code implementation 31 Oct 2022 Zeju Qiu, Weiyang Liu, Tim Z. Xiao, Zhen Liu, Umang Bhatt, Yucen Luo, Adrian Weller, Bernhard Schölkopf

We consider the problem of iterative machine teaching, where a teacher sequentially provides examples based on the status of a learner under a discrete input space (i.e., a pool of finite samples), which greatly limits the teacher's capability.

Hallucination

Original or Translated? A Causal Analysis of the Impact of Translationese on Machine Translation Performance

1 code implementation NAACL 2022 Jingwei Ni, Zhijing Jin, Markus Freitag, Mrinmaya Sachan, Bernhard Schölkopf

We show that these two factors have a large causal effect on the MT performance, in addition to the test-model direction mismatch highlighted by existing work on the impact of translationese.

Machine Translation Translation

Domain generalization via invariant feature representation

1 code implementation Proceedings of Machine Learning Research 2013 Krikamol Muandet, David Balduzzi, Bernhard Schölkopf

This paper investigates domain generalization: How to take knowledge acquired from an arbitrary number of related domains and apply it to previously unseen domains?

Domain Generalization

On the Identifiability and Estimation of Causal Location-Scale Noise Models

1 code implementation 13 Oct 2022 Alexander Immer, Christoph Schultheiss, Julia E. Vogt, Bernhard Schölkopf, Peter Bühlmann, Alexander Marx

We study the class of location-scale or heteroscedastic noise models (LSNMs), in which the effect $Y$ can be written as a function of the cause $X$ and a noise source $N$ independent of $X$, which may be scaled by a positive function $g$ over the cause, i.e., $Y = f(X) + g(X)N$.

Causal Discovery Causal Inference
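
The LSNM class from this snippet is easy to simulate. The sketch below generates data from $Y = f(X) + g(X)N$ with illustrative choices of $f$ and $g$ (not taken from the paper) and shows the defining property: the conditional spread of $Y$ varies with the cause $X$.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000

# Location-scale noise model:  Y = f(X) + g(X) * N,  with N independent of X
def f(x):
    return np.tanh(2 * x)          # location (mean) function

def g(x):
    return 0.2 + 0.5 * np.abs(x)   # positive scale function

X = rng.uniform(-2, 2, size=n)
N = rng.normal(size=n)
Y = f(X) + g(X) * N

# Heteroscedasticity: the residual spread grows with |X|
resid = Y - f(X)
std_center = resid[np.abs(X) < 0.3].std()   # near x = 0, g is small
std_edge = resid[np.abs(X) > 1.7].std()     # near |x| = 2, g is large
print(std_center, std_edge)  # the edge region shows a much larger spread
```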

Generalizing and Decoupling Neural Collapse via Hyperspherical Uniformity Gap

1 code implementation 11 Mar 2023 Weiyang Liu, Longhui Yu, Adrian Weller, Bernhard Schölkopf

We then use hyperspherical uniformity (which characterizes the degree of uniformity on the unit hypersphere) as a unified framework to quantify these two objectives.

Beyond Good Intentions: Reporting the Research Landscape of NLP for Social Good

1 code implementation 9 May 2023 Fernando Gonzalez, Zhijing Jin, Bernhard Schölkopf, Tom Hope, Mrinmaya Sachan, Rada Mihalcea

Using state-of-the-art NLP models, we address each of these tasks and use them on the entire ACL Anthology, resulting in a visualization workspace that gives researchers a comprehensive overview of the field of NLP4SG.

Stochastic Marginal Likelihood Gradients using Neural Tangent Kernels

1 code implementation 6 Jun 2023 Alexander Immer, Tycho F. A. van der Ouderaa, Mark van der Wilk, Gunnar Rätsch, Bernhard Schölkopf

Recent works show that Bayesian model selection with Laplace approximations can allow to optimize such hyperparameters just like standard neural network parameters using gradients and on the training data.

Hyperparameter Optimization Model Selection

The Inductive Bias of Quantum Kernels

1 code implementation NeurIPS 2021 Jonas M. Kübler, Simon Buchholz, Bernhard Schölkopf

Quantum computers offer the possibility to efficiently compute inner products of exponentially large density operators that are classically hard to compute.

Inductive Bias Quantum Machine Learning

Half-sibling regression meets exoplanet imaging: PSF modeling and subtraction using a flexible, domain knowledge-driven, causal framework

1 code implementation 7 Apr 2022 Timothy D. Gebhard, Markus J. Bonse, Sascha P. Quanz, Bernhard Schölkopf

Our HSR-based method provides an alternative, flexible and promising approach to the challenge of modeling and subtracting the stellar PSF and systematic noise in exoplanet imaging data.

Denoising Pupil Tracking +1

Federated Causal Discovery From Interventions

3 code implementations 7 Nov 2022 Amin Abyaneh, Nino Scherrer, Patrick Schwab, Stefan Bauer, Bernhard Schölkopf, Arash Mehrjou

We propose FedCDI, a federated framework for inferring causal structures from distributed data containing interventional samples.

Causal Discovery Federated Learning +1

All Roads Lead to Rome? Exploring the Invariance of Transformers' Representations

1 code implementation 23 May 2023 Yuxin Ren, Qipeng Guo, Zhijing Jin, Shauli Ravfogel, Mrinmaya Sachan, Bernhard Schölkopf, Ryan Cotterell

Transformer models bring propelling advances in various NLP tasks, thus inducing lots of interpretability research on the learned representations of the models.

A diverse Multilingual News Headlines Dataset from around the World

1 code implementation 28 Mar 2024 Felix Leeb, Bernhard Schölkopf

Babel Briefings is a novel dataset featuring 4.7 million news headlines from August 2020 to November 2021, across 30 languages and 54 locations worldwide, with English translations of all articles included.

SLEIPNIR: Deterministic and Provably Accurate Feature Expansion for Gaussian Process Regression with Derivatives

1 code implementation 5 Mar 2020 Emmanouil Angelis, Philippe Wenk, Bernhard Schölkopf, Stefan Bauer, Andreas Krause

Gaussian processes are an important regression tool with excellent analytic properties which allow for direct integration of derivative observations.

Gaussian Processes regression

Functional Generalized Empirical Likelihood Estimation for Conditional Moment Restrictions

1 code implementation 11 Jul 2022 Heiner Kremer, Jia-Jie Zhu, Krikamol Muandet, Bernhard Schölkopf

Important problems in causal inference, economics, and, more generally, robust machine learning can be expressed as conditional moment restrictions, but estimation becomes challenging as it requires solving a continuum of unconditional moment restrictions.

BIG-bench Machine Learning Causal Inference

Parameterizing pressure-temperature profiles of exoplanet atmospheres with neural networks

1 code implementation 6 Sep 2023 Timothy D. Gebhard, Daniel Angerhausen, Björn S. Konrad, Eleonora Alei, Sascha P. Quanz, Bernhard Schölkopf

When training and evaluating our method on two publicly available datasets of self-consistent PT profiles, we find that our method achieves, on average, better fit quality than existing baseline methods, despite using fewer parameters.

Bayesian Inference

Robustness Implies Fairness in Causal Algorithmic Recourse

2 code implementations 7 Feb 2023 Ahmad-Reza Ehyaei, Amir-Hossein Karimi, Bernhard Schölkopf, Setareh Maghsudi

Algorithmic recourse aims to disclose the inner workings of the black-box decision process in situations where decisions have significant consequences, by providing recommendations to empower beneficiaries to achieve a more favorable outcome.

Adversarial Robustness Fairness

Competition of Mechanisms: Tracing How Language Models Handle Facts and Counterfactuals

1 code implementation 18 Feb 2024 Francesco Ortu, Zhijing Jin, Diego Doimo, Mrinmaya Sachan, Alberto Cazzaniga, Bernhard Schölkopf

Interpretability research aims to bridge the gap between the empirical success and our scientific understanding of the inner workings of large language models (LLMs).

Tempered Adversarial Networks

no code implementations ICML 2018 Mehdi S. M. Sajjadi, Giambattista Parascandolo, Arash Mehrjou, Bernhard Schölkopf

A possible explanation for training instabilities is the inherent imbalance between the networks: While the discriminator is trained directly on both real and fake samples, the generator only has control over the fake samples it produces since the real data distribution is fixed by the choice of a given dataset.

On Matching Pursuit and Coordinate Descent

no code implementations ICML 2018 Francesco Locatello, Anant Raj, Sai Praneeth Karimireddy, Gunnar Rätsch, Bernhard Schölkopf, Sebastian U. Stich, Martin Jaggi

Exploiting the connection between the two algorithms, we present a unified analysis of both, providing affine invariant sublinear $\mathcal{O}(1/t)$ rates on smooth objectives and linear convergence on strongly convex objectives.

Competitive Training of Mixtures of Independent Deep Generative Models

no code implementations 30 Apr 2018 Francesco Locatello, Damien Vincent, Ilya Tolstikhin, Gunnar Rätsch, Sylvain Gelly, Bernhard Schölkopf

A common assumption in causal modeling posits that the data is generated by a set of independent mechanisms, and algorithms should aim to recover this structure.

Clustering

A Local Information Criterion for Dynamical Systems

no code implementations 27 May 2018 Arash Mehrjou, Friedrich Solowjow, Sebastian Trimpe, Bernhard Schölkopf

Apart from its application for encoding a sequence of observations, we propose to use the compression achieved by this encoding as a criterion for model selection.

Model Selection

Fidelity-Weighted Learning

no code implementations ICLR 2018 Mostafa Dehghani, Arash Mehrjou, Stephan Gouws, Jaap Kamps, Bernhard Schölkopf

To this end, we propose "fidelity-weighted learning" (FWL), a semi-supervised student-teacher approach for training deep neural networks using weakly-labeled data.

Ad-Hoc Information Retrieval Information Retrieval +1

Experimental and causal view on information integration in autonomous agents

no code implementations 14 Jun 2016 Philipp Geiger, Katja Hofmann, Bernhard Schölkopf

The amount of digitally available but heterogeneous information about the world is remarkable, and new technologies such as self-driving cars, smart homes, or the internet of things may further increase it.

Decision Making Self-Driving Cars +1

Analysis of cause-effect inference by comparing regression errors

no code implementations 19 Feb 2018 Patrick Blöbaum, Dominik Janzing, Takashi Washio, Shohei Shimizu, Bernhard Schölkopf

We address the problem of inferring the causal direction between two variables by comparing the least-squares errors of the predictions in both possible directions.

Causal Inference regression
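
The idea in this snippet can be illustrated with a toy example. The sketch below is not the paper's exact estimator or assumptions; it simply standardizes both variables, fits a flexible (here polynomial, an illustrative choice) regressor in each direction, and reads off the direction with the smaller least-squares error.

```python
import numpy as np

def fit_mse(a, b, degree=4):
    """Least-squares error of predicting b from a with a polynomial fit."""
    coeffs = np.polyfit(a, b, degree)
    return np.mean((b - np.polyval(coeffs, a)) ** 2)

rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, size=2000)
Y = X**3 + 0.02 * rng.normal(size=2000)   # ground truth: X causes Y

# Standardize, then compare regression errors in the two possible directions
Xs = (X - X.mean()) / X.std()
Ys = (Y - Y.mean()) / Y.std()
err_xy = fit_mse(Xs, Ys)   # regress (true) effect on cause
err_yx = fit_mse(Ys, Xs)   # regress cause on effect
print("X -> Y" if err_xy < err_yx else "Y -> X")
```

In this example the anti-causal fit must approximate a cube-root relation, which a low-degree polynomial handles poorly, so the causal direction yields the smaller error.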

Avoiding Discrimination through Causal Reasoning

no code implementations NeurIPS 2017 Niki Kilbertus, Mateo Rojas-Carulla, Giambattista Parascandolo, Moritz Hardt, Dominik Janzing, Bernhard Schölkopf

Going beyond observational criteria, we frame the problem of discrimination based on protected attributes in the language of causal reasoning.

Attribute Fairness

Local Group Invariant Representations via Orbit Embeddings

no code implementations 6 Dec 2016 Anant Raj, Abhishek Kumar, Youssef Mroueh, P. Thomas Fletcher, Bernhard Schölkopf

We consider transformations that form a \emph{group} and propose an approach based on kernel methods to derive local group invariant representations.

Rotated MNIST

Annealed Generative Adversarial Networks

no code implementations 21 May 2017 Arash Mehrjou, Bernhard Schölkopf, Saeed Saremi

We introduce a novel framework for adversarial training where the target distribution is annealed between the uniform distribution and the data distribution.

Group invariance principles for causal generative models

no code implementations 5 May 2017 Michel Besserve, Naji Shajarisales, Bernhard Schölkopf, Dominik Janzing

The postulate of independence of cause and mechanism (ICM) has recently led to several new causal discovery algorithms.

BIG-bench Machine Learning Causal Discovery

A note on the expected minimum error probability in equientropic channels

no code implementations 23 May 2016 Sebastian Weichwald, Tatiana Fomina, Bernhard Schölkopf, Moritz Grosse-Wentrup

While the channel capacity reflects a theoretical upper bound on the achievable information transmission rate in the limit of infinitely many bits, it does not characterise the information transfer of a given encoding routine with finitely many bits.

Distilling Information Reliability and Source Trustworthiness from Digital Traces

no code implementations 24 Oct 2016 Behzad Tabibian, Isabel Valera, Mehrdad Farajtabar, Le Song, Bernhard Schölkopf, Manuel Gomez-Rodriguez

Then, we propose a temporal point process modeling framework that links these temporal traces to robust, unbiased and interpretable notions of information reliability and source trustworthiness.

Discriminative Transfer Learning for General Image Restoration

no code implementations 27 Mar 2017 Lei Xiao, Felix Heide, Wolfgang Heidrich, Bernhard Schölkopf, Michael Hirsch

Recently, several discriminative learning approaches have been proposed for effective image restoration, achieving convincing trade-off between image quality and computational efficiency.

Computational Efficiency Deblurring +3

Kernel Mean Embedding of Distributions: A Review and Beyond

no code implementations 31 May 2016 Krikamol Muandet, Kenji Fukumizu, Bharath Sriperumbudur, Bernhard Schölkopf

Next, we discuss the Hilbert space embedding for conditional distributions, give theoretical insights, and review some applications.

Causal Discovery Two-sample testing

Kernel-based Tests for Joint Independence

no code implementations 1 Mar 2016 Niklas Pfister, Peter Bühlmann, Bernhard Schölkopf, Jonas Peters

Based on an empirical estimate of dHSIC, we define three different non-parametric hypothesis tests: a permutation test, a bootstrap test and a test based on a Gamma approximation.

Causal Discovery
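
The permutation test mentioned in this snippet can be sketched for the two-variable case, where dHSIC reduces to the standard HSIC statistic. The kernel, bandwidth, and sample sizes below are illustrative choices, not the paper's setup.

```python
import numpy as np

def rbf_gram(v, bandwidth=1.0):
    d2 = (v[:, None] - v[None, :]) ** 2
    return np.exp(-d2 / (2 * bandwidth**2))

def hsic(x, y, bandwidth=1.0):
    """Biased empirical HSIC: trace(K H L H) / n^2, H the centering matrix."""
    n = len(x)
    H = np.eye(n) - np.ones((n, n)) / n
    return np.trace(rbf_gram(x, bandwidth) @ H @ rbf_gram(y, bandwidth) @ H) / n**2

def permutation_pvalue(x, y, n_perm=100, seed=0):
    """Shuffle y to simulate the null hypothesis of independence."""
    rng = np.random.default_rng(seed)
    stat = hsic(x, y)
    null = [hsic(x, rng.permutation(y)) for _ in range(n_perm)]
    return (1 + sum(s >= stat for s in null)) / (1 + n_perm)

rng = np.random.default_rng(3)
x = rng.normal(size=100)
y_dep = x**2 + 0.1 * rng.normal(size=100)  # dependent on x, yet uncorrelated
y_ind = rng.normal(size=100)               # independent of x

p_dep = permutation_pvalue(x, y_dep)
p_ind = permutation_pvalue(x, y_ind)
print(p_dep, p_ind)  # small p-value only for the dependent pair
```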

Screening Rules for Convex Problems

no code implementations 23 Sep 2016 Anant Raj, Jakob Olbrich, Bernd Gärtner, Bernhard Schölkopf, Martin Jaggi

We propose a new framework for deriving screening rules for convex optimization problems.

Depth Estimation Through a Generative Model of Light Field Synthesis

no code implementations 6 Sep 2016 Mehdi S. M. Sajjadi, Rolf Köhler, Bernhard Schölkopf, Michael Hirsch

Light field photography captures rich structural information that may facilitate a number of traditional image processing and computer vision tasks.

Depth Estimation

Kernel Mean Shrinkage Estimators

no code implementations 21 May 2014 Krikamol Muandet, Bharath Sriperumbudur, Kenji Fukumizu, Arthur Gretton, Bernhard Schölkopf

A mean function in a reproducing kernel Hilbert space (RKHS), or a kernel mean, is central to kernel methods in that it is used by many classical algorithms such as kernel principal component analysis, and it also forms the core inference step of modern kernel methods that rely on embedding probability distributions in RKHSs.

Distinguishing cause from effect using observational data: methods and benchmarks

no code implementations 11 Dec 2014 Joris M. Mooij, Jonas Peters, Dominik Janzing, Jakob Zscheischler, Bernhard Schölkopf

We evaluate the performance of several bivariate causal discovery methods on these real-world benchmark data and in addition on artificially simulated data.

Causal Discovery Causal Inference +1

Causal Inference by Identification of Vector Autoregressive Processes with Hidden Components

no code implementations 14 Nov 2014 Philipp Geiger, Kun Zhang, Mingming Gong, Dominik Janzing, Bernhard Schölkopf

A widely applied approach to causal inference from a non-experimental time series $X$, often referred to as "(linear) Granger causal analysis", is to regress present on past and interpret the regression matrix $\hat{B}$ causally.

Causal Inference Time Series +2
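
The "regress present on past" step described in this snippet is the standard VAR least-squares fit. The sketch below simulates a stable two-dimensional VAR(1) process with no hidden components (the easy case, where $\hat{B}$ does recover $B$; the paper is concerned with what happens when hidden components are present).

```python
import numpy as np

rng = np.random.default_rng(4)

# True transition matrix of a stable VAR(1):  X_t = B X_{t-1} + noise
B_true = np.array([[0.7, 0.2],
                   [0.0, 0.5]])

T = 10000
X = np.zeros((T, 2))
for t in range(1, T):
    X[t] = B_true @ X[t - 1] + 0.1 * rng.normal(size=2)

# Granger-style regression step: least-squares fit of present on past
past, present = X[:-1], X[1:]
B_hat = np.linalg.lstsq(past, present, rcond=None)[0].T
print(np.round(B_hat, 2))  # close to B_true in this hidden-component-free case
```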

Causal and anti-causal learning in pattern recognition for neuroimaging

no code implementations 15 Dec 2015 Sebastian Weichwald, Bernhard Schölkopf, Tonio Ball, Moritz Grosse-Wentrup

Pattern recognition in neuroimaging distinguishes between two types of models: encoding- and decoding models.

Causal Inference

Decoding index finger position from EEG using random forests

no code implementations 14 Dec 2015 Sebastian Weichwald, Timm Meyer, Bernhard Schölkopf, Tonio Ball, Moritz Grosse-Wentrup

While invasively recorded brain activity is known to provide detailed information on motor commands, it is an open question at what level of detail information about positions of body parts can be decoded from non-invasively acquired signals.

EEG Open-Ended Question Answering +1

Causal interpretation rules for encoding and decoding models in neuroimaging

no code implementations 15 Nov 2015 Sebastian Weichwald, Timm Meyer, Ozan Özdenizci, Bernhard Schölkopf, Tonio Ball, Moritz Grosse-Wentrup

Causal terminology is often introduced in the interpretation of encoding and decoding models trained on neuroimaging data.

EEG

Distinguishing Cause from Effect Based on Exogeneity

no code implementations 22 Apr 2015 Kun Zhang, Jiji Zhang, Bernhard Schölkopf

Recent developments in structural equation modeling have produced several methods that can usually distinguish cause from effect in the two-variable case.

Causal Inference

Kernel Mean Estimation via Spectral Filtering

no code implementations NeurIPS 2014 Krikamol Muandet, Bharath Sriperumbudur, Bernhard Schölkopf

The problem of estimating the kernel mean in a reproducing kernel Hilbert space (RKHS) is central to kernel methods in that it is used by classical approaches (e.g., when centering a kernel PCA matrix), and it also forms the core inference step of modern kernel methods (e.g., kernel-based non-parametric tests) that rely on embedding probability distributions in RKHSs.

Randomized Nonlinear Component Analysis

no code implementations 1 Feb 2014 David Lopez-Paz, Suvrit Sra, Alex Smola, Zoubin Ghahramani, Bernhard Schölkopf

Although nonlinear variants of PCA and CCA have been proposed, these are computationally prohibitive in the large scale.

Clustering

Causal Discovery with Continuous Additive Noise Models

no code implementations 26 Sep 2013 Jonas Peters, Joris Mooij, Dominik Janzing, Bernhard Schölkopf

We consider the problem of learning causal directed acyclic graphs from an observational joint distribution.

Causal Discovery regression

Justifying Information-Geometric Causal Inference

no code implementations 11 Feb 2014 Dominik Janzing, Bastian Steudel, Naji Shajarisales, Bernhard Schölkopf

Information Geometric Causal Inference (IGCI) is a new approach to distinguish between cause and effect for two variables.

Causal Inference

Consistency of Causal Inference under the Additive Noise Model

no code implementations 19 Dec 2013 Samory Kpotufe, Eleni Sgouritsa, Dominik Janzing, Bernhard Schölkopf

We analyze a family of methods for statistical causal inference from sample under the so-called Additive Noise Model.

Causal Inference

Spatial statistics, image analysis and percolation theory

no code implementations 31 Oct 2013 Mikhail Langovoy, Michael Habeck, Bernhard Schölkopf

We specifically address the problem of detection of multiple objects of unknown shapes in the case of nonparametric noise.

object-detection Object Detection +1

Kernel Mean Estimation and Stein's Effect

no code implementations 4 Jun 2013 Krikamol Muandet, Kenji Fukumizu, Bharath Sriperumbudur, Arthur Gretton, Bernhard Schölkopf

A mean function in reproducing kernel Hilbert space, or a kernel mean, is an important part of many applications ranging from kernel principal component analysis to Hilbert-space embedding of distributions.

The Randomized Dependence Coefficient

no code implementations NeurIPS 2013 David Lopez-Paz, Philipp Hennig, Bernhard Schölkopf

We introduce the Randomized Dependence Coefficient (RDC), a measure of non-linear dependence between random variables of arbitrary dimension based on the Hirschfeld-Gebelein-R\'enyi Maximum Correlation Coefficient.
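
The RDC pipeline described in this snippet (copula transform, random sinusoidal features, then the largest canonical correlation) can be sketched compactly. The feature count `k` and weight scale `s` below are illustrative knobs, not the paper's defaults.

```python
import numpy as np

def rdc(x, y, k=16, s=3.0, seed=0):
    """RDC sketch: copula transform, random sine features, top canonical corr."""
    rng = np.random.default_rng(seed)
    n = len(x)

    def features(v):
        u = np.argsort(np.argsort(v)) / n          # empirical copula transform
        P = np.column_stack([u, np.ones(n)])       # affine inputs [u, 1]
        return np.sin(P @ rng.normal(0, s, size=(2, k)))

    Fx, Fy = features(x), features(y)
    Fx -= Fx.mean(0)
    Fy -= Fy.mean(0)
    # Canonical correlations = singular values of Qx^T Qy (QR whitening)
    qx, _ = np.linalg.qr(Fx)
    qy, _ = np.linalg.qr(Fy)
    return np.linalg.svd(qx.T @ qy, compute_uv=False)[0]

rng = np.random.default_rng(5)
x = rng.normal(size=500)
y_dep = np.abs(x) + 0.05 * rng.normal(size=500)  # non-monotone dependence
y_ind = rng.normal(size=500)                     # independent of x

r_dep, r_ind = rdc(x, y_dep), rdc(x, y_ind)
print(r_dep, r_ind)  # the dependent pair scores higher
```

Note that with a finite random feature set the coefficient is biased upward even under independence, so scores are best compared against such a baseline.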

One-Class Support Measure Machines for Group Anomaly Detection

no code implementations 1 Mar 2013 Krikamol Muandet, Bernhard Schölkopf

We propose one-class support measure machines (OCSMMs) for group anomaly detection which aims at recognizing anomalous aggregate behaviors of data points.

Group Anomaly Detection

From Ordinary Differential Equations to Structural Causal Models: the deterministic case

no code implementations 30 Apr 2013 Joris M. Mooij, Dominik Janzing, Bernhard Schölkopf

We show how, and under which conditions, the equilibrium states of a first-order Ordinary Differential Equation (ODE) system can be described with a deterministic Structural Causal Model (SCM).

Hilbert space embeddings and metrics on probability measures

no code implementations 30 Jul 2009 Bharath K. Sriperumbudur, Arthur Gretton, Kenji Fukumizu, Bernhard Schölkopf, Gert R. G. Lanckriet

First, we consider the question of determining the conditions on the kernel $k$ for which $\gamma_k$ is a metric: such $k$ are denoted {\em characteristic kernels}.

Dimensionality Reduction

Uncovering the Temporal Dynamics of Diffusion Networks

no code implementations 3 May 2011 Manuel Gomez Rodriguez, David Balduzzi, Bernhard Schölkopf

Time plays an essential role in the diffusion of information, influence and disease over networks.

Perceptual Video Super Resolution with Enhanced Temporal Consistency

no code implementations 20 Jul 2018 Eduardo Pérez-Pellitero, Mehdi S. M. Sajjadi, Michael Hirsch, Bernhard Schölkopf

Together with a video discriminator, we also propose additional loss functions to further reinforce temporal consistency in the generated sequences.

Image Super-Resolution Video Super-Resolution

On integral probability metrics, φ-divergences and binary classification

no code implementations 18 Jan 2009 Bharath K. Sriperumbudur, Kenji Fukumizu, Arthur Gretton, Bernhard Schölkopf, Gert R. G. Lanckriet

First, to understand the relation between IPMs and $\phi$-divergences, the necessary and sufficient conditions under which these classes intersect are derived: the total variation distance is shown to be the only non-trivial $\phi$-divergence that is also an IPM.

Information Theory
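
For reference, the two families compared in this snippet have the following standard definitions (general textbook forms, not specific to this paper's derivation):

```latex
% Integral probability metric over a function class F:
\gamma_{\mathcal{F}}(\mathbb{P},\mathbb{Q})
  = \sup_{f \in \mathcal{F}}
    \left| \int f \, d\mathbb{P} - \int f \, d\mathbb{Q} \right|

% phi-divergence, for convex phi with phi(1) = 0:
D_\phi(\mathbb{P} \,\|\, \mathbb{Q})
  = \int \phi\!\left(\frac{d\mathbb{P}}{d\mathbb{Q}}\right) d\mathbb{Q}

% The variational (total variation) distance arises in both families:
% as an IPM with F = { f : \|f\|_\infty \le 1 }, and as the
% phi-divergence with phi(t) = |t - 1|, both giving \int |d\mathbb{P} - d\mathbb{Q}|.
```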

Robustly Disentangled Causal Mechanisms: Validating Deep Representations for Interventional Robustness

no code implementations 31 Oct 2018 Raphael Suter, Đorđe Miladinović, Bernhard Schölkopf, Stefan Bauer

The ability to learn disentangled representations that split underlying sources of variation in high dimensional, unstructured data is important for data efficient and robust use of neural networks.

Disentanglement

Deep Nonlinear Non-Gaussian Filtering for Dynamical Systems

no code implementations 14 Nov 2018 Arash Mehrjou, Bernhard Schölkopf

Filtering is a general name for inferring the states of a dynamical system given observations.

Generalization in anti-causal learning

no code implementations 3 Dec 2018 Niki Kilbertus, Giambattista Parascandolo, Bernhard Schölkopf

Anti-causal models are used to drive this search, but a causal model is required for validation.

BIG-bench Machine Learning

Counterfactuals uncover the modular structure of deep generative models

no code implementations ICLR 2020 Michel Besserve, Arash Mehrjou, Rémy Sun, Bernhard Schölkopf

Deep generative models can emulate the perceptual properties of complex image datasets, providing a latent representation of the data.

counterfactual Style Transfer

Minimax Estimation of Maximum Mean Discrepancy with Radial Kernels

no code implementations NeurIPS 2016 Ilya O. Tolstikhin, Bharath K. Sriperumbudur, Bernhard Schölkopf

Maximum Mean Discrepancy (MMD) is a distance on the space of probability measures which has found numerous applications in machine learning and nonparametric testing.

Causal Inference on Time Series using Restricted Structural Equation Models

no code implementations NeurIPS 2013 Jonas Peters, Dominik Janzing, Bernhard Schölkopf

We study a class of restricted Structural Equation Models for time series that we call Time Series Models with Independent Noise (TiMINo).

Causal Inference Time Series +1

Statistical analysis of coupled time series with Kernel Cross-Spectral Density operators.

no code implementations NeurIPS 2013 Michel Besserve, Nikos K. Logothetis, Bernhard Schölkopf

This framework enables us to develop an independence test between time series as well as a similarity measure to compare different types of coupling.

Time Series Time Series Analysis

The representer theorem for Hilbert spaces: a necessary and sufficient condition

no code implementations NeurIPS 2012 Francesco Dinuzzo, Bernhard Schölkopf

In particular, the main result of this paper implies that, for a sufficiently large family of regularization functionals, radial nondecreasing functions are the only lower semicontinuous regularization terms that guarantee existence of a representer theorem for any choice of the data.

On Causal Discovery with Cyclic Additive Noise Models

no code implementations NeurIPS 2011 Joris M. Mooij, Dominik Janzing, Tom Heskes, Bernhard Schölkopf

We study a particular class of cyclic causal models, where each variable is a (possibly nonlinear) function of its parents and additive noise.

Causal Discovery regression

Recovering Intrinsic Images with a Global Sparsity Prior on Reflectance

no code implementations NeurIPS 2011 Carsten Rother, Martin Kiefel, Lumin Zhang, Bernhard Schölkopf, Peter V. Gehler

We address the challenging task of decoupling material properties from lighting properties given a single image.

Switched Latent Force Models for Movement Segmentation

no code implementations NeurIPS 2010 Mauricio Alvarez, Jan R. Peters, Neil D. Lawrence, Bernhard Schölkopf

Latent force models encode the interaction between multiple related dynamical systems in the form of a kernel or covariance function.

Space-Variant Single-Image Blind Deconvolution for Removing Camera Shake

no code implementations NeurIPS 2010 Stefan Harmeling, Michael Hirsch, Bernhard Schölkopf

Modelling camera shake as a space-invariant convolution simplifies the problem of removing camera shake, but often insufficiently models actual motion blur such as those due to camera rotation and movements outside the sensor plane or when objects in the scene have different distances to the camera.

Probabilistic latent variable models for distinguishing between cause and effect

no code implementations NeurIPS 2010 Oliver Stegle, Dominik Janzing, Kun Zhang, Joris M. Mooij, Bernhard Schölkopf

To this end, we consider the hypothetical effect variable to be a function of the hypothetical cause variable and an independent noise term (not necessarily additive).

Model Selection
