no code implementations • 12 Jul 2024 • Maximilian Dax, Stephen R. Green, Jonathan Gair, Nihar Gupte, Michael Pürrer, Vivien Raymond, Jonas Wildberger, Jakob H. Macke, Alessandra Buonanno, Bernhard Schölkopf

Mergers of binary neutron stars (BNSs) emit signals in both the gravitational-wave (GW) and electromagnetic (EM) spectra.

1 code implementation • 2 Jul 2024 • Zhijing Jin, Sydney Levine, Max Kleiman-Weiner, Giorgio Piatti, Jiarui Liu, Fernando Gonzalez Adauto, Francesco Ortu, András Strausz, Mrinmaya Sachan, Rada Mihalcea, Yejin Choi, Bernhard Schölkopf

As large language models (LLMs) are deployed in more and more real-world situations, it is crucial to understand their decision-making when faced with moral dilemmas.

no code implementations • 2 Jul 2024 • Jonathan Thomm, Michael Hersche, Giacomo Camposampiero, Aleksandar Terzić, Bernhard Schölkopf, Abbas Rahimi

This results in a Differentiable Tree Experts model with a constant number of parameters for any arbitrary number of steps in the computation, compared to the previous method in the Differentiable Tree Machine with a linear growth.

no code implementations • 29 Jun 2024 • Yujia Zheng, Zeyu Tang, Yiwen Qiu, Bernhard Schölkopf, Kun Zhang

Therefore, rather than merely viewing it as a bias, we explore the causal structure of selection in sequential data to delve deeper into the complete causal process.

no code implementations • 27 Jun 2024 • Shaobo Cui, Zhijing Jin, Bernhard Schölkopf, Boi Faltings

Understanding commonsense causality is a unique mark of intelligence for humans.

no code implementations • 27 Jun 2024 • Amartya Sanyal, Yaxi Hu, Yaodong Yu, Yian Ma, Yixin Wang, Bernhard Schölkopf

"Accuracy-on-the-line" is a widely observed phenomenon in machine learning, where a model's accuracy on in-distribution (ID) and out-of-distribution (OOD) data is positively correlated across different hyperparameters and data configurations.
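
The phenomenon lends itself to a quick numerical illustration. A minimal sketch on synthetic accuracies; the slope, offset, and noise level below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical accuracies of 20 model configurations on in-distribution (ID)
# and out-of-distribution (OOD) test sets. Under "accuracy-on-the-line",
# OOD accuracy is approximately a linear function of ID accuracy.
id_acc = rng.uniform(0.6, 0.95, size=20)
ood_acc = 0.8 * id_acc - 0.1 + rng.normal(0, 0.01, size=20)

# The Pearson correlation across configurations quantifies the phenomenon.
r = np.corrcoef(id_acc, ood_acc)[0, 1]
print(f"ID/OOD accuracy correlation: {r:.3f}")
```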

no code implementations • 26 Jun 2024 • Alizée Pace, Bernhard Schölkopf, Gunnar Rätsch, Giorgia Ramponi

Drawing on insights from both the offline RL and the preference-based RL literature, our algorithm employs a pessimistic approach for out-of-distribution data, and an optimistic approach for acquiring informative preferences about the optimal policy.

no code implementations • 24 Jun 2024 • Sidak Pal Singh, Linara Adilova, Michael Kamp, Asja Fischer, Bernhard Schölkopf, Thomas Hofmann

In this work, we take a step towards understanding it by providing a model of how the loss landscape needs to behave topographically for LMC (or the lack thereof) to manifest.

no code implementations • 20 Jun 2024 • Patrik Reizinger, Siyuan Guo, Ferenc Huszár, Bernhard Schölkopf, Wieland Brendel

We provide a unified framework, termed Identifiable Exchangeable Mechanisms (IEM), for representation and structure learning under the lens of exchangeability.

no code implementations • 17 Jun 2024 • Weronika Ormaniec, Scott Sussex, Lars Lorch, Bernhard Schölkopf, Andreas Krause

Moreover, contrary to the post-hoc standardization of data generated by standard SCMs, we prove that linear iSCMs are less identifiable from prior knowledge on the weights and do not collapse to deterministic relationships in large systems, which may make iSCMs a useful model in causal inference beyond the benchmarking problem studied here.

no code implementations • 6 Jun 2024 • Tim Z. Xiao, Robert Bamler, Bernhard Schölkopf, Weiyang Liu

Motivated by the large progress made by large language models (LLMs), we introduce the framework of verbalized machine learning (VML).

no code implementations • 4 Jun 2024 • Robin SM Chan, Reda Boumasmoud, Anej Svete, Yuxin Ren, Qipeng Guo, Zhijing Jin, Shauli Ravfogel, Mrinmaya Sachan, Bernhard Schölkopf, Mennatallah El-Assady, Ryan Cotterell

In this spirit, we study the properties of \emph{affine} alignment of language encoders and its implications on extrinsic similarity.

1 code implementation • 30 May 2024 • Roberto Ceraolo, Dmitrii Kharlapenko, Amélie Reymond, Rada Mihalcea, Mrinmaya Sachan, Bernhard Schölkopf, Zhijing Jin

Humans have an innate drive to seek out causality.

no code implementations • 29 May 2024 • Siyuan Guo, Chi Zhang, Karthika Mohan, Ferenc Huszár, Bernhard Schölkopf

We study causal effect estimation in a setting where the data are not i.i.d.

no code implementations • 24 May 2024 • Siyuan Guo, Aniket Didolkar, Nan Rosemary Ke, Anirudh Goyal, Ferenc Huszár, Bernhard Schölkopf

By contrast, certain instruction-tuning leads to similar performance changes irrespective of training on different data, suggesting a lack of domain understanding across different skills.

1 code implementation • 23 May 2024 • Zhijing Jin, Nils Heil, Jiarui Liu, Shehzaad Dhuliawala, Yahang Qi, Bernhard Schölkopf, Rada Mihalcea, Mrinmaya Sachan

This work systematically studies IP through a rigorous mathematical formulation, a multi-perspective moral reasoning framework, and a set of case studies.

1 code implementation • 19 May 2024 • Heiner Kremer, Bernhard Schölkopf

Instrumental variable (IV) regression can be approached through its formulation in terms of conditional moment restrictions (CMR).
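
The simplest instance of this formulation is linear IV, where the conditional moment restriction E[Y - beta*X | Z] = 0 reduces to two-stage least squares. A minimal sketch on simulated data; the structural coefficients below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Simulated confounded setting: U confounds treatment X and outcome Y;
# Z is an instrument (affects X, but not Y directly). True causal effect: 2.0.
U = rng.normal(size=n)
Z = rng.normal(size=n)
X = Z + U + 0.5 * rng.normal(size=n)
Y = 2.0 * X + 3.0 * U + 0.5 * rng.normal(size=n)

# Naive OLS is biased by the confounder U.
ols = (X @ Y) / (X @ X)

# Two-stage least squares: project X onto Z, then regress Y on the projection.
# This solves the linear conditional moment restriction E[Y - beta*X | Z] = 0.
x_hat = Z * ((Z @ X) / (Z @ Z))
tsls = (x_hat @ Y) / (x_hat @ x_hat)

print(f"OLS estimate:  {ols:.2f}")   # biased away from 2.0
print(f"2SLS estimate: {tsls:.2f}")  # close to 2.0
```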

1 code implementation • 2 May 2024 • Zhijing Jin, Yuen Chen, Fernando Gonzalez, Jiarui Liu, Jiayi Zhang, Julian Michael, Bernhard Schölkopf, Mona Diab

We find that it is difficult to predict which input examples AMR may help or hurt on, but errors tend to arise with multi-word expressions, named entities, and in the final inference step where the LLM must connect its reasoning over the AMR to its prediction.

1 code implementation • 25 Apr 2024 • Giorgio Piatti, Zhijing Jin, Max Kleiman-Weiner, Bernhard Schölkopf, Mrinmaya Sachan, Rada Mihalcea

This environment enables the study of how ethical considerations, strategic planning, and negotiation skills impact cooperative outcomes.

no code implementations • 23 Apr 2024 • Anson Lei, Frederik Nolte, Bernhard Schölkopf, Ingmar Posner

COMET is trained on multiple environments with varying dynamics via a two-step process: competition and composition.

1 code implementation • 28 Mar 2024 • Felix Leeb, Bernhard Schölkopf

Babel Briefings is a novel dataset featuring 4.7 million news headlines from August 2020 to November 2021, across 30 languages and 54 locations worldwide, with English translations of all articles included.

no code implementations • 21 Mar 2024 • Nasim Rahaman, Martin Weiss, Manuel Wüthrich, Yoshua Bengio, Li Erran Li, Chris Pal, Bernhard Schölkopf

This work addresses the buyer's inspection paradox for information markets.

no code implementations • 19 Mar 2024 • Yaxi Hu, Amartya Sanyal, Bernhard Schölkopf

When analysing Differentially Private (DP) machine learning pipelines, the potential privacy cost of data-dependent pre-processing is frequently overlooked in privacy accounting.

no code implementations • 12 Mar 2024 • Sidak Pal Singh, Bobby He, Thomas Hofmann, Bernhard Schölkopf

We propose a fresh take on understanding the mechanisms of neural networks by analyzing the rich directional structure of optimization trajectories, represented by their pointwise parameters.

no code implementations • 20 Feb 2024 • Hsiao-Ru Pan, Bernhard Schölkopf

Learning from off-policy data is essential for sample-efficient reinforcement learning.

2 code implementations • 18 Feb 2024 • Francesco Ortu, Zhijing Jin, Diego Doimo, Mrinmaya Sachan, Alberto Cazzaniga, Bernhard Schölkopf

Interpretability research aims to bridge the gap between empirical success and our scientific understanding of the inner workings of large language models (LLMs).

no code implementations • 14 Feb 2024 • Goutham Rajendran, Simon Buchholz, Bryon Aragam, Bernhard Schölkopf, Pradeep Ravikumar

In this work, we relate these two approaches and study how to learn human-interpretable concepts from data.

no code implementations • 8 Feb 2024 • Jonathan Thomm, Aleksandar Terzic, Giacomo Camposampiero, Michael Hersche, Bernhard Schölkopf, Abbas Rahimi

We analyze the capabilities of Transformer language models in learning compositional discrete tasks.

no code implementations • 6 Feb 2024 • Tarun Gupta, Wenbo Gong, Chao Ma, Nick Pawlowski, Agrin Hilmkil, Meyer Scetbon, Marc Rigter, Ade Famoti, Ashley Juan Llorens, Jianfeng Gao, Stefan Bauer, Danica Kragic, Bernhard Schölkopf, Cheng Zhang

The study of causality lends itself to the construction of veridical world models, which are crucial for accurately predicting the outcomes of possible interactions.

no code implementations • 2 Feb 2024 • Alice Bizeul, Bernhard Schölkopf, Carl Allen

Addressing this, we present a generative latent variable model for self-supervised learning and show that several families of discriminative SSL, including contrastive methods, induce a comparable distribution over representations, providing a unifying theoretical framework for these methods.

1 code implementation • 31 Jan 2024 • Andreas Opedal, Alessandro Stolfo, Haruki Shirakami, Ying Jiao, Ryan Cotterell, Bernhard Schölkopf, Abulhair Saparov, Mrinmaya Sachan

We construct tests for each one in order to understand whether current LLMs display the same cognitive biases as children in these steps.

no code implementations • 12 Jan 2024 • Jan Schneider, Pierre Schumacher, Simon Guist, Le Chen, Daniel Häufle, Bernhard Schölkopf, Dieter Büchler

Policy gradient methods hold great potential for solving complex continuous control tasks.

no code implementations • 11 Jan 2024 • Partha Ghosh, Soubhik Sanyal, Cordelia Schmid, Bernhard Schölkopf

To capture long spatio-temporal dependencies, our approach incorporates a hybrid explicit-implicit tri-plane representation, inspired by 3D-aware generative frameworks developed for three-dimensional object representation, and employs a single latent code to model an entire video clip.

no code implementations • 20 Dec 2023 • Shubhangi Ghosh, Luigi Gresele, Julius von Kügelgen, Michel Besserve, Bernhard Schölkopf

As typical in ICA, previous work focused on the case with an equal number of latent components and observed mixtures.

no code implementations • 13 Dec 2023 • Timothy D. Gebhard, Jonas Wildberger, Maximilian Dax, Daniel Angerhausen, Sascha P. Quanz, Bernhard Schölkopf

Atmospheric retrievals (AR) characterize exoplanets by estimating atmospheric parameters from observed light spectra, typically by framing the task as a Bayesian inference problem.

1 code implementation • NeurIPS 2023 • Zhijing Jin, Yuen Chen, Felix Leeb, Luigi Gresele, Ojasv Kamal, Zhiheng Lyu, Kevin Blin, Fernando Gonzalez Adauto, Max Kleiman-Weiner, Mrinmaya Sachan, Bernhard Schölkopf

Much of the existing work in natural language processing (NLP) focuses on evaluating commonsense causal reasoning in LLMs, thus failing to assess whether a model can perform causal inference in accordance with a set of well-defined formal rules.

no code implementations • CVPR 2024 • Gege Gao, Weiyang Liu, Anpei Chen, Andreas Geiger, Bernhard Schölkopf

As pretrained text-to-image diffusion models become increasingly powerful, recent efforts have been made to distill knowledge from these text-to-image pretrained models for optimizing a text-guided 3D model.

1 code implementation • 30 Nov 2023 • Armin Kekić, Bernhard Schölkopf, Michel Besserve

Why does a phenomenon occur?

1 code implementation • 15 Nov 2023 • David F. Jenny, Yann Billeter, Mrinmaya Sachan, Bernhard Schölkopf, Zhijing Jin

To operationalize this framework, we propose a prompt-based method for the extraction of confounding and mediating attributes which contribute to the LLM decision process.

no code implementations • 15 Nov 2023 • Cian Eastwood, Julius von Kügelgen, Linus Ericsson, Diane Bouchacourt, Pascal Vincent, Bernhard Schölkopf, Mark Ibrahim

Self-supervised representation learning often uses data augmentations to induce some invariance to "style" attributes of the data.

1 code implementation • 10 Nov 2023 • Weiyang Liu, Zeju Qiu, Yao Feng, Yuliang Xiu, Yuxuan Xue, Longhui Yu, Haiwen Feng, Zhen Liu, Juyeon Heo, Songyou Peng, Yandong Wen, Michael J. Black, Adrian Weller, Bernhard Schölkopf

We apply this parameterization to OFT, creating a novel parameter-efficient finetuning method, called Orthogonal Butterfly (BOFT).

1 code implementation • 5 Nov 2023 • Ishan Kumar, Zhijing Jin, Ehsan Mokhtarian, Siyuan Guo, Yuen Chen, Mrinmaya Sachan, Bernhard Schölkopf

Thus, we propose CausalCite, a new way to measure the significance of a paper by assessing the causal impact of the paper on its follow-up papers.

1 code implementation • 26 Oct 2023 • Lars Lorch, Andreas Krause, Bernhard Schölkopf

We develop a novel approach towards causal inference.

no code implementations • 23 Oct 2023 • Zhen Liu, Yao Feng, Yuliang Xiu, Weiyang Liu, Liam Paull, Michael J. Black, Bernhard Schölkopf

Recent work has focused on the former, and methods for reconstructing open surfaces do not support fast reconstruction with material and lighting or unconditional generative modelling.

1 code implementation • ICCV 2023 • Yandong Wen, Weiyang Liu, Yao Feng, Bhiksha Raj, Rita Singh, Adrian Weller, Michael J. Black, Bernhard Schölkopf

In this paper, we focus on a general yet important learning problem, pairwise similarity learning (PSL).

1 code implementation • 11 Oct 2023 • Klaus-Rudolf Kladny, Julius von Kügelgen, Bernhard Schölkopf, Michael Muehlebach

Counterfactuals answer questions of what would have been observed under altered circumstances and can therefore offer valuable insights.

no code implementations • 27 Sep 2023 • Léon Bottou, Bernhard Schölkopf

Many believe that Large Language Models (LLMs) open the era of Artificial Intelligence (AI).

no code implementations • 13 Sep 2023 • Jan Schneider, Pierre Schumacher, Daniel Häufle, Bernhard Schölkopf, Dieter Büchler

Reinforcement learning~(RL) is a versatile framework for learning to solve complex real-world tasks.

1 code implementation • 6 Sep 2023 • Timothy D. Gebhard, Daniel Angerhausen, Björn S. Konrad, Eleonora Alei, Sascha P. Quanz, Bernhard Schölkopf

When training and evaluating our method on two publicly available datasets of self-consistent PT profiles, we find that our method achieves, on average, better fit quality than existing baseline methods, despite using fewer parameters.

1 code implementation • NeurIPS 2023 • Laurence I. Midgley, Vincent Stimper, Javier Antorán, Emile Mathieu, Bernhard Schölkopf, José Miguel Hernández-Lobato

Coupling normalizing flows allow for fast sampling and density evaluation, making them the tool of choice for probabilistic modeling of physical systems.

no code implementations • 15 Aug 2023 • Nico Gürtler, Felix Widmaier, Cansu Sancaktar, Sebastian Blaes, Pavel Kolev, Stefan Bauer, Manuel Wüthrich, Markus Wulfmeier, Martin Riedmiller, Arthur Allshire, Qiang Wang, Robert McCarthy, Hangyeol Kim, Jongchan Baek, Wookyong Kwon, Shanliang Qian, Yasunori Toshimitsu, Mike Yan Michelis, Amirhossein Kazemipour, Arman Raayatsanati, Hehui Zheng, Barnabas Gavin Cangan, Bernhard Schölkopf, Georg Martius

For this reason, a large part of the reinforcement learning (RL) community uses simulators to develop and benchmark algorithms.

2 code implementations • 28 Jul 2023 • Nico Gürtler, Sebastian Blaes, Pavel Kolev, Felix Widmaier, Manuel Wüthrich, Stefan Bauer, Bernhard Schölkopf, Georg Martius

To coordinate the efforts of the research community toward tackling this problem, we propose a benchmark including: i) a large collection of data for offline learning from a dexterous manipulation platform on two tasks, obtained with capable RL agents trained in simulation; ii) the option to execute learned policies on a real-world robotic system and a simulation for efficient debugging.

no code implementations • 19 Jul 2023 • Cian Eastwood, Shashank Singh, Andrei Liviu Nicolicioiu, Marin Vlastelica, Julius von Kügelgen, Bernhard Schölkopf

To avoid failures on out-of-distribution data, recent works have sought to extract features that have an invariant or stable relationship with the label across domains, discarding "spurious" or unstable features whose relationship with the label changes across domains.

1 code implementation • 14 Jun 2023 • Aaron Spieler, Nasim Rahaman, Georg Martius, Bernhard Schölkopf, Anna Levina

Biological cortical neurons are remarkably sophisticated computational devices, temporally integrating their vast synaptic input over an intricate dendritic tree, subject to complex, nonlinearly interacting internal biological processes.

1 code implementation • NeurIPS 2023 • Zeju Qiu, Weiyang Liu, Haiwen Feng, Yuxuan Xue, Yao Feng, Zhen Liu, Dan Zhang, Adrian Weller, Bernhard Schölkopf

To tackle this challenge, we introduce a principled finetuning method -- Orthogonal Finetuning (OFT), for adapting text-to-image diffusion models to downstream tasks.

1 code implementation • 9 Jun 2023 • Zhijing Jin, Jiarui Liu, Zhiheng Lyu, Spencer Poff, Mrinmaya Sachan, Rada Mihalcea, Mona Diab, Bernhard Schölkopf

In this work, we propose the first benchmark dataset to test the pure causal inference skills of large language models (LLMs).

1 code implementation • 9 Jun 2023 • Klaus-Rudolf Kladny, Julius von Kügelgen, Bernhard Schölkopf, Michael Muehlebach

We study causal effect estimation from a mixture of observational and interventional data in a confounded linear regression model with multivariate treatments.

1 code implementation • 6 Jun 2023 • Alexander Immer, Tycho F. A. van der Ouderaa, Mark van der Wilk, Gunnar Rätsch, Bernhard Schölkopf

Recent works show that Bayesian model selection with Laplace approximations makes it possible to optimize such hyperparameters just like standard neural network parameters, using gradients and the training data.

no code implementations • NeurIPS 2023 • Simon Buchholz, Goutham Rajendran, Elan Rosenfeld, Bryon Aragam, Bernhard Schölkopf, Pradeep Ravikumar

We study the problem of learning causal representations from unknown, latent interventions in a general setting, where the latent distribution is Gaussian but the mixing function is completely general.

1 code implementation • NeurIPS 2023 • Julius von Kügelgen, Michel Besserve, Liang Wendong, Luigi Gresele, Armin Kekić, Elias Bareinboim, David M. Blei, Bernhard Schölkopf

We study the fundamental setting of two causal variables and prove that the observational distribution and one perfect intervention per node suffice for identifiability, subject to a genericity condition.

no code implementations • 1 Jun 2023 • Alizée Pace, Hugo Yèche, Bernhard Schölkopf, Gunnar Rätsch, Guy Tennenholtz

A prominent challenge of offline reinforcement learning (RL) is the issue of hidden confounding: unobserved variables may influence both the actions taken by the agent and the observed outcomes.

1 code implementation • 29 May 2023 • Justus Mattern, FatemehSadat Mireshghallah, Zhijing Jin, Bernhard Schölkopf, Mrinmaya Sachan, Taylor Berg-Kirkpatrick

To investigate whether this fragility provides a layer of safety, we propose and evaluate neighbourhood attacks, which compare model scores for a given sample to scores of synthetically generated neighbour texts and therefore eliminate the need for access to the training data distribution.
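
The core comparison can be sketched with a toy scoring function; here a numeric score stands in for the LLM's log-likelihood of a text, and the neighbour generator, neighbour count, and threshold are illustrative choices, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(0)

def score(x):
    # Stand-in for a model's log-likelihood, peaked at the "memorized"
    # training point x = 1.0. In the real attack this is an LLM score.
    return -abs(x - 1.0)

def neighbourhood_attack(x, make_neighbour, threshold=0.4, n_neighbours=128):
    # Flag x as a likely training member if it scores clearly higher than
    # its synthetically generated neighbours; no access to the training
    # data distribution is required.
    neighbours = [make_neighbour(x) for _ in range(n_neighbours)]
    gap = score(x) - np.mean([score(n) for n in neighbours])
    return bool(gap > threshold)

make_neighbour = lambda x: x + rng.normal(0, 1.0)

member = neighbourhood_attack(1.0, make_neighbour)       # at the score peak
non_member = neighbourhood_attack(-3.0, make_neighbour)  # far from the peak
print(member, non_member)
```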

1 code implementation • NeurIPS 2023 • Maximilian Dax, Jonas Wildberger, Simon Buchholz, Stephen R. Green, Jakob H. Macke, Bernhard Schölkopf

Neural posterior estimation methods based on discrete normalizing flows have become established tools for simulation-based inference (SBI), but scaling them to high-dimensional problems can be challenging.

1 code implementation • NeurIPS 2023 • Liang Wendong, Armin Kekić, Julius von Kügelgen, Simon Buchholz, Michel Besserve, Luigi Gresele, Bernhard Schölkopf

As a corollary, this interventional perspective also leads to new identifiability results for nonlinear ICA -- a special case of CauCA with an empty graph -- requiring strictly fewer datasets than previous results.

no code implementations • 23 May 2023 • Jack Brady, Roland S. Zimmermann, Yash Sharma, Bernhard Schölkopf, Julius von Kügelgen, Wieland Brendel

Under this generative process, we prove that the ground-truth object representations can be identified by an invertible and compositional inference model, even in the presence of dependencies between objects.

1 code implementation • 23 May 2023 • Yuxin Ren, Qipeng Guo, Zhijing Jin, Shauli Ravfogel, Mrinmaya Sachan, Bernhard Schölkopf, Ryan Cotterell

Transformer models bring propelling advances in various NLP tasks, thus inducing lots of interpretability research on the learned representations of the models.

no code implementations • NeurIPS 2023 • Junhyung Park, Simon Buchholz, Bernhard Schölkopf, Krikamol Muandet

Causality is a central concept in a wide range of research areas, yet there is still no universally agreed axiomatisation of causality.

1 code implementation • 18 May 2023 • Heiner Kremer, Yassine Nemmour, Bernhard Schölkopf, Jia-Jie Zhu

We provide a variant of our estimator for conditional moment restrictions and show that it is asymptotically first-order optimal for such problems.

no code implementations • 16 May 2023 • Sidak Pal Singh, Thomas Hofmann, Bernhard Schölkopf

While Convolutional Neural Networks (CNNs) have long been investigated and applied, as well as theorized, we aim to provide a slightly different perspective into their nature -- through the perspective of their Hessian maps.

1 code implementation • 9 May 2023 • Fernando Gonzalez, Zhijing Jin, Bernhard Schölkopf, Tom Hope, Mrinmaya Sachan, Rada Mihalcea

Using state-of-the-art NLP models, we address each of these tasks and use them on the entire ACL Anthology, resulting in a visualization workspace that gives researchers a comprehensive overview of the field of NLP4SG.

no code implementations • NeurIPS 2023 • Marco Fumero, Florian Wenzel, Luca Zancato, Alessandro Achille, Emanuele Rodolà, Stefano Soatto, Bernhard Schölkopf, Francesco Locatello

Recovering the latent factors of variation of high dimensional data has so far focused on simple synthetic settings.

no code implementations • 16 Apr 2023 • Siyuan Guo, Jonas Wildberger, Bernhard Schölkopf

The ability of an agent to do well in new environments is a critical aspect of intelligence.

1 code implementation • 16 Mar 2023 • Andrei Paleyes, Siyuan Guo, Bernhard Schölkopf, Neil D. Lawrence

Component-based development is one of the core principles behind modern software engineering practices.

1 code implementation • 11 Mar 2023 • Weiyang Liu, Longhui Yu, Adrian Weller, Bernhard Schölkopf

We then use hyperspherical uniformity (which characterizes the degree of uniformity on the unit hypersphere) as a unified framework to quantify these two objectives.
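
One standard way to quantify hyperspherical uniformity is a Riesz-type energy over pairwise distances of the unit vectors; a minimal sketch, with invented example point sets:

```python
import numpy as np

def hyperspherical_energy(W, s=1.0):
    # Riesz s-energy of a set of vectors projected onto the unit sphere:
    # the sum of inverse pairwise distances. Lower energy means the
    # directions are spread more uniformly.
    W = W / np.linalg.norm(W, axis=1, keepdims=True)
    d = np.linalg.norm(W[:, None, :] - W[None, :, :], axis=-1)
    iu = np.triu_indices(len(W), k=1)
    return np.sum(d[iu] ** (-s))

# Four maximally spread directions in 2D vs. four nearly collinear ones.
spread = np.array([[1, 0], [0, 1], [-1, 0], [0, -1]], dtype=float)
clustered = np.array([[1, 0.0], [1, 0.05], [1, -0.05], [1, 0.1]], dtype=float)

e_spread = hyperspherical_energy(spread)
e_clustered = hyperspherical_energy(clustered)
print(f"spread: {e_spread:.2f}, clustered: {e_clustered:.2f}")
```

For the four axis-aligned directions the energy is exactly 2*sqrt(2) + 1, the minimum for four points on the circle.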

no code implementations • 3 Mar 2023 • Simon Guist, Jan Schneider, Alexander Dittrich, Vincent Berenz, Bernhard Schölkopf, Dieter Büchler

Reinforcement learning has shown great potential in solving complex tasks when large amounts of data can be generated with little effort.

no code implementations • 21 Feb 2023 • Uddeshya Upadhyay, Jae Myung Kim, Cordelia Schmid, Bernhard Schölkopf, Zeynep Akata

Recent advances in deep learning have shown that uncertainty estimation is becoming increasingly important in applications such as medical imaging, natural language processing, and autonomous systems.

no code implementations • 10 Feb 2023 • Jonas Wildberger, Siyuan Guo, Arnab Bhattacharyya, Bernhard Schölkopf

Modern machine learning approaches excel in static settings where a large amount of i.i.d.

2 code implementations • 7 Feb 2023 • Ahmad-Reza Ehyaei, Amir-Hossein Karimi, Bernhard Schölkopf, Setareh Maghsudi

Algorithmic recourse aims to disclose the inner workings of the black-box decision process in situations where decisions have significant consequences, by providing recommendations to empower beneficiaries to achieve a more favorable outcome.

no code implementations • 31 Jan 2023 • Soledad Villar, David W. Hogg, Weichi Yao, George A. Kevrekidis, Bernhard Schölkopf

We discuss links to causal modeling, and argue that the implementation of passive symmetries is particularly valuable when the goal of the learning problem is to generalize out of sample.

1 code implementation • 27 Jan 2023 • Flavio Schneider, Ojasv Kamal, Zhijing Jin, Bernhard Schölkopf

Recent years have seen the rapid development of large generative models for text; however, much less research has explored the connection between text and another "language" of communication -- music.

1 code implementation • 26 Jan 2023 • Vincent Stimper, David Liu, Andrew Campbell, Vincent Berenz, Lukas Ryll, Bernhard Schölkopf, José Miguel Hernández-Lobato

It allows users to build normalizing flow models from a suite of base distributions, flow layers, and neural networks.
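
The compositional principle, a base distribution plus a stack of invertible layers with exact densities via the change-of-variables formula, can be sketched library-agnostically. The code below is a hand-rolled 1D illustration, not the package's API:

```python
import numpy as np

class AffineFlow:
    # One invertible layer: z -> a*z + b, with log|det Jacobian| = log|a|.
    def __init__(self, a, b):
        self.a, self.b = a, b
    def forward(self, z):
        return self.a * z + self.b, np.log(abs(self.a))
    def inverse(self, x):
        return (x - self.b) / self.a, -np.log(abs(self.a))

def log_prob(x, layers):
    # Change of variables: pull x back through the layers, accumulate
    # log-determinants, then evaluate the standard-normal base density.
    log_det = 0.0
    for layer in reversed(layers):
        x, ld = layer.inverse(x)
        log_det += ld
    base = -0.5 * x**2 - 0.5 * np.log(2 * np.pi)
    return base + log_det

# A flow built from two layers maps N(0,1) to N(3, 4) via x = 2z + 3.
layers = [AffineFlow(2.0, 0.0), AffineFlow(1.0, 3.0)]
print(log_prob(3.0, layers))  # log-density of N(3, 4) at its mean
```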

no code implementations • 20 Jan 2023 • Simon Buchholz, Jonas M. Kübler, Bernhard Schölkopf

Here we introduce further bandit models where we only have limited access to the randomness of the rewards, but we can still query the arms in superposition.

1 code implementation • 12 Jan 2023 • Yuejiang Liu, Alexandre Alahi, Chris Russell, Max Horn, Dominik Zietlow, Bernhard Schölkopf, Francesco Locatello

Recent years have seen a surge of interest in learning high-level causal representations from low-level image pairs under interventions.

1 code implementation • 20 Dec 2022 • Yuen Chen, Vethavikashini Chithrra Raghuram, Justus Mattern, Mrinmaya Sachan, Rada Mihalcea, Bernhard Schölkopf, Zhijing Jin

Generated texts from large language models (LLMs) have been shown to exhibit a variety of harmful, human-like biases against various demographics.

1 code implementation • 14 Dec 2022 • Armin Kekić, Jonas Dehning, Luigi Gresele, Julius von Kügelgen, Viola Priesemann, Bernhard Schölkopf

Early on during a pandemic, vaccine availability is limited, requiring prioritisation of different population groups.

no code implementations • 13 Dec 2022 • Amir-Hossein Karimi, Krikamol Muandet, Simon Kornblith, Bernhard Schölkopf, Been Kim

Our work borrows tools from causal inference to systematically assay this relationship.

no code implementations • 16 Nov 2022 • Jonas Wildberger, Maximilian Dax, Stephen R. Green, Jonathan Gair, Michael Pürrer, Jakob H. Macke, Alessandra Buonanno, Bernhard Schölkopf

Deep learning techniques for gravitational-wave parameter estimation have emerged as a fast alternative to standard samplers – producing results of comparable accuracy.

3 code implementations • 7 Nov 2022 • Amin Abyaneh, Nino Scherrer, Patrick Schwab, Stefan Bauer, Bernhard Schölkopf, Arash Mehrjou

We propose FedCDI, a federated framework for inferring causal structures from distributed data containing interventional samples.

no code implementations • 4 Nov 2022 • Nasim Rahaman, Martin Weiss, Frederik Träuble, Francesco Locatello, Alexandre Lacoste, Yoshua Bengio, Chris Pal, Li Erran Li, Bernhard Schölkopf

Geospatial Information Systems are used by researchers and Humanitarian Assistance and Disaster Response (HADR) practitioners to support a wide variety of important applications.

1 code implementation • 31 Oct 2022 • Zeju Qiu, Weiyang Liu, Tim Z. Xiao, Zhen Liu, Umang Bhatt, Yucen Luo, Adrian Weller, Bernhard Schölkopf

We consider the problem of iterative machine teaching, where a teacher sequentially provides examples based on the status of a learner under a discrete input space (i.e., a pool of finite samples), which greatly limits the teacher's capability.

no code implementations • 29 Oct 2022 • Ziyu Wang, Yucen Luo, Yueru Li, Jun Zhu, Bernhard Schölkopf

For nonparametric conditional moment models, efficient estimation often relies on preimposed conditions on various measures of ill-posedness of the hypothesis space, which are hard to validate when flexible models are used.

1 code implementation • 21 Oct 2022 • Alessandro Stolfo, Zhijing Jin, Kumar Shridhar, Bernhard Schölkopf, Mrinmaya Sachan

By grounding the behavioral analysis in a causal graph describing an intuitive reasoning process, we study the behavior of language models in terms of robustness and sensitivity to direct interventions in the input space.

no code implementations • 14 Oct 2022 • Nasim Rahaman, Martin Weiss, Francesco Locatello, Chris Pal, Yoshua Bengio, Bernhard Schölkopf, Li Erran Li, Nicolas Ballas

Recent work has seen the development of general purpose neural architectures that can be trained to perform tasks across diverse data modalities.

1 code implementation • 13 Oct 2022 • Alexander Immer, Christoph Schultheiss, Julia E. Vogt, Bernhard Schölkopf, Peter Bühlmann, Alexander Marx

We study the class of location-scale or heteroscedastic noise models (LSNMs), in which the effect $Y$ can be written as a function of the cause $X$ and a noise source $N$ independent of $X$, which may be scaled by a positive function $g$ over the cause, i.e., $Y = f(X) + g(X)N$.
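
Simulating from this model class makes the defining property visible: the residual spread varies with the cause. The particular $f$ and $g$ below are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Location-scale noise model Y = f(X) + g(X) * N, with N independent of X.
# Illustrative (hypothetical) choices: f(x) = x^2, g(x) = 0.1 + 0.5*|x|.
X = rng.uniform(-2, 2, size=n)
N = rng.normal(size=n)
Y = X**2 + (0.1 + 0.5 * np.abs(X)) * N

# The conditional noise scale depends on the cause: residual spread is
# larger for |X| near 2 than near 0, the signature that distinguishes
# LSNMs from additive-noise models.
resid = Y - X**2
small = np.std(resid[np.abs(X) < 0.2])
large = np.std(resid[np.abs(X) > 1.8])
print(f"residual std near x=0:   {small:.2f}")
print(f"residual std near |x|=2: {large:.2f}")
```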

1 code implementation • 11 Oct 2022 • Maximilian Dax, Stephen R. Green, Jonathan Gair, Michael Pürrer, Jonas Wildberger, Jakob H. Macke, Alessandra Buonanno, Bernhard Schölkopf

This shows a median sample efficiency of $\approx 10\%$ (two orders-of-magnitude better than standard samplers) as well as a ten-fold reduction in the statistical uncertainty in the log evidence.

1 code implementation • 4 Oct 2022 • Zhijing Jin, Sydney Levine, Fernando Gonzalez, Ojasv Kamal, Maarten Sap, Mrinmaya Sachan, Rada Mihalcea, Josh Tenenbaum, Bernhard Schölkopf

Using a state-of-the-art large language model (LLM) as a basis, we propose a novel moral chain of thought (MORALCOT) prompting strategy that combines the strengths of LLMs with theories of moral reasoning developed in cognitive science to predict human moral judgments.

1 code implementation • 1 Oct 2022 • Cian Eastwood, Andrei Liviu Nicolicioiu, Julius von Kügelgen, Armin Kekić, Frederik Träuble, Andrea Dittadi, Bernhard Schölkopf

In representation learning, a common approach is to seek representations which disentangle the underlying factors of variation.

4 code implementations • 29 Sep 2022 • Maximilian Seitzer, Max Horn, Andrii Zadaianchuk, Dominik Zietlow, Tianjun Xiao, Carl-Johann Simon-Gabriel, Tong He, Zheng Zhang, Bernhard Schölkopf, Thomas Brox, Francesco Locatello

Humans naturally decompose their environment into entities at the appropriate level of abstraction to act in the world.

no code implementations • 12 Aug 2022 • Simon Buchholz, Michel Besserve, Bernhard Schölkopf

Several families of spurious solutions fitting perfectly the data, but that do not correspond to the ground truth factors can be constructed in generic settings.

4 code implementations • 3 Aug 2022 • Laurence Illing Midgley, Vincent Stimper, Gregor N. C. Simm, Bernhard Schölkopf, José Miguel Hernández-Lobato

Normalizing flows are tractable density models that can approximate complicated target distributions, e.g., Boltzmann distributions of physical systems.

1 code implementation • 25 Jul 2022 • Hamza Keurti, Hsiao-Ru Pan, Michel Besserve, Benjamin F. Grewe, Bernhard Schölkopf

How agents can learn internal models that veridically represent interactions with the real world is a largely open question.

1 code implementation • 22 Jul 2022 • Frederik Träuble, Anirudh Goyal, Nasim Rahaman, Michael Mozer, Kenji Kawaguchi, Yoshua Bengio, Bernhard Schölkopf

Deep neural networks perform well on classification tasks where data streams are i.i.d.

2 code implementations • 20 Jul 2022 • Cian Eastwood, Alexander Robey, Shashank Singh, Julius von Kügelgen, Hamed Hassani, George J. Pappas, Bernhard Schölkopf

By minimizing the $\alpha$-quantile of predictor's risk distribution over domains, QRM seeks predictors that perform well with probability $\alpha$.
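
A minimal sketch of the objective: score each predictor by the $\alpha$-quantile of its per-domain risks (the risk numbers below are invented). For $\alpha$ near 1 the criterion approaches worst-case risk over domains; for $\alpha = 0.5$ it is the median risk:

```python
import numpy as np

def qrm_objective(domain_risks, alpha=0.9):
    # Quantile Risk Minimization scores a predictor by the alpha-quantile
    # of its risk distribution over training domains, interpolating
    # between typical-case and worst-case performance.
    return np.quantile(domain_risks, alpha)

# Hypothetical per-domain risks of two predictors on five domains.
risks_a = np.array([0.10, 0.12, 0.11, 0.13, 0.90])  # good on average, one bad domain
risks_b = np.array([0.25, 0.24, 0.26, 0.27, 0.25])  # uniformly mediocre

for alpha in (0.5, 0.9):
    pick = "A" if qrm_objective(risks_a, alpha) < qrm_objective(risks_b, alpha) else "B"
    print(f"alpha={alpha}: prefer predictor {pick}")
```

At the median, predictor A wins; at the 0.9-quantile its one bad domain dominates and B is preferred, illustrating how $\alpha$ trades average against robust performance.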

no code implementations • 20 Jul 2022 • Weiyang Liu, Zhen Liu, Liam Paull, Adrian Weller, Bernhard Schölkopf

This paper considers the problem of unsupervised 3D object reconstruction from in-the-wild single-view images.

1 code implementation • 19 Jul 2022 • Florian Wenzel, Andrea Dittadi, Peter Vincent Gehler, Carl-Johann Simon-Gabriel, Max Horn, Dominik Zietlow, David Kernert, Chris Russell, Thomas Brox, Bernt Schiele, Bernhard Schölkopf, Francesco Locatello

Since out-of-distribution generalization is a generally ill-posed problem, various proxy targets (e.g., calibration, adversarial robustness, algorithmic corruptions, invariance across shifts) were studied across different research programs resulting in different recommendations.

no code implementations • 13 Jul 2022 • Joanna Sliwa, Shubhangi Ghosh, Vincent Stimper, Luigi Gresele, Bernhard Schölkopf

One aim of representation learning is to recover the original latent code that generated the data, a task which requires additional information or inductive biases.

1 code implementation • 11 Jul 2022 • Heiner Kremer, Jia-Jie Zhu, Krikamol Muandet, Bernhard Schölkopf

Important problems in causal inference, economics, and, more generally, robust machine learning can be expressed as conditional moment restrictions, but estimation becomes challenging as it requires solving a continuum of unconditional moment restrictions.

no code implementations • 22 Jun 2022 • Anson Lei, Bernhard Schölkopf, Ingmar Posner

In doing so, VCD significantly extends the capabilities of the current state-of-the-art in latent world models while also comparing favourably in terms of prediction accuracy.

3 code implementations • 17 Jun 2022 • Jonas M. Kübler, Vincent Stimper, Simon Buchholz, Krikamol Muandet, Bernhard Schölkopf

Two-sample tests are important in statistics and machine learning, both as tools for scientific discovery as well as to detect distribution shifts.

no code implementations • 7 Jun 2022 • Aniket Das, Bernhard Schölkopf, Michael Muehlebach

We obtain tight convergence rates for RR and SO and demonstrate that these strategies lead to faster convergence than uniform sampling.
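The sampling strategies compared in the snippet above can be made concrete: random reshuffling (RR) draws a fresh permutation each epoch, shuffle-once (SO) fixes one permutation, and uniform sampling draws i.i.d. with replacement. This is an illustrative sketch of the data-ordering schemes only, not the paper's analysis; all names are assumptions.

```python
import random

def epoch_orders(n, epochs, strategy, seed=0):
    """Index orders visited over several epochs under three sampling schemes.

    'rr'      (random reshuffling): a fresh permutation each epoch.
    'so'      (shuffle once): one permutation, reused every epoch.
    'uniform' : i.i.d. uniform sampling with replacement.
    """
    rng = random.Random(seed)
    fixed = list(range(n))
    rng.shuffle(fixed)  # the single permutation used by 'so'
    orders = []
    for _ in range(epochs):
        if strategy == "rr":
            perm = list(range(n))
            rng.shuffle(perm)
            orders.append(perm)
        elif strategy == "so":
            orders.append(list(fixed))
        elif strategy == "uniform":
            orders.append([rng.randrange(n) for _ in range(n)])
        else:
            raise ValueError(strategy)
    return orders
```

Note that both RR and SO visit every index exactly once per epoch (each epoch is a permutation), whereas uniform sampling can repeat or miss indices within an epoch; this without-replacement property underlies the faster convergence claimed above.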

1 code implementation • 6 Jun 2022 • Patrik Reizinger, Luigi Gresele, Jack Brady, Julius von Kügelgen, Dominik Zietlow, Bernhard Schölkopf, Georg Martius, Wieland Brendel, Michel Besserve

Leveraging self-consistency, we show that the ELBO converges to a regularized log-likelihood.

1 code implementation • 3 Jun 2022 • Alexander Hägele, Jonas Rothfuss, Lars Lorch, Vignesh Ram Somnath, Bernhard Schölkopf, Andreas Krause

Inferring causal structures from experimentation is a central task in many domains.

1 code implementation • 25 May 2022 • Lars Lorch, Scott Sussex, Jonas Rothfuss, Andreas Krause, Bernhard Schölkopf

Rather than searching over structures, we train a variational inference model to directly predict the causal structure from observational or interventional data.

1 code implementation • NAACL 2022 • Jingwei Ni, Zhijing Jin, Markus Freitag, Mrinmaya Sachan, Bernhard Schölkopf

We show that these two factors have a large causal effect on the MT performance, in addition to the test-model direction mismatch highlighted by existing work on the impact of translationese.

1 code implementation • 7 Apr 2022 • Timothy D. Gebhard, Markus J. Bonse, Sascha P. Quanz, Bernhard Schölkopf

Our HSR-based method provides an alternative, flexible and promising approach to the challenge of modeling and subtracting the stellar PSF and systematic noise in exoplanet imaging data.

no code implementations • 1 Apr 2022 • Bernhard Schölkopf, Julius von Kügelgen

We describe basic ideas underlying research to build and understand artificially intelligent systems: from symbolic approaches via statistical learning to interventional models relying on concepts of causality.

1 code implementation • NeurIPS 2023 • Siyuan Guo, Viktor Tóth, Bernhard Schölkopf, Ferenc Huszár

We then present our main identifiability theorem, which shows that given data from an ICM generative process, its unique causal structure can be identified through performing conditional independence tests.

no code implementations • ICLR 2022 • Sidak Pal Singh, Aurelien Lucchi, Thomas Hofmann, Bernhard Schölkopf

"Double descent" delineates the generalization behaviour of models depending on the regime they belong to: under- or over-parameterized.

1 code implementation • CVPR 2022 • Dominik Zietlow, Michael Lohaus, Guha Balakrishnan, Matthäus Kleindessner, Francesco Locatello, Bernhard Schölkopf, Chris Russell

Algorithmic fairness is frequently motivated in terms of a trade-off in which overall performance is decreased so as to improve performance on disadvantaged groups where the algorithm would otherwise be less accurate.

no code implementations • 8 Mar 2022 • Paul Rolland, Volkan Cevher, Matthäus Kleindessner, Chris Russell, Bernhard Schölkopf, Dominik Janzing, Francesco Locatello

This paper demonstrates how to recover causal graphs from the score of the data distribution in non-linear additive (Gaussian) noise models.

1 code implementation • 3 Mar 2022 • Panagiotis Tigas, Yashas Annadani, Andrew Jesson, Bernhard Schölkopf, Yarin Gal, Stefan Bauer

Existing methods in experimental design for causal discovery from limited data either rely on linear assumptions for the SCM or select only the intervention target.

2 code implementations • 28 Feb 2022 • Zhijing Jin, Abhinav Lalwani, Tejas Vaidhya, Xiaoyu Shen, Yiwen Ding, Zhiheng Lyu, Mrinmaya Sachan, Rada Mihalcea, Bernhard Schölkopf

In this paper, we propose the task of logical fallacy detection, and provide a new dataset (Logic) of logical fallacies generally found in text, together with an additional challenge set for detecting logical fallacies in climate change claims (LogicClimate).

no code implementations • 14 Feb 2022 • Shubhangi Ghosh, Luigi Gresele, Julius von Kügelgen, Michel Besserve, Bernhard Schölkopf

Model identifiability is a desirable property in the context of unsupervised representation learning.

1 code implementation • 2 Feb 2022 • Luigi Gresele, Julius von Kügelgen, Jonas M. Kübler, Elke Kirschbaum, Bernhard Schölkopf, Dominik Janzing

We introduce an approach to counterfactual inference based on merging information from multiple datasets.

no code implementations • 31 Jan 2022 • Davide Mambelli, Frederik Träuble, Stefan Bauer, Bernhard Schölkopf, Francesco Locatello

Although reinforcement learning has seen remarkable progress over the last years, solving robust dexterous object-manipulation tasks in multi-object settings remains a challenge.

no code implementations • 15 Jan 2022 • Arash Mehrjou, Ashkan Soleymani, Stefan Bauer, Bernhard Schölkopf

Model-free and model-based reinforcement learning are two ends of a spectrum.

1 code implementation • 21 Dec 2021 • Ricardo Dominguez-Olmedo, Amir-Hossein Karimi, Bernhard Schölkopf

Algorithmic recourse seeks to provide actionable recommendations for individuals to overcome unfavorable classification outcomes from automated decision-making systems.

1 code implementation • 10 Dec 2021 • Michel Besserve, Bernhard Schölkopf

Complex systems often contain feedback loops that can be described as cyclic causal models.

1 code implementation • CVPR 2022 • Hanlin Zhang, Yi-Fan Zhang, Weiyang Liu, Adrian Weller, Bernhard Schölkopf, Eric P. Xing

To tackle this challenge, we first formalize the OOD generalization problem as constrained optimization, called Disentanglement-constrained Domain Generalization (DDG).

1 code implementation • ICLR 2022 • Maximilian Dax, Stephen R. Green, Jonathan Gair, Michael Deistler, Bernhard Schölkopf, Jakob H. Macke

We here describe an alternative method to incorporate equivariances under joint transformations of parameters and data.

1 code implementation • 29 Oct 2021 • Vincent Stimper, Bernhard Schölkopf, José Miguel Hernández-Lobato

Normalizing flows are a popular class of models for approximating probability distributions.

Ranked #49 on Image Generation on CIFAR-10 (bits/dimension metric)

no code implementations • 29 Oct 2021 • Sumedh A Sontakke, Stephen Iota, Zizhao Hu, Arash Mehrjou, Laurent Itti, Bernhard Schölkopf

Extending the successes of supervised learning methods to the reinforcement learning (RL) setting, however, is difficult due to the data-generating process: RL agents actively query their environment for data, and the data are a function of the policy followed by the agent.

no code implementations • 29 Oct 2021 • Michel Besserve, Naji Shajarisales, Dominik Janzing, Bernhard Schölkopf

A new perspective has been provided based on the principle of Independence of Causal Mechanisms (ICM), leading to the Spectral Independence Criterion (SIC), which postulates that the power spectral density (PSD) of the cause time series is uncorrelated with the squared modulus of the frequency response of the filter generating the effect.
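The SIC quantity described above can be sketched numerically: estimate the cause's PSD, evaluate the filter's squared frequency-response magnitude on the same grid, and correlate the two. This is a minimal illustration using a periodogram PSD estimate; the function name and estimator choice are assumptions, not the authors' implementation.

```python
import numpy as np

def spectral_dependence(cause, filter_taps):
    """Correlation between the cause's PSD and the filter's |H(f)|^2.

    SIC (described above) postulates that this correlation is near zero
    when the filter is chosen independently of the cause mechanism.
    """
    n = len(cause)
    psd = np.abs(np.fft.rfft(cause)) ** 2 / n   # periodogram PSD estimate
    freq_resp = np.fft.rfft(filter_taps, n=n)   # H(f) on the same frequency grid
    gain2 = np.abs(freq_resp) ** 2              # squared modulus |H(f)|^2
    return float(np.corrcoef(psd, gain2)[0, 1])
```

A value far from zero in the putative causal direction would then count as evidence against that direction under SIC.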

no code implementations • NeurIPS 2021 • Weiyang Liu, Zhen Liu, Hanchen Wang, Liam Paull, Bernhard Schölkopf, Adrian Weller

In this paper, we consider the problem of iterative machine teaching, where a teacher provides examples sequentially based on the current iterative learner.

no code implementations • 26 Oct 2021 • Yassine Nemmour, Bernhard Schölkopf, Jia-Jie Zhu

We provide a functional view of distributional robustness motivated by robust statistics and functional analysis.

1 code implementation • 13 Oct 2021 • Matthias Tangemann, Steffen Schneider, Julius von Kügelgen, Francesco Locatello, Peter Gehler, Thomas Brox, Matthias Kümmerer, Matthias Bethge, Bernhard Schölkopf

Learning generative object models from unlabelled videos is a long-standing problem that is required for causal scene modeling.

no code implementations • NeurIPS 2021 • Nasim Rahaman, Muhammad Waleed Gondal, Shruti Joshi, Peter Gehler, Yoshua Bengio, Francesco Locatello, Bernhard Schölkopf

Modern neural network architectures can leverage large amounts of data to generalize well within the training distribution.

no code implementations • 12 Oct 2021 • Biwei Huang, Chaochao Lu, Liu Leqi, José Miguel Hernández-Lobato, Clark Glymour, Bernhard Schölkopf, Kun Zhang

Perceived signals in real-world scenarios are usually high-dimensional and noisy, and finding and using their representation that contains essential and sufficient information required by downstream decision-making tasks will help improve computational efficiency and generalization ability in the tasks.

no code implementations • ICLR 2022 • Osama Makansi, Julius von Kügelgen, Francesco Locatello, Peter Gehler, Dominik Janzing, Thomas Brox, Bernhard Schölkopf

Applying this procedure to state-of-the-art trajectory prediction methods on standard benchmark datasets shows that they are, in fact, unable to reason about interactions.

1 code implementation • EMNLP 2021 • Zhijing Jin, Julius von Kügelgen, Jingwei Ni, Tejas Vaidhya, Ayush Kaushal, Mrinmaya Sachan, Bernhard Schölkopf

The principle of independent causal mechanisms (ICM) states that generative processes of real world data consist of independent modules which do not influence or inform each other.

no code implementations • NeurIPS Workshop SVRHM 2021 • Yukun Chen, Andrea Dittadi, Frederik Träuble, Stefan Bauer, Bernhard Schölkopf

Disentanglement is hypothesized to be beneficial towards a number of downstream tasks.

no code implementations • 5 Oct 2021 • Lukas Kondmann, Aysim Toker, Sudipan Saha, Bernhard Schölkopf, Laura Leal-Taixé, Xiao Xiang Zhu

It uses this model to analyze differences in the pixel and its spatial context-based predictions in subsequent time periods for change detection.

no code implementations • ICLR 2022 • Chaochao Lu, Yuhuai Wu, José Miguel Hernández-Lobato, Bernhard Schölkopf

Extensive experiments on both synthetic and real-world datasets show that our approach outperforms a variety of baseline methods.

no code implementations • 29 Sep 2021 • Giulia Lanzillotta, Felix Leeb, Stefan Bauer, Bernhard Schölkopf

Autoencoders have played a crucial role in representation learning since the field's inception, proving to be a flexible learning scheme able to accommodate various notions of optimality of the representation.

1 code implementation • 13 Sep 2021 • Hsiao-Ru Pan, Nico Gürtler, Alexander Neitz, Bernhard Schölkopf

The predominant approach in reinforcement learning is to assign credit to actions based on the expected return.
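The return-based credit assignment mentioned above is conventionally computed as a discounted Monte-Carlo return per time step. The sketch below illustrates that predominant baseline, not the alternative this paper proposes; names are illustrative.

```python
def discounted_returns(rewards, gamma=0.99):
    """Discounted return G_t = r_t + gamma * G_{t+1} for each step of one episode.

    Under return-based credit assignment, the action at step t is credited
    with G_t: every reward that follows it, geometrically discounted.
    """
    returns = [0.0] * len(rewards)
    g = 0.0
    for t in reversed(range(len(rewards))):  # accumulate backwards in time
        g = rewards[t] + gamma * g
        returns[t] = g
    return returns
```

For example, with `rewards = [0, 0, 1]` and `gamma = 0.5`, the single terminal reward is credited back to earlier actions with exponentially decaying weight.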

1 code implementation • 6 Sep 2021 • Nino Scherrer, Olexa Bilaniuk, Yashas Annadani, Anirudh Goyal, Patrick Schwab, Bernhard Schölkopf, Michael C. Mozer, Yoshua Bengio, Stefan Bauer, Nan Rosemary Ke

Discovering causal structures from data is a challenging inference problem of fundamental importance in all areas of science.

1 code implementation • ICLR 2022 • Lukas Schott, Julius von Kügelgen, Frederik Träuble, Peter Gehler, Chris Russell, Matthias Bethge, Bernhard Schölkopf, Francesco Locatello, Wieland Brendel

An important component for generalization in machine learning is to uncover underlying latent factors of variation as well as the mechanism through which each factor acts in the world.

no code implementations • ICLR 2022 • Andrea Dittadi, Frederik Träuble, Manuel Wüthrich, Felix Widmaier, Peter Gehler, Ole Winther, Francesco Locatello, Olivier Bachem, Bernhard Schölkopf, Stefan Bauer

By training 240 representations and over 10,000 reinforcement learning (RL) policies on a simulated robotic setup, we evaluate to what extent different properties of pretrained VAE-based representations affect the OOD generalization of downstream agents.

1 code implementation • ICLR 2022 • Cian Eastwood, Ian Mason, Christopher K. I. Williams, Bernhard Schölkopf

Existing methods for SFDA leverage entropy-minimization techniques which: (i) apply only to classification; (ii) destroy model calibration; and (iii) rely on the source model achieving a good level of feature-space class-separation in the target domain.

no code implementations • NeurIPS 2021 • Frederik Träuble, Julius von Kügelgen, Matthäus Kleindessner, Francesco Locatello, Bernhard Schölkopf, Peter Gehler

; and (ii) if the new predictions differ from the current ones, should we update?

1 code implementation • 1 Jul 2021 • Andrea Dittadi, Samuele Papa, Michele De Vita, Bernhard Schölkopf, Ole Winther, Francesco Locatello

The idea behind object-centric representation learning is that natural scenes can better be modeled as compositions of objects and their relations as opposed to distributed representations.

1 code implementation • 30 Jun 2021 • Felix Leeb, Stefan Bauer, Michel Besserve, Bernhard Schölkopf

Autoencoders exhibit impressive abilities to embed the data manifold into a low-dimensional latent space, making them a staple of representation learning methods.

no code implementations • 24 Jun 2021 • Diego Agudelo-España, Yassine Nemmour, Bernhard Schölkopf, Jia-Jie Zhu

Random features are powerful universal function approximators that inherit the theoretical rigor of kernel methods and can scale up to modern learning tasks.

1 code implementation • 23 Jun 2021 • Maximilian Dax, Stephen R. Green, Jonathan Gair, Jakob H. Macke, Alessandra Buonanno, Bernhard Schölkopf

We demonstrate unprecedented accuracy for rapid gravitational-wave parameter estimation with deep learning.

no code implementations • 22 Jun 2021 • Julius von Kügelgen, Nikita Agarwal, Jakob Zeitler, Afsaneh Mastouri, Bernhard Schölkopf

Algorithmic recourse aims to provide actionable recommendations to individuals to obtain a more favourable outcome from an automated decision-making system.

18 code implementations • CVPR 2022 • Karsten Roth, Latha Pemula, Joaquin Zepeda, Bernhard Schölkopf, Thomas Brox, Peter Gehler

Being able to spot defective parts is a critical component in large-scale industrial manufacturing.

Ranked #3 on Anomaly Detection on AeBAD-V

no code implementations • ICML Workshop URL 2021 • Frederik Träuble, Andrea Dittadi, Manuel Wüthrich, Felix Widmaier, Peter Vincent Gehler, Ole Winther, Francesco Locatello, Olivier Bachem, Bernhard Schölkopf, Stefan Bauer

Learning data representations that are useful for various downstream tasks is a cornerstone of artificial intelligence.

1 code implementation • ICLR 2022 • Yonggang Zhang, Mingming Gong, Tongliang Liu, Gang Niu, Xinmei Tian, Bo Han, Bernhard Schölkopf, Kun Zhang

The adversarial vulnerability of deep neural networks has attracted significant attention in machine learning.

1 code implementation • NeurIPS 2021 • Luigi Gresele, Julius von Kügelgen, Vincent Stimper, Bernhard Schölkopf, Michel Besserve

Specifically, our approach is motivated by thinking of each source as independently influencing the mixing process.

1 code implementation • NeurIPS 2021 • Julius von Kügelgen, Yash Sharma, Luigi Gresele, Wieland Brendel, Bernhard Schölkopf, Michel Besserve, Francesco Locatello

A common practice is to perform data augmentation via hand-crafted transformations intended to leave the semantics of the data invariant.

Ranked #1 on Image Classification on Causal3DIdent

1 code implementation • NeurIPS 2021 • Jonas M. Kübler, Simon Buchholz, Bernhard Schölkopf

Quantum computers offer the possibility to efficiently compute inner products of exponentially large density operators that are classically hard to compute.