1 code implementation • 21 Apr 2024 • Xinwei Shen, Nicolai Meinshausen

Dimension reduction techniques usually lose information in the sense that reconstructed data are not identical to the original data.
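A minimal numpy illustration of this point (not the paper's method): projecting toy data onto its top principal components and reconstructing leaves a strictly positive reconstruction error.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))        # toy data: 100 samples, 5 features
X = X - X.mean(axis=0)               # centre before PCA

# PCA via SVD, keeping k = 2 principal components
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2
X_hat = X @ Vt[:k].T @ Vt[:k]        # project onto top-k components, then reconstruct

# The reconstruction is not identical to the data: some variance is discarded
err = np.linalg.norm(X - X_hat) ** 2 / np.linalg.norm(X) ** 2
print(err)                           # fraction of squared norm lost, strictly > 0
```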

1 code implementation • 3 Jul 2023 • Xinwei Shen, Nicolai Meinshausen

An engression model is generative in the sense that we can sample from the fitted conditional distribution and is also suitable for high-dimensional outcomes.
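Engression is built on a pre-additive noise model of the form Y = g(X + ε), so sampling from the fitted conditional distribution amounts to drawing fresh noise and pushing it through g. A minimal sketch, where the function `g` below is an illustrative stand-in rather than a trained engression model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for a fitted generator g in a pre-additive noise model Y = g(X + eps).
def g(z):
    return np.maximum(z, 0.0) ** 2   # some monotone nonlinearity

def sample_conditional(x, n_samples=1000):
    eps = rng.normal(size=n_samples)  # noise enters *before* the nonlinearity
    return g(x + eps)

# Draw samples from the conditional distribution of Y given X = 1.5,
# then summarise it by Monte Carlo
ys = sample_conditional(1.5)
print(ys.mean(), ys.std())
```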

no code implementations • 11 Feb 2023 • Jeffrey Näf, Corinne Emmenegger, Peter Bühlmann, Nicolai Meinshausen

The Distributional Random Forest (DRF) is a recently introduced Random Forest algorithm to estimate multivariate conditional distributions.

1 code implementation • 9 Dec 2022 • Enikő Székely, Sebastian Sippel, Nicolai Meinshausen, Guillaume Obozinski, Reto Knutti

Fingerprints are key tools in climate change detection and attribution (D&A) that are used to determine whether changes in observations are different from internal climate variability (detection), and whether observed changes can be assigned to specific external drivers (attribution).

2 code implementations • 21 Apr 2022 • Andrew Jesson, Alyson Douglas, Peter Manshausen, Maëlys Solal, Nicolai Meinshausen, Philip Stier, Yarin Gal, Uri Shalit

Estimating the effects of continuous-valued interventions from observational data is a critically important task for climate science, healthcare, and economics.

no code implementations • 19 Oct 2021 • Drago Plečko, Nicolas Bennett, Nicolai Meinshausen

Machine learning algorithms are useful for various prediction tasks, but they can also learn to discriminate based on gender, race, or other sensitive attributes.

no code implementations • 12 Jul 2021 • Michael Moor, Nicolas Bennet, Drago Plecko, Max Horn, Bastian Rieck, Nicolai Meinshausen, Peter Bühlmann, Karsten Borgwardt

Here, we developed and validated a machine learning (ML) system for the prediction of sepsis in the ICU.

no code implementations • 29 May 2020 • Domagoj Ćevid, Loris Michel, Jeffrey Näf, Nicolai Meinshausen, Peter Bühlmann

Random Forest (Breiman, 2001) is a successful and widely used regression and classification algorithm.

no code implementations • 15 Nov 2019 • Drago Plečko, Nicolai Meinshausen

The data adaptation is based on a presumed counterfactual model for the data.

2 code implementations • 14 Aug 2019 • Nicola Gnecco, Nicolai Meinshausen, Jonas Peters, Sebastian Engelke

Causal questions are omnipresent in many scientific problems.


2 code implementations • 18 Jan 2018 • Dominik Rothenhäusler, Nicolai Meinshausen, Peter Bühlmann, Jonas Peters

If anchor regression and least squares provide the same answer (anchor stability), we establish that OLS parameters are invariant under certain distributional changes.
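A minimal numpy sketch of anchor regression on toy data (variable names and the data-generating process here are illustrative, not from the paper). It uses the known reduction of the anchor objective to OLS on transformed data, where gamma = 1 recovers ordinary least squares:

```python
import numpy as np

def anchor_regression(X, Y, A, gamma):
    """Anchor regression sketch: transform the data with
    W = I + (sqrt(gamma) - 1) * P_A, then run OLS on (WX, WY)."""
    P = A @ np.linalg.pinv(A)            # projection onto the column space of A
    W = np.eye(len(Y)) + (np.sqrt(gamma) - 1.0) * P
    return np.linalg.lstsq(W @ X, W @ Y, rcond=None)[0]

rng = np.random.default_rng(0)
A = rng.normal(size=(200, 1))            # anchor (exogenous) variable
X = 2.0 * A + rng.normal(size=(200, 1))  # covariate influenced by the anchor
Y = X @ np.array([1.0]) + A[:, 0] + rng.normal(size=200)

b_ols = anchor_regression(X, Y, A, gamma=1.0)   # gamma = 1: plain OLS
b_rob = anchor_regression(X, Y, A, gamma=10.0)  # larger gamma: more robustness
```

If `b_ols` and `b_rob` coincide for all gamma, that is the anchor-stability situation described above.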


no code implementations • ICLR 2018 • Christina Heinze-Deml, Nicolai Meinshausen

If two or more samples share the same class and identifier, (Y, ID)=(y, i), then we treat those samples as counterfactuals under different style interventions on the orthogonal or style features.
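A minimal sketch of the resulting penalty: within each group of samples sharing (Y, ID), penalize the variance of the model's predictions, so the model is pushed to be invariant to the style interventions. The prediction values and group labels below are illustrative.

```python
import numpy as np

def conditional_variance_penalty(preds, groups):
    """For samples sharing a (class, identifier) pair, penalise the variance
    of the model's outputs across their different style realisations."""
    penalty = 0.0
    for g in np.unique(groups):
        penalty += preds[groups == g].var()
    return penalty

preds = np.array([0.9, 0.7, 0.8, 0.2, 0.1])   # model outputs on five samples
groups = np.array([0, 0, 0, 1, 1])            # (Y, ID) group labels
pen = conditional_variance_penalty(preds, groups)
print(pen)                                     # added to the training loss
```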

1 code implementation • 31 Oct 2017 • Christina Heinze-Deml, Nicolai Meinshausen

Our goal is to minimize a loss that is robust under changes in the distribution of these style features.

no code implementations • 28 Jun 2017 • Christina Heinze-Deml, Marloes H. Maathuis, Nicolai Meinshausen

Causal models can be viewed as a special class of graphical models that not only represent the distribution of the observed system but also the distributions under external interventions.


1 code implementation • 26 Jun 2017 • Christina Heinze-Deml, Jonas Peters, Nicolai Meinshausen

In this work, we present and evaluate an array of methods for nonlinear and nonparametric versions of ICP for learning the causal parents of given target variables.


no code implementations • 1 Mar 2017 • Christina Heinze-Deml, Brian McWilliams, Nicolai Meinshausen

Privacy is crucial in many applications of machine learning.

no code implementations • NeurIPS 2016 • Gabriel Krummenacher, Brian McWilliams, Yannic Kilcher, Joachim M. Buhmann, Nicolai Meinshausen

We show that the regret of Ada-LR is close to that of full-matrix AdaGrad, which can have an exponentially smaller dependence on the dimension than the diagonal variant.

1 code implementation • 17 Oct 2016 • Gian-Andrea Thanei, Nicolai Meinshausen, Rajen D. Shah

When performing regression on a dataset with $p$ variables, it is often of interest to go beyond using main linear effects and include interactions as products between individual variables.
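Building such interaction features is straightforward; a minimal sketch (independent of the paper's screening method) augmenting a design matrix with all pairwise products:

```python
import numpy as np
from itertools import combinations

def add_interactions(X):
    """Augment a design matrix with all pairwise interaction columns x_j * x_k."""
    pairs = [X[:, j] * X[:, k] for j, k in combinations(range(X.shape[1]), 2)]
    return np.column_stack([X] + pairs)

X = np.arange(12.0).reshape(4, 3)    # n = 4 samples, p = 3 variables
Z = add_interactions(X)
print(Z.shape)                       # p main effects + p(p-1)/2 interactions
```

The number of interaction columns grows as p(p-1)/2, which is why naive enumeration becomes infeasible for large p.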

1 code implementation • NeurIPS 2015 • Dominik Rothenhäusler, Christina Heinze, Jonas Peters, Nicolai Meinshausen

We propose a simple method to learn linear causal cyclic models in the presence of latent variables.

no code implementations • 8 Jun 2015 • Christina Heinze, Brian McWilliams, Nicolai Meinshausen

We present DUAL-LOCO, a communication-efficient algorithm for distributed statistical estimation.

no code implementations • 6 Jan 2015 • Jonas Peters, Peter Bühlmann, Nicolai Meinshausen

In contrast, predictions from a non-causal model can potentially be very wrong if we actively intervene on variables.


no code implementations • 13 Jun 2014 • Christina Heinze, Brian McWilliams, Nicolai Meinshausen, Gabriel Krummenacher

We propose LOCO, an algorithm for large-scale ridge regression which distributes the features across workers on a cluster.

no code implementations • 6 Aug 2013 • Rajen D. Shah, Nicolai Meinshausen

Large-scale regression problems, where both the number of variables, $p$, and the number of observations, $n$, may be on the order of millions or more, are becoming increasingly common.

no code implementations • 11 Jul 2013 • Aurélie C. Lozano, Nicolai Meinshausen

We propose a minimum distance estimation method for robust regression in sparse high-dimensional settings.

1 code implementation • 25 Mar 2013 • Rajen Dinesh Shah, Nicolai Meinshausen

We show that informative interactions are retained with high probability, and that the computational complexity of our procedure is of order $p^\kappa$, where $\kappa$ can be as low as 1 for very sparse data; in many more general settings it still beats the exponent $s$ obtained by a brute-force search restricted to order-$s$ interactions.

2 code implementations • 17 Sep 2008 • Nicolai Meinshausen, Peter Buehlmann

Estimation of structure, such as in variable selection, graphical modelling or cluster analysis is notoriously difficult, especially for high-dimensional data.
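The idea of stability selection can be sketched in a few lines: refit a selection procedure on many random half-samples and keep the variables that are selected with high frequency. The base selector below (top marginal correlations) is a simple stand-in for the penalized procedures used in the paper, and the toy data are illustrative.

```python
import numpy as np

def base_selector(X, y, k=3):
    """Illustrative base procedure: keep the k variables whose marginal
    correlation with y is largest (a stand-in for e.g. the lasso)."""
    scores = np.abs(X.T @ y) / len(y)
    return np.argsort(scores)[-k:]

def stability_selection(X, y, n_subsamples=100, seed=0):
    """Rerun the base selector on random half-samples and record
    each variable's selection frequency."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    counts = np.zeros(p)
    for _ in range(n_subsamples):
        idx = rng.choice(n, size=n // 2, replace=False)
        counts[base_selector(X[idx], y[idx])] += 1
    return counts / n_subsamples

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 10))
y = 3.0 * X[:, 0] + rng.normal(size=200)   # only variable 0 carries signal
freq = stability_selection(X, y)
print(freq)                                 # freq[0] close to 1: stably selected
```

Variables are then kept if their selection frequency exceeds a threshold, which is what makes the procedure robust to a single unlucky sample.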

