Search Results for author: Celestine Mendler-Dünner

Found 15 papers, 4 papers with code

Predicting from Predictions

no code implementations · 15 Aug 2022 · Celestine Mendler-Dünner, Frances Ding, Yixin Wang

Predictions about people, such as their expected educational achievement or their credit risk, can be performative and shape the outcome that they aim to predict.

Performative Power

no code implementations · 31 Mar 2022 · Moritz Hardt, Meena Jagadeesan, Celestine Mendler-Dünner

We introduce the notion of performative power, which measures the ability of a firm operating an algorithmic system, such as a digital content recommendation platform, to steer a population.

Regret Minimization with Performative Feedback

no code implementations · 1 Feb 2022 · Meena Jagadeesan, Tijana Zrnic, Celestine Mendler-Dünner

Our main contribution is an algorithm that achieves regret bounds scaling only with the complexity of the distribution shifts and not that of the reward function.

Alternative Microfoundations for Strategic Classification

no code implementations · 24 Jun 2021 · Meena Jagadeesan, Celestine Mendler-Dünner, Moritz Hardt

When reasoning about strategic behavior in a machine learning context, it is tempting to combine the standard microfoundations of rational agents with the statistical decision theory underlying classification.


Test-time Collective Prediction

no code implementations · NeurIPS 2021 · Celestine Mendler-Dünner, Wenshuo Guo, Stephen Bates, Michael I. Jordan

An increasingly common setting in machine learning involves multiple parties, each with their own data, who want to jointly make predictions on future test points.

Revisiting Design Choices in Proximal Policy Optimization

1 code implementation · 23 Sep 2020 · Chloe Ching-Yun Hsu, Celestine Mendler-Dünner, Moritz Hardt

We explain why standard design choices are problematic in these cases, and show that alternative choices of surrogate objectives and policy parameterizations can prevent the failure modes.

Randomized Block-Diagonal Preconditioning for Parallel Learning

no code implementations · ICML 2020 · Celestine Mendler-Dünner, Aurelien Lucchi

We study preconditioned gradient-based optimization methods where the preconditioning matrix has block-diagonal form.
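A minimal numerical sketch of the idea (not the paper's algorithm): precondition each gradient step by solving only the small diagonal blocks of the curvature matrix instead of the full system. All names, the block size, and the toy quadratic below are illustrative assumptions.

```python
import numpy as np

def block_diag_precond_step(w, grad, hess, block_size, lr=1.0):
    """One gradient step preconditioned by the block-diagonal part of
    `hess`: each block is solved independently, which is what makes the
    scheme attractive for parallel learning."""
    w_new = w.copy()
    for start in range(0, len(w), block_size):
        end = min(start + block_size, len(w))
        H_block = hess[start:end, start:end]          # diagonal block only
        w_new[start:end] -= lr * np.linalg.solve(H_block, grad[start:end])
    return w_new

# Toy quadratic f(w) = 0.5 w^T A w - b^T w, with weak coupling between
# the two 2x2 blocks, so block preconditioning converges in a few steps.
A = np.array([[4.0, 1.0, 0.2, 0.0],
              [1.0, 3.0, 0.0, 0.2],
              [0.2, 0.0, 2.0, 0.5],
              [0.0, 0.2, 0.5, 5.0]])
b = np.array([1.0, 2.0, 3.0, 4.0])
w = np.zeros(4)
for _ in range(50):
    w = block_diag_precond_step(w, A @ w - b, A, block_size=2)
```

Because each block solve touches only its own coordinates, the blocks can be updated by different workers in parallel.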

Differentially Private Stochastic Coordinate Descent

no code implementations · 12 Jun 2020 · Georgios Damaskinos, Celestine Mendler-Dünner, Rachid Guerraoui, Nikolaos Papandreou, Thomas Parnell

In this paper we tackle the challenge of making the stochastic coordinate descent algorithm differentially private.
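A hypothetical sketch of the general recipe, not the paper's method: run stochastic coordinate descent and perturb each per-coordinate gradient with Gaussian noise before applying it. The function name, step size, and noise scale are assumptions; a real differential-privacy guarantee would additionally require gradient clipping and a noise scale calibrated to the privacy budget.

```python
import numpy as np

def noisy_scd(X, y, epochs=60, lr=0.1, noise_scale=0.01, seed=0):
    """Stochastic coordinate descent on least squares where each
    coordinate gradient is perturbed with Gaussian noise (illustrative
    only; no formal privacy accounting is modeled here)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        for j in rng.permutation(d):                  # random coordinate order
            grad_j = X[:, j] @ (X @ w - y) / n        # gradient wrt coordinate j
            w[j] -= lr * (grad_j + rng.normal(0.0, noise_scale))
    return w

# Noisy recovery of a planted linear model
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
w_true = np.array([1.0, -2.0, 0.5])
w_hat = noisy_scd(X, X @ w_true)
```

With a modest noise scale the iterates still settle near the least-squares solution, which is the utility/privacy trade-off such methods analyze.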

Stochastic Optimization for Performative Prediction

1 code implementation · NeurIPS 2020 · Celestine Mendler-Dünner, Juan C. Perdomo, Tijana Zrnic, Moritz Hardt

In performative prediction, the choice of a model influences the distribution of future data, typically through actions taken based on the model's predictions.
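A toy sketch of the performative setting, under assumptions of my own choosing (a scalar squared loss and a linear distribution shift; this is not the paper's algorithm): each round, the deployed model parameter shifts the mean of the outcome distribution, and repeated stochastic gradient steps are taken against freshly sampled, model-induced data.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_outcomes(theta, mu=1.0, eps=0.5, n=2000):
    """The data reacts to the deployed model: outcomes are drawn with
    mean mu + eps * theta (eps measures the strength of performativity)."""
    return rng.normal(mu + eps * theta, 1.0, size=n)

# Repeated stochastic gradient descent on the squared loss, resampling
# from the model-induced distribution after every deployment.
theta, lr = 0.0, 0.5
for _ in range(200):
    y = sample_outcomes(theta)
    theta -= lr * (theta - y.mean())   # gradient of 0.5 * (theta - y)^2

# A performatively stable point satisfies theta = mu + eps * theta,
# i.e. theta = mu / (1 - eps) = 2.0 for these values.
```

The fixed point the iterates approach is "stable" in the sense that the model is optimal for the distribution its own deployment induces.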


Performative Prediction

1 code implementation · ICML 2020 · Juan C. Perdomo, Tijana Zrnic, Celestine Mendler-Dünner, Moritz Hardt

When predictions support decisions, they may influence the outcome they aim to predict.

SySCD: A System-Aware Parallel Coordinate Descent Algorithm

no code implementations · NeurIPS 2019 · Nikolas Ioannou, Celestine Mendler-Dünner, Thomas Parnell

In this paper we propose a novel parallel stochastic coordinate descent (SCD) algorithm with convergence guarantees that exhibits strong scalability.

Breadth-first, Depth-next Training of Random Forests

no code implementations · 15 Oct 2019 · Andreea Anghel, Nikolas Ioannou, Thomas Parnell, Nikolaos Papandreou, Celestine Mendler-Dünner, Haris Pozidis

In this paper we analyze, evaluate, and improve the performance of training Random Forest (RF) models on modern CPU architectures.

Addressing Algorithmic Bottlenecks in Elastic Machine Learning with Chicle

no code implementations · 11 Sep 2019 · Michael Kaufmann, Kornilios Kourtis, Celestine Mendler-Dünner, Adrian Schüpbach, Thomas Parnell

To address this, we propose Chicle, a new elastic distributed training framework which exploits the nature of machine learning algorithms to implement elasticity and load balancing without micro-tasks.


On Linear Learning with Manycore Processors

1 code implementation · 2 May 2019 · Eliza Wszola, Celestine Mendler-Dünner, Martin Jaggi, Markus Püschel

A new generation of manycore processors is on the rise that offers dozens of cores or more on a chip and, in a sense, fuses the host processor and the accelerator.

Sampling Acquisition Functions for Batch Bayesian Optimization

no code implementations · 22 Mar 2019 · Alessandro De Palma, Celestine Mendler-Dünner, Thomas Parnell, Andreea Anghel, Haralampos Pozidis

We present Acquisition Thompson Sampling (ATS), a novel technique for batch Bayesian Optimization (BO) based on the idea of sampling multiple acquisition functions from a stochastic process.
