Search Results for author: Moritz Hardt

Found 48 papers, 14 papers with code

Performative Power

no code implementations31 Mar 2022 Moritz Hardt, Meena Jagadeesan, Celestine Mendler-Dünner

We introduce the notion of performative power, which measures the ability of a firm operating an algorithmic system, such as a digital content recommendation platform, to steer a population.

Retiring Adult: New Datasets for Fair Machine Learning

1 code implementation NeurIPS 2021 Frances Ding, Moritz Hardt, John Miller, Ludwig Schmidt

Our primary contribution is a suite of new datasets derived from US Census surveys that extend the existing data ecosystem for research on fair machine learning.

Fairness

Causal Inference Struggles with Agency on Online Platforms

no code implementations19 Jul 2021 Smitha Milli, Luca Belli, Moritz Hardt

Our results suggest that observational studies derived from user self-selection are a poor alternative to randomized experimentation on online platforms.

Causal Inference

Alternative Microfoundations for Strategic Classification

no code implementations24 Jun 2021 Meena Jagadeesan, Celestine Mendler-Dünner, Moritz Hardt

When reasoning about strategic behavior in a machine learning context, it is tempting to combine standard microfoundations of rational agents with the statistical decision theory underlying classification.

Classification

Patterns, predictions, and actions: A story about machine learning

no code implementations10 Feb 2021 Moritz Hardt, Benjamin Recht

This graduate textbook on machine learning tells a story of how patterns in data support predictions and consequential actions.

Causal Inference Decision Making +1

Revisiting Design Choices in Proximal Policy Optimization

1 code implementation23 Sep 2020 Chloe Ching-Yun Hsu, Celestine Mendler-Dünner, Moritz Hardt

We explain why standard design choices are problematic in these cases, and show that alternative choices of surrogate objectives and policy parameterizations can prevent the failure modes.

Reinforcement Learning

From Optimizing Engagement to Measuring Value

no code implementations21 Aug 2020 Smitha Milli, Luca Belli, Moritz Hardt

Most recommendation engines today are based on predicting user engagement, e.g., whether a user will click on an item or not.

Stochastic Optimization for Performative Prediction

1 code implementation NeurIPS 2020 Celestine Mendler-Dünner, Juan C. Perdomo, Tijana Zrnic, Moritz Hardt

In performative prediction, the choice of a model influences the distribution of future data, typically through actions taken based on the model's predictions.

Stochastic Optimization

Balancing Competing Objectives with Noisy Data: Score-Based Classifiers for Welfare-Aware Machine Learning

1 code implementation ICML 2020 Esther Rolf, Max Simchowitz, Sarah Dean, Lydia T. Liu, Daniel Björkegren, Moritz Hardt, Joshua Blumenstock

Our theoretical results characterize the optimal strategies in this class, bound the Pareto errors due to inaccuracies in the scores, and show an equivalence between optimal strategies and a rich class of fairness-constrained profit-maximizing policies.

Fairness

Performative Prediction

1 code implementation ICML 2020 Juan C. Perdomo, Tijana Zrnic, Celestine Mendler-Dünner, Moritz Hardt

When predictions support decisions, they may influence the outcome they aim to predict.
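The framework's central objects are performatively stable points: models that remain optimal on the distribution they themselves induce. A minimal numpy sketch of the idea follows; the linear mean-shift model and all parameter values are hypothetical illustrations, not taken from the paper.

```python
import numpy as np

def repeated_risk_minimization(mu0, eps, rounds=50, seed=0):
    """Toy performative prediction: deploying a model theta shifts the
    outcome distribution's mean to mu0 + eps * theta. Each round refits
    the squared-loss minimizer (the sample mean) on data drawn from the
    distribution induced by the currently deployed model."""
    rng = np.random.default_rng(seed)
    theta = 0.0
    for _ in range(rounds):
        y = rng.normal(mu0 + eps * theta, 1.0, size=10_000)
        theta = y.mean()  # risk minimizer under the induced distribution
    return theta

# For |eps| < 1 the iterates contract toward the performatively stable
# point mu0 / (1 - eps); here 1.0 / (1 - 0.5) = 2.0.
theta_stable = repeated_risk_minimization(mu0=1.0, eps=0.5)
```

Each retraining step is a contraction when the distribution's sensitivity to the model (eps) is small, which is the flavor of condition under which this line of work proves convergence.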

A Meta-Analysis of Overfitting in Machine Learning

no code implementations NeurIPS 2019 Rebecca Roelofs, Vaishaal Shankar, Benjamin Recht, Sara Fridovich-Keil, Moritz Hardt, John Miller, Ludwig Schmidt

By systematically comparing the public ranking with the final ranking, we assess how much participants adapted to the holdout set over the course of a competition.

Strategic Classification is Causal Modeling in Disguise

no code implementations ICML 2020 John Miller, Smitha Milli, Moritz Hardt

Moreover, we show a similar result holds for designing cost functions that satisfy the requirements of previous work.

Causal Inference Classification +2

Test-Time Training with Self-Supervision for Generalization under Distribution Shifts

3 code implementations29 Sep 2019 Yu Sun, Xiaolong Wang, Zhuang Liu, John Miller, Alexei A. Efros, Moritz Hardt

In this paper, we propose Test-Time Training, a general approach for improving the performance of predictive models when training and test data come from different distributions.

CARLA MAP Leaderboard Image Classification +3
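The paper's actual method creates a self-supervised task (rotation prediction) and updates shared deep-network features on each test input. The numpy sketch below captures only the mechanism, with a hypothetical scale-matching auxiliary loss standing in for the self-supervised task: a shared parameter is updated on unlabeled test data before the main head predicts.

```python
import numpy as np

rng = np.random.default_rng(0)

# Training data: latent signal z, observed x = z, label y = 2 z.
z_tr = rng.normal(size=1000)
x_tr, y_tr = z_tr, 2.0 * z_tr

s = x_tr.std()                         # shared feature scale: h = x / s
h_tr = x_tr / s
theta = (h_tr @ y_tr) / (h_tr @ h_tr)  # main prediction head

# Test data: the sensor gain drifts, so x = 3 z (a distribution shift).
z_te = rng.normal(size=1000)
x_te, y_te = 3.0 * z_te, 2.0 * z_te

def aux_grad(s, x):
    # Gradient of the self-supervised loss (x.var() - s**2)**2, which asks
    # the shared scale to match the data it currently sees; no labels used.
    return -4.0 * s * (x.var() - s ** 2)

mse_before = np.mean((theta * (x_te / s) - y_te) ** 2)

# Test-time training: update the shared parameter on the auxiliary
# self-supervised loss over the unlabeled test batch, then predict.
for _ in range(100):
    s -= 0.01 * aux_grad(s, x_te)

mse_after = np.mean((theta * (x_te / s) - y_te) ** 2)
```

After adaptation the shared scale absorbs the gain drift, and the unchanged main head predicts accurately again, which is the effect the paper demonstrates with deep networks on corrupted image benchmarks.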

Test-Time Training for Out-of-Distribution Generalization

no code implementations25 Sep 2019 Yu Sun, Xiaolong Wang, Zhuang Liu, John Miller, Alexei A. Efros, Moritz Hardt

We introduce a general approach, called test-time training, for improving the performance of predictive models when test and training data come from different distributions.

Image Classification Out-of-Distribution Generalization +1

Explaining an increase in predicted risk for clinical alerts

no code implementations10 Jul 2019 Michaela Hardt, Alvin Rajkomar, Gerardo Flores, Andrew Dai, Michael Howell, Greg Corrado, Claire Cui, Moritz Hardt

We consider explanations in a temporal setting where a stateful dynamical model produces a sequence of risk estimates given an input at each time step.

Model Similarity Mitigates Test Set Overuse

no code implementations NeurIPS 2019 Horia Mania, John Miller, Ludwig Schmidt, Moritz Hardt, Benjamin Recht

Excessive reuse of test data has become commonplace in today's machine learning workflows.

The advantages of multiple classes for reducing overfitting from test set reuse

no code implementations24 May 2019 Vitaly Feldman, Roy Frostig, Moritz Hardt

We show a new upper bound of $\tilde O(\max\{\sqrt{k\log(n)/(mn)}, k/n\})$ on the worst-case bias that any attack can achieve in a prediction problem with $m$ classes.

Identity Crisis: Memorization and Generalization under Extreme Overparameterization

no code implementations ICLR 2020 Chiyuan Zhang, Samy Bengio, Moritz Hardt, Michael C. Mozer, Yoram Singer

We study the interplay between memorization and generalization of overparameterized networks in the extreme case of a single training example and an identity-mapping task.

Natural Analysts in Adaptive Data Analysis

no code implementations30 Jan 2019 Tijana Zrnic, Moritz Hardt

The source of these pessimistic bounds is a model that permits arbitrary, possibly adversarial analysts that optimally use information to bias results.

Generalization Bounds

The implicit fairness criterion of unconstrained learning

no code implementations29 Aug 2018 Lydia T. Liu, Max Simchowitz, Moritz Hardt

We show that under reasonable conditions, the deviation from satisfying group calibration is upper bounded by the excess risk of the learned score relative to the Bayes optimal score function.

Fairness

The Social Cost of Strategic Classification

no code implementations25 Aug 2018 Smitha Milli, John Miller, Anca D. Dragan, Moritz Hardt

Consequential decision-making typically incentivizes individuals to behave strategically, tailoring their behavior to the specifics of the decision rule.

Classification Decision Making +2

Model Reconstruction from Model Explanations

no code implementations13 Jul 2018 Smitha Milli, Ludwig Schmidt, Anca D. Dragan, Moritz Hardt

We show through theory and experiment that gradient-based explanations of a model quickly reveal the model itself.

Stable Recurrent Models

no code implementations ICLR 2019 John Miller, Moritz Hardt

Stability is a fundamental property of dynamical systems, yet to date it has had little bearing on the practice of recurrent neural networks.

Delayed Impact of Fair Machine Learning

2 code implementations ICML 2018 Lydia T. Liu, Sarah Dean, Esther Rolf, Max Simchowitz, Moritz Hardt

Fairness in machine learning has predominantly been studied in static classification settings without concern for how decisions change the underlying population over time.

Fairness

Massively Parallel Hyperparameter Tuning

no code implementations ICLR 2018 Lisha Li, Kevin Jamieson, Afshin Rostamizadeh, Katya Gonina, Moritz Hardt, Benjamin Recht, Ameet Talwalkar

Modern machine learning models are characterized by large hyperparameter search spaces and prohibitively expensive training costs.

Climbing a shaky ladder: Better adaptive risk estimation

no code implementations8 Jun 2017 Moritz Hardt

We revisit the \emph{leaderboard problem} introduced by Blum and Hardt (2015) in an effort to reduce overfitting in machine learning benchmarks.

Avoiding Discrimination through Causal Reasoning

no code implementations NeurIPS 2017 Niki Kilbertus, Mateo Rojas-Carulla, Giambattista Parascandolo, Moritz Hardt, Dominik Janzing, Bernhard Schölkopf

Going beyond observational criteria, we frame the problem of discrimination based on protected attributes in the language of causal reasoning.

Fairness

Identity Matters in Deep Learning

no code implementations14 Nov 2016 Moritz Hardt, Tengyu Ma

An emerging design principle in deep learning is that each layer of a deep artificial neural network should be able to easily express the identity transformation.

Understanding deep learning requires rethinking generalization

8 code implementations10 Nov 2016 Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, Oriol Vinyals

Despite their massive size, successful deep artificial neural networks can exhibit a remarkably small difference between training and test performance.

Image Classification

Equality of Opportunity in Supervised Learning

6 code implementations NeurIPS 2016 Moritz Hardt, Eric Price, Nathan Srebro

We propose a criterion for discrimination against a specified sensitive attribute in supervised learning, where the goal is to predict some target based on available features.

General Classification
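The criterion proposed here, equality of opportunity, asks that a predictor have equal true positive rates across groups defined by the sensitive attribute. A minimal sketch of measuring the violation (the function name and toy data are illustrative, not from the paper):

```python
import numpy as np

def equal_opportunity_gap(y_true, y_pred, group):
    """Equality of opportunity requires equal true positive rates across
    groups: P(Yhat = 1 | Y = 1, A = a) equal for all a. Returns the
    largest pairwise gap in TPR between groups."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    tprs = []
    for a in np.unique(group):
        qualified = (group == a) & (y_true == 1)
        tprs.append(y_pred[qualified].mean())
    return max(tprs) - min(tprs)

# Toy check: a predictor that accepts every qualified member of group 0
# but only a third of group 1 violates the criterion badly.
y_true = np.array([1, 1, 1, 1, 1, 1, 0, 0])
group  = np.array([0, 0, 0, 1, 1, 1, 0, 1])
y_pred = np.array([1, 1, 1, 1, 0, 0, 0, 1])
gap = equal_opportunity_gap(y_true, y_pred, group)
```

The paper further shows that the criterion can be achieved by post-processing a learned score with group-dependent thresholds; the gap above is the quantity such post-processing drives to zero.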

Gradient Descent Learns Linear Dynamical Systems

no code implementations16 Sep 2016 Moritz Hardt, Tengyu Ma, Benjamin Recht

We prove that stochastic gradient descent efficiently converges to the global optimizer of the maximum likelihood objective of an unknown linear time-invariant dynamical system from a sequence of noisy observations generated by the system.

Differentially Private Learning of Structured Discrete Distributions

no code implementations NeurIPS 2015 Ilias Diakonikolas, Moritz Hardt, Ludwig Schmidt

We investigate the problem of learning an unknown probability distribution over a discrete population from random samples.

Train faster, generalize better: Stability of stochastic gradient descent

no code implementations3 Sep 2015 Moritz Hardt, Benjamin Recht, Yoram Singer

In the non-convex case, we give a new interpretation of common practices in neural networks, and formally show that popular techniques for training large deep models are indeed stability-promoting.

Strategic Classification

1 code implementation23 Jun 2015 Moritz Hardt, Nimrod Megiddo, Christos Papadimitriou, Mary Wootters

Jury designs a classifier, and Contestant receives an input to the classifier, which he may change at some cost.

Classification General Classification
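A one-dimensional sketch of the Jury/Contestant interaction described above; the threshold rule, linear cost, and unit gain are illustrative assumptions, not the paper's general separable-cost setting.

```python
def best_response(x, threshold, unit_cost):
    """Contestant's move against a threshold classifier: gaming pays off
    only when the cost of reaching the threshold is below the value (1)
    of a positive classification."""
    gap = threshold - x
    if 0 < gap and gap * unit_cost <= 1:
        return threshold  # move just enough to be accepted
    return x              # already accepted, or too costly to game

# With unit_cost 0.5, anyone within 2 units of the threshold games it.
moved = best_response(0.5, threshold=1.0, unit_cost=0.5)
stays = best_response(-2.0, threshold=1.0, unit_cost=0.5)
```

Jury's problem is then to choose the classifier anticipating this best response, which is what makes strategic classification a game rather than ordinary supervised learning.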

The Ladder: A Reliable Leaderboard for Machine Learning Competitions

no code implementations16 Feb 2015 Avrim Blum, Moritz Hardt

In this work, we introduce a notion of "leaderboard accuracy" tailored to the format of a competition.
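The mechanism the paper introduces, the Ladder, releases a submission's holdout score only when it is a meaningful improvement. A simplified sketch (the published algorithm's parameterization differs in details):

```python
import numpy as np

def ladder(submissions, y_holdout, eta=0.01):
    """Simplified Ladder mechanism (Blum & Hardt, 2015): a submission's
    holdout accuracy is released only when it improves on the best score
    so far by more than eta; otherwise the previous answer is repeated,
    limiting how much participants can adapt to the holdout set."""
    best = float("-inf")
    released = []
    for preds in submissions:
        acc = float(np.mean(np.asarray(preds) == np.asarray(y_holdout)))
        if acc - best > eta:
            best = round(acc / eta) * eta  # round to resolution eta
        released.append(best)
    return released

# The second submission ties the first, so no new information leaks.
y = [1, 0, 1, 1, 0]
subs = [[1, 0, 1, 0, 1], [1, 0, 0, 1, 1], [1, 0, 1, 1, 1]]
scores = ladder(subs, y)
```

Because non-improving submissions get the stale answer back, the number of bits the leaderboard reveals grows only with the number of genuine improvements, not with the number of queries.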

Preserving Statistical Validity in Adaptive Data Analysis

no code implementations10 Nov 2014 Cynthia Dwork, Vitaly Feldman, Moritz Hardt, Toniann Pitassi, Omer Reingold, Aaron Roth

We show that, surprisingly, there is a way to estimate an exponential in $n$ number of expectations accurately even if the functions are chosen adaptively.

Two-sample testing

Preventing False Discovery in Interactive Data Analysis is Hard

no code implementations6 Aug 2014 Moritz Hardt, Jonathan Ullman

In particular, our result suggests that the perceived difficulty of preventing false discovery in today's collaborative research environment may be inherent.

Fast matrix completion without the condition number

no code implementations15 Jul 2014 Moritz Hardt, Mary Wootters

We give the first algorithm for Matrix Completion whose running time and sample complexity is polynomial in the rank of the unknown target matrix, linear in the dimension of the matrix, and logarithmic in the condition number of the matrix.

Matrix Completion

Tight bounds for learning a mixture of two gaussians

no code implementations19 Apr 2014 Moritz Hardt, Eric Price

Denoting by $\sigma^2$ the variance of the unknown mixture, we prove that $\Theta(\sigma^{12})$ samples are necessary and sufficient to estimate each parameter up to constant additive error when $d = 1$. Our upper bound extends to arbitrary dimension $d > 1$, up to a (provably necessary) logarithmic loss in $d$, using a novel yet simple dimensionality reduction technique.

Dimensionality Reduction

Computational Limits for Matrix Completion

no code implementations10 Feb 2014 Moritz Hardt, Raghu Meka, Prasad Raghavendra, Benjamin Weitz

Matrix Completion is the problem of recovering an unknown real-valued low-rank matrix from a subsample of its entries.

Matrix Completion

Understanding Alternating Minimization for Matrix Completion

no code implementations3 Dec 2013 Moritz Hardt

In addition, we contribute a new technique for controlling the coherence of intermediate solutions arising in iterative algorithms based on a smoothed analysis of the QR factorization.

Matrix Completion

The Noisy Power Method: A Meta Algorithm with Applications

no code implementations NeurIPS 2014 Moritz Hardt, Eric Price

The noisy power method can be seen as a meta-algorithm that has recently found a number of important applications in a broad range of machine learning problems including alternating minimization for matrix completion, streaming principal component analysis (PCA), and privacy-preserving spectral analysis.

Matrix Completion
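The meta-algorithm itself is short: subspace iteration with orthonormalization, where an arbitrary perturbation is injected at every step. A numpy sketch, assuming Gaussian noise for illustration:

```python
import numpy as np

def noisy_power_method(A, k, iters=100, noise=0.0, seed=0):
    """Subspace (power) iteration that tolerates per-step perturbations:
    X <- orth(A X + G), with G a noise term, in the spirit of Hardt &
    Price (2014). Returns an orthonormal basis that approximates the
    top-k eigenspace of A when the noise is small relative to the
    spectral gap."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    X, _ = np.linalg.qr(rng.normal(size=(n, k)))
    for _ in range(iters):
        G = noise * rng.normal(size=(n, k))
        X, _ = np.linalg.qr(A @ X + G)  # re-orthonormalize every step
    return X

# Toy check: recover the top eigenvector of a diagonal matrix despite
# injected noise; |<X, e1>| should be close to 1.
A = np.diag([5.0, 1.0, 0.5, 0.1])
X = noisy_power_method(A, k=1, noise=0.01)
align = abs(X[0, 0])
```

The applications listed in the abstract arise by interpreting G differently: sampling error in streaming PCA, or calibrated privacy noise in differentially private spectral analysis.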

A Simple and Practical Algorithm for Differentially Private Data Release

no code implementations NeurIPS 2012 Moritz Hardt, Katrina Ligett, Frank McSherry

We present a new algorithm for differentially private data release, based on a simple combination of the Exponential Mechanism with the Multiplicative Weights update rule.
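That combination, known as MWEM, can be sketched compactly for linear counting queries over a histogram. This is a loose illustration of the mechanism, not a vetted differentially private implementation; the budget split and update step follow the paper's description only in spirit.

```python
import numpy as np

def mwem(true_hist, queries, epsilon, T=30, seed=0):
    """Sketch of MWEM: keep a synthetic histogram, pick a poorly-answered
    linear query via the exponential mechanism, measure it with Laplace
    noise, and apply a multiplicative weights update toward the noisy
    measurement."""
    rng = np.random.default_rng(seed)
    n = true_hist.sum()
    synth = np.full(len(true_hist), n / len(true_hist))
    eps_t = epsilon / (2 * T)  # privacy budget split across rounds
    for _ in range(T):
        # Exponential mechanism: sample queries with probability
        # proportional to exp(eps_t * error / 2).
        errors = np.array([abs(q @ true_hist - q @ synth) for q in queries])
        probs = np.exp(eps_t * errors / 2)
        i = rng.choice(len(queries), p=probs / probs.sum())
        # Laplace mechanism: noisy answer to the selected query.
        m = queries[i] @ true_hist + rng.laplace(scale=1.0 / eps_t)
        # Multiplicative weights update, then renormalize the total count.
        synth = synth * np.exp(queries[i] * (m - queries[i] @ synth) / (2 * n))
        synth *= n / synth.sum()
    return synth
```

On indicator (counting) queries over a small domain, a few dozen rounds typically pull the worst query error well below that of the uniform initialization.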

Algorithms and Hardness for Robust Subspace Recovery

no code implementations5 Nov 2012 Moritz Hardt, Ankur Moitra

We give an algorithm that finds $T$ when it contains more than a $\frac{d}{n}$ fraction of the points.
