
no code implementations • 31 Mar 2022 • Moritz Hardt, Meena Jagadeesan, Celestine Mendler-Dünner

We introduce the notion of performative power, which measures the ability of a firm operating an algorithmic system, such as a digital content recommendation platform, to steer a population.

1 code implementation • NeurIPS 2021 • Frances Ding, Moritz Hardt, John Miller, Ludwig Schmidt

Our primary contribution is a suite of new datasets derived from US Census surveys that extend the existing data ecosystem for research on fair machine learning.

no code implementations • 19 Jul 2021 • Smitha Milli, Luca Belli, Moritz Hardt

Our results suggest that observational studies derived from user self-selection are a poor alternative to randomized experimentation on online platforms.

no code implementations • 24 Jun 2021 • Meena Jagadeesan, Celestine Mendler-Dünner, Moritz Hardt

When reasoning about strategic behavior in a machine learning context it is tempting to combine standard microfoundations of rational agents with the statistical decision theory underlying classification.

no code implementations • 10 Feb 2021 • Moritz Hardt, Benjamin Recht

This graduate textbook on machine learning tells a story of how patterns in data support predictions and consequential actions.

1 code implementation • 23 Sep 2020 • Chloe Ching-Yun Hsu, Celestine Mendler-Dünner, Moritz Hardt

We explain why standard design choices are problematic in these cases, and show that alternative choices of surrogate objectives and policy parameterizations can prevent the failure modes.

no code implementations • 21 Aug 2020 • Smitha Milli, Luca Belli, Moritz Hardt

Most recommendation engines today are based on predicting user engagement, e.g., predicting whether a user will click on an item.

1 code implementation • NeurIPS 2020 • Celestine Mendler-Dünner, Juan C. Perdomo, Tijana Zrnic, Moritz Hardt

In performative prediction, the choice of a model influences the distribution of future data, typically through actions taken based on the model's predictions.
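As an illustrative sketch (not the paper's implementation), repeated risk minimization in a toy one-dimensional setting where the population's mean outcome shifts linearly with the deployed parameter; the constants `mu0` and `epsilon` are hypothetical:

```python
def repeated_risk_minimization(mu0=1.0, epsilon=0.3, steps=30, theta0=5.0):
    """Toy performative prediction: deploying parameter theta induces a
    distribution D(theta) whose mean outcome is mu0 + epsilon * theta.
    Under squared loss, retraining sets theta to the mean of the
    distribution induced by the previously deployed theta."""
    theta = theta0
    for _ in range(steps):
        theta = mu0 + epsilon * theta  # exact risk minimizer on D(theta)
    return theta

# For epsilon < 1 the iteration contracts to the performatively stable
# fixed point theta* = mu0 / (1 - epsilon), here 1 / 0.7.
stable = repeated_risk_minimization()
```

The geometric contraction at rate `epsilon` mirrors the convergence guarantees for repeated retraining when the distribution map is sufficiently insensitive to the deployed model.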

1 code implementation • ICML 2020 • Esther Rolf, Max Simchowitz, Sarah Dean, Lydia T. Liu, Daniel Björkegren, Moritz Hardt, Joshua Blumenstock

Our theoretical results characterize the optimal strategies in this class, bound the Pareto errors due to inaccuracies in the scores, and show an equivalence between optimal strategies and a rich class of fairness-constrained profit-maximizing policies.

1 code implementation • ICML 2020 • Juan C. Perdomo, Tijana Zrnic, Celestine Mendler-Dünner, Moritz Hardt

When predictions support decisions they may influence the outcome they aim to predict.

no code implementations • NeurIPS 2019 • Rebecca Roelofs, Vaishaal Shankar, Benjamin Recht, Sara Fridovich-Keil, Moritz Hardt, John Miller, Ludwig Schmidt

By systematically comparing the public ranking with the final ranking, we assess how much participants adapted to the holdout set over the course of a competition.

no code implementations • ICML 2020 • John Miller, Smitha Milli, Moritz Hardt

Moreover, we show a similar result holds for designing cost functions that satisfy the requirements of previous work.

3 code implementations • 29 Sep 2019 • Yu Sun, Xiaolong Wang, Zhuang Liu, John Miller, Alexei A. Efros, Moritz Hardt

In this paper, we propose Test-Time Training, a general approach for improving the performance of predictive models when training and test data come from different distributions.

no code implementations • 25 Sep 2019 • Yu Sun, Xiaolong Wang, Zhuang Liu, John Miller, Alexei A. Efros, Moritz Hardt

We introduce a general approach, called test-time training, for improving the performance of predictive models when test and training data come from different distributions.

1 code implementation • 2 Aug 2019 • Chloe Ching-Yun Hsu, Michaela Hardt, Moritz Hardt

Linear dynamical systems are a fundamental and powerful parametric model class.

no code implementations • 10 Jul 2019 • Michaela Hardt, Alvin Rajkomar, Gerardo Flores, Andrew Dai, Michael Howell, Greg Corrado, Claire Cui, Moritz Hardt

We consider explanations in a temporal setting where a stateful dynamical model produces a sequence of risk estimates given an input at each time step.

no code implementations • NeurIPS 2019 • Horia Mania, John Miller, Ludwig Schmidt, Moritz Hardt, Benjamin Recht

Excessive reuse of test data has become commonplace in today's machine learning workflows.

no code implementations • 24 May 2019 • Vitaly Feldman, Roy Frostig, Moritz Hardt

We show a new upper bound of $\tilde O(\max\{\sqrt{k\log(n)/(mn)}, k/n\})$ on the worst-case bias that any attack can achieve in a prediction problem with $m$ classes.

no code implementations • ICLR 2020 • Chiyuan Zhang, Samy Bengio, Moritz Hardt, Michael C. Mozer, Yoram Singer

We study the interplay between memorization and generalization of overparameterized networks in the extreme case of a single training example and an identity-mapping task.

no code implementations • 30 Jan 2019 • Tijana Zrnic, Moritz Hardt

The source of these pessimistic bounds is a model that permits arbitrary, possibly adversarial analysts that optimally use information to bias results.

4 code implementations • ICLR 2018 • Liam Li, Kevin Jamieson, Afshin Rostamizadeh, Ekaterina Gonina, Moritz Hardt, Benjamin Recht, Ameet Talwalkar

Modern learning models are characterized by large hyperparameter spaces and long training times.

3 code implementations • NeurIPS 2018 • Julius Adebayo, Justin Gilmer, Michael Muelly, Ian Goodfellow, Moritz Hardt, Been Kim

We find that relying solely on visual assessment can be misleading.

no code implementations • 29 Aug 2018 • Lydia T. Liu, Max Simchowitz, Moritz Hardt

We show that under reasonable conditions, the deviation from satisfying group calibration is upper bounded by the excess risk of the learned score relative to the Bayes optimal score function.

no code implementations • 25 Aug 2018 • Smitha Milli, John Miller, Anca D. Dragan, Moritz Hardt

Consequential decision-making typically incentivizes individuals to behave strategically, tailoring their behavior to the specifics of the decision rule.

no code implementations • 13 Jul 2018 • Smitha Milli, Ludwig Schmidt, Anca D. Dragan, Moritz Hardt

We show through theory and experiment that gradient-based explanations of a model quickly reveal the model itself.
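A minimal sketch of the underlying observation for the linear case (not the paper's experiments): the input gradient of a linear model is exactly its weight vector, so a single gradient-based explanation query reveals the model. The finite-difference helper below is an assumed stand-in for an explanation API:

```python
import numpy as np

def input_gradient(f, x, eps=1e-6):
    """Finite-difference gradient of a scalar black-box model f at input x,
    i.e. the saliency vector a gradient-explanation API would return."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        d = np.zeros_like(x)
        d[i] = eps
        g[i] = (f(x + d) - f(x - d)) / (2 * eps)
    return g

w = np.array([2.0, -1.0, 0.5])   # hidden model weights
f = lambda x: float(w @ x)       # black-box linear model
recovered = input_gradient(f, np.zeros(3))
# For a linear model, one gradient query recovers w (up to rounding).
```

For nonlinear models more queries are needed, but the same leakage principle applies.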

no code implementations • ICLR 2019 • John Miller, Moritz Hardt

Stability is a fundamental property of dynamical systems, yet to this date it has had little bearing on the practice of recurrent neural networks.

2 code implementations • ICML 2018 • Lydia T. Liu, Sarah Dean, Esther Rolf, Max Simchowitz, Moritz Hardt

Fairness in machine learning has predominantly been studied in static classification settings without concern for how decisions change the underlying population over time.

no code implementations • ICLR 2018 • Lisha Li, Kevin Jamieson, Afshin Rostamizadeh, Katya Gonina, Moritz Hardt, Benjamin Recht, Ameet Talwalkar

Modern machine learning models are characterized by large hyperparameter search spaces and prohibitively expensive training costs.

no code implementations • 8 Jun 2017 • Moritz Hardt

We revisit the "leaderboard problem" introduced by Blum and Hardt (2015) in an effort to reduce overfitting in machine learning benchmarks.

no code implementations • NeurIPS 2017 • Niki Kilbertus, Mateo Rojas-Carulla, Giambattista Parascandolo, Moritz Hardt, Dominik Janzing, Bernhard Schölkopf

Going beyond observational criteria, we frame the problem of discrimination based on protected attributes in the language of causal reasoning.

no code implementations • 14 Nov 2016 • Moritz Hardt, Tengyu Ma

An emerging design principle in deep learning is that each layer of a deep artificial neural network should be able to easily express the identity transformation.

8 code implementations • 10 Nov 2016 • Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, Oriol Vinyals

Despite their massive size, successful deep artificial neural networks can exhibit a remarkably small difference between training and test performance.

6 code implementations • NeurIPS 2016 • Moritz Hardt, Eric Price, Nathan Srebro

We propose a criterion for discrimination against a specified sensitive attribute in supervised learning, where the goal is to predict some target based on available features.
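The criterion (equalized odds) asks that true- and false-positive rates match across groups. A minimal sketch of how one might measure violations, with hypothetical toy data:

```python
import numpy as np

def equalized_odds_gaps(y_true, y_pred, group):
    """Return the largest cross-group gaps in true-positive rate and
    false-positive rate; both are zero iff equalized odds holds exactly."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    tprs, fprs = [], []
    for g in np.unique(group):
        m = group == g
        tprs.append(y_pred[m & (y_true == 1)].mean())  # P(pred=1 | y=1, g)
        fprs.append(y_pred[m & (y_true == 0)].mean())  # P(pred=1 | y=0, g)
    return float(max(tprs) - min(tprs)), float(max(fprs) - min(fprs))

# Toy example where both groups receive identical rates.
y_true = [1, 1, 0, 0, 1, 1, 0, 0]
y_pred = [1, 0, 1, 0, 1, 0, 1, 0]
group  = [0, 0, 0, 0, 1, 1, 1, 1]
gaps = equalized_odds_gaps(y_true, y_pred, group)  # -> (0.0, 0.0)
```

Conditioning on the true label is what distinguishes this criterion from demographic parity, which compares raw positive rates.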

no code implementations • 16 Sep 2016 • Moritz Hardt, Tengyu Ma, Benjamin Recht

We prove that stochastic gradient descent efficiently converges to the global optimizer of the maximum likelihood objective of an unknown linear time-invariant dynamical system from a sequence of noisy observations generated by the system.

no code implementations • NeurIPS 2015 • Ilias Diakonikolas, Moritz Hardt, Ludwig Schmidt

We investigate the problem of learning an unknown probability distribution over a discrete population from random samples.

no code implementations • 3 Sep 2015 • Moritz Hardt, Benjamin Recht, Yoram Singer

In the non-convex case, we give a new interpretation of common practices in neural networks, and formally show that popular techniques for training large deep models are indeed stability-promoting.

1 code implementation • 23 Jun 2015 • Moritz Hardt, Nimrod Megiddo, Christos Papadimitriou, Mary Wootters

Jury designs a classifier, and Contestant receives an input to the classifier, which he may change at some cost.

1 code implementation • NeurIPS 2015 • Cynthia Dwork, Vitaly Feldman, Moritz Hardt, Toniann Pitassi, Omer Reingold, Aaron Roth

We also formalize and address the general problem of data reuse in adaptive data analysis.

no code implementations • 16 Feb 2015 • Avrim Blum, Moritz Hardt

In this work, we introduce a notion of "leaderboard accuracy" tailored to the format of a competition.

no code implementations • 10 Nov 2014 • Cynthia Dwork, Vitaly Feldman, Moritz Hardt, Toniann Pitassi, Omer Reingold, Aaron Roth

We show that, surprisingly, there is a way to estimate an exponential in $n$ number of expectations accurately even if the functions are chosen adaptively.

no code implementations • 6 Aug 2014 • Moritz Hardt, Jonathan Ullman

In particular, our result suggests that the perceived difficulty of preventing false discovery in today's collaborative research environment may be inherent.

no code implementations • 15 Jul 2014 • Moritz Hardt, Mary Wootters

We give the first algorithm for Matrix Completion whose running time and sample complexity is polynomial in the rank of the unknown target matrix, linear in the dimension of the matrix, and logarithmic in the condition number of the matrix.

no code implementations • 19 Apr 2014 • Moritz Hardt, Eric Price

Denoting by $\sigma^2$ the variance of the unknown mixture, we prove that $\Theta(\sigma^{12})$ samples are necessary and sufficient to estimate each parameter up to constant additive error when $d = 1$. Our upper bound extends to arbitrary dimension $d > 1$, up to a (provably necessary) logarithmic loss in $d$, using a novel yet simple dimensionality-reduction technique.

no code implementations • 10 Feb 2014 • Moritz Hardt, Raghu Meka, Prasad Raghavendra, Benjamin Weitz

Matrix Completion is the problem of recovering an unknown real-valued low-rank matrix from a subsample of its entries.

no code implementations • 3 Dec 2013 • Moritz Hardt

In addition, we contribute a new technique for controlling the coherence of intermediate solutions arising in iterative algorithms based on a smoothed analysis of the QR factorization.

no code implementations • NeurIPS 2014 • Moritz Hardt, Eric Price

The noisy power method can be seen as a meta-algorithm that has recently found a number of important applications in a broad range of machine learning problems including alternating minimization for matrix completion, streaming principal component analysis (PCA), and privacy-preserving spectral analysis.
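A minimal sketch of the meta-algorithm (per-step Gaussian noise standing in for whatever perturbation the application injects, e.g. privacy noise):

```python
import numpy as np

def noisy_power_method(A, steps=100, noise_scale=0.0, seed=0):
    """Power iteration with an additive noise term each step: the template
    analyzed for streaming PCA and privacy-preserving spectral methods."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(A.shape[0])
    for _ in range(steps):
        x = A @ x + noise_scale * rng.standard_normal(A.shape[0])
        x /= np.linalg.norm(x)  # renormalize to keep the iterate bounded
    return x

A = np.diag([3.0, 1.0, 0.5])  # top eigenvector is the first basis vector
v = noisy_power_method(A, noise_scale=1e-3)
```

The analysis shows the iterate still converges to (a neighborhood of) the top eigenspace as long as the noise is small relative to the spectral gap.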

no code implementations • NeurIPS 2012 • Moritz Hardt, Katrina Ligett, Frank McSherry

We present a new algorithm for differentially private data release, based on a simple combination of the Exponential Mechanism with the Multiplicative Weights update rule.
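To illustrate the multiplicative-weights half of the combination, here is a minimal non-private sketch (the Exponential Mechanism's query selection and the privacy noise are deliberately elided; the histogram and queries are hypothetical):

```python
import numpy as np

def mw_update(synthetic, query, true_answer):
    """One multiplicative-weights step of an MWEM-style loop: reweight the
    synthetic histogram toward agreeing with a linear query, renormalize."""
    est = synthetic @ query
    synthetic = synthetic * np.exp(query * (true_answer - est) / 2.0)
    return synthetic / synthetic.sum()

true_hist = np.array([0.7, 0.1, 0.1, 0.1])
queries = [np.array([1.0, 1.0, 0.0, 0.0]),
           np.array([1.0, 0.0, 1.0, 0.0])]

synthetic = np.full(4, 0.25)  # start from the uniform histogram
for _ in range(50):
    for q in queries:
        synthetic = mw_update(synthetic, q, true_hist @ q)

err = max(abs(synthetic @ q - true_hist @ q) for q in queries)
```

After a few dozen rounds the synthetic histogram answers both queries nearly exactly, which is the workhorse behind the algorithm's accuracy guarantee.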

no code implementations • 5 Nov 2012 • Moritz Hardt, Ankur Moitra

We give an algorithm that finds $T$ when it contains more than a $\frac{d}{n}$ fraction of the points.
