Search Results for author: A. Feder Cooper

Found 18 papers, 7 papers with code

Between Randomness and Arbitrariness: Some Lessons for Reliable Machine Learning at Scale

no code implementations 13 Jun 2024 A. Feder Cooper

To develop rigorous knowledge about ML models -- and the systems in which they are embedded -- we need reliable measurements.

Memorization • Uncertainty Quantification

CommonCanvas: Open Diffusion Models Trained on Creative-Commons Images

no code implementations CVPR 2024 Aaron Gokaslan, A. Feder Cooper, Jasmine Collins, Landan Seguin, Austin Jacobson, Mihir Patel, Jonathan Frankle, Cory Stephenson, Volodymyr Kuleshov

We then develop a data- and compute-efficient training recipe that requires as little as 3% of the LAION data (i.e., roughly 70 million examples) needed to train existing SD2 models, but obtains the same quality.

Transfer Learning

Scalable Extraction of Training Data from (Production) Language Models

no code implementations 28 Nov 2023 Milad Nasr, Nicholas Carlini, Jonathan Hayase, Matthew Jagielski, A. Feder Cooper, Daphne Ippolito, Christopher A. Choquette-Choo, Eric Wallace, Florian Tramèr, Katherine Lee

This paper studies extractable memorization: training data that an adversary can efficiently extract by querying a machine learning model without prior knowledge of the training dataset.

Chatbot • Memorization
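The excerpt above defines extractable memorization in terms of verbatim overlap between model outputs and training data. Below is a minimal sketch of that check only; the toy token sequences and the 50-token span length are assumptions for illustration, not the paper's code or thresholds.

```python
# Hedged sketch of the *definition* in the excerpt: a generated span counts as
# extracted training data if it also occurs verbatim in the training corpus.
# The toy token IDs and the 50-token span length are illustrative assumptions.

def ngrams(tokens, n):
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def extracted_spans(generated_tokens, corpus_tokens, span_len=50):
    """Return generated span_len-grams that appear verbatim in the corpus."""
    corpus_index = ngrams(corpus_tokens, span_len)
    return [g for g in ngrams(generated_tokens, span_len) if g in corpus_index]

# Usage with toy integer "tokens":
corpus = list(range(1000))            # stand-in for training data
generation = list(range(100, 180))    # stand-in for model output
print(len(extracted_spans(generation, corpus)))  # > 0 means verbatim overlap
```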

CommonCanvas: An Open Diffusion Model Trained with Creative-Commons Images

1 code implementation 25 Oct 2023 Aaron Gokaslan, A. Feder Cooper, Jasmine Collins, Landan Seguin, Austin Jacobson, Mihir Patel, Jonathan Frankle, Cory Stephenson, Volodymyr Kuleshov

This task presents two challenges: (1) high-resolution CC images lack the captions necessary to train text-to-image generative models; (2) CC images are relatively scarce.

Transfer Learning

Coordinating Distributed Example Orders for Provably Accelerated Training

1 code implementation NeurIPS 2023 A. Feder Cooper, Wentao Guo, Khiem Pham, Tiancheng Yuan, Charlie F. Ruan, Yucheng Lu, Christopher De Sa

Recent research on online Gradient Balancing (GraB) has revealed that there exist permutation-based example orderings for SGD that are guaranteed to outperform random reshuffling (RR).
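The claim above is about how the per-epoch example order for SGD is chosen. As a rough sketch of where that choice enters (this is not the paper's CD-GraB algorithm; the least-squares model and data are invented for illustration), the only moving part is the function that returns each epoch's permutation:

```python
import numpy as np

rng = np.random.default_rng(0)
X, y = rng.normal(size=(1000, 10)), rng.normal(size=1000)
w, lr = np.zeros(10), 1e-2

def rr_order(n, epoch):
    # Random reshuffling (RR): a fresh uniform permutation every epoch.
    return rng.permutation(n)

def fixed_order(n, epoch):
    # Placeholder for a data-dependent ordering (e.g., produced by a
    # gradient-balancing scheme); here it is simply the identity permutation.
    return np.arange(n)

order_fn = rr_order  # swap in fixed_order (or a learned ordering) to compare

for epoch in range(5):
    for i in order_fn(len(X), epoch):
        grad = (X[i] @ w - y[i]) * X[i]  # per-example least-squares gradient
        w -= lr * grad
```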

Non-Determinism and the Lawlessness of Machine Learning Code

no code implementations 23 Jun 2022 A. Feder Cooper, Jonathan Frankle, Christopher De Sa

In this paper, we clarify the overlap and differences between these two concepts, and show that the effects of non-determinism, and consequently its implications for the law, become clearer from the perspective of reasoning about ML outputs as distributions over possible outcomes.

Legal Reasoning
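To make the "distributions over possible outcomes" framing concrete, here is a minimal sketch (the tiny logistic-regression setup is an assumption for illustration): the dataset is held fixed, only the training seed varies, and the reported metric is summarized as a distribution rather than a single number.

```python
import numpy as np

# Fixed dataset: the only source of variation below is the training seed.
data_rng = np.random.default_rng(0)
X = data_rng.normal(size=(200, 5))
y = (X @ data_rng.normal(size=5) + 0.5 * data_rng.normal(size=200)) > 0

def train_and_eval(seed):
    rng = np.random.default_rng(seed)
    w = 0.01 * rng.normal(size=5)            # seed-dependent initialization
    for i in rng.permutation(len(X)):        # seed-dependent example order
        p = 1.0 / (1.0 + np.exp(-X[i] @ w))
        w -= 0.1 * (p - y[i]) * X[i]         # one pass of SGD on logistic loss
    return ((X @ w > 0) == y).mean()

accs = np.array([train_and_eval(s) for s in range(20)])
print(f"accuracy over 20 seeds: mean={accs.mean():.3f}, std={accs.std():.3f}")
```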

Repairing Regressors for Fair Binary Classification at Any Decision Threshold

no code implementations 14 Mar 2022 Kweku Kwegyir-Aggrey, A. Feder Cooper, Jessica Dai, John Dickerson, Keegan Hines, Suresh Venkatasubramanian

We study the problem of post-processing a supervised machine-learned regressor to maximize fair binary classification at all decision thresholds.

Binary Classification • Classification • +1
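One way to see what "fair at all decision thresholds" demands: if each group's scores are mapped through that group's own empirical CDF, then thresholding at any cutoff yields (approximately) equal positive rates across groups. The sketch below is illustrative only and is not necessarily the paper's repair method; the Beta-distributed toy scores are an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
scores_g0 = rng.beta(2, 5, size=5000)   # toy regressor scores, group 0
scores_g1 = rng.beta(5, 2, size=5000)   # toy regressor scores, group 1

def to_quantiles(s):
    # Empirical CDF value of each score within its own group (rank / (n - 1)).
    return s.argsort().argsort() / (len(s) - 1)

q0, q1 = to_quantiles(scores_g0), to_quantiles(scores_g1)

for t in (0.2, 0.5, 0.8):
    raw_gap = abs((scores_g0 > t).mean() - (scores_g1 > t).mean())
    fixed_gap = abs((q0 > t).mean() - (q1 > t).mean())
    print(f"threshold {t}: parity gap raw={raw_gap:.2f}, repaired={fixed_gap:.2f}")
```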

Accountability in an Algorithmic Society: Relationality, Responsibility, and Robustness in Machine Learning

no code implementations 10 Feb 2022 A. Feder Cooper, Emanuel Moss, Benjamin Laufer, Helen Nissenbaum

In 1996, Accountability in a Computerized Society [95] issued a clarion call concerning the erosion of accountability in society due to the ubiquitous delegation of consequential functions to computerized systems.

BIG-bench Machine Learning • Philosophy

Tecnologica cosa: Modeling Storyteller Personalities in Boccaccio's Decameron

no code implementations 22 Sep 2021 A. Feder Cooper, Maria Antoniak, Christopher De Sa, Marilyn Migiel, David Mimno

We explore Boccaccio's Decameron to see how digital humanities tools can be used for tasks that have limited data in a language no longer in contemporary use: medieval Italian.

Hyperparameter Optimization Is Deceiving Us, and How to Stop It

1 code implementation NeurIPS 2021 A. Feder Cooper, Yucheng Lu, Jessica Zosa Forde, Christopher De Sa

Recent empirical work shows that inconsistent results based on choice of hyperparameter optimization (HPO) configuration are a widespread problem in ML research.

Hyperparameter Optimization
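A toy illustration of how the HPO configuration can drive the conclusion (the response surfaces below are made up, and this is not the paper's evaluation protocol): with a narrow learning-rate grid one algorithm looks better, with a wider grid the other does.

```python
import math

def val_score(algo, lr):
    # Hypothetical validation-accuracy surfaces: A peaks at lr=1e-3 (0.88),
    # B peaks at lr=1e-1 (0.92); both degrade linearly in log10(lr) distance.
    peak_lr = {"A": 1e-3, "B": 1e-1}[algo]
    peak_acc = {"A": 0.88, "B": 0.92}[algo]
    return peak_acc - 0.1 * abs(math.log10(lr) - math.log10(peak_lr))

def best(algo, grid):
    return max(val_score(algo, lr) for lr in grid)

narrow_grid = [1e-3, 3e-3]        # covers only A's sweet spot
wide_grid = [1e-3, 1e-2, 1e-1]    # covers both sweet spots

for name, grid in [("narrow", narrow_grid), ("wide", wide_grid)]:
    a, b = best("A", grid), best("B", grid)
    winner = "A" if a > b else "B"
    print(f"{name} grid: best A={a:.3f}, best B={b:.3f} -> {winner} looks better")
```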

Emergent Unfairness in Algorithmic Fairness-Accuracy Trade-Off Research

no code implementations 1 Feb 2021 A. Feder Cooper, Ellen Abrams

Across machine learning (ML) sub-disciplines, researchers make explicit mathematical assumptions in order to facilitate proof-writing.

BIG-bench Machine Learning • Fairness

Where Is the Normative Proof? Assumptions and Contradictions in ML Fairness Research

no code implementations 20 Oct 2020 A. Feder Cooper

This is because, similar to how mathematical assumptions constrain applicability, normative assumptions also limit algorithm applicability to certain problem domains.

Fairness

Accuracy-Efficiency Trade-Offs and Accountability in Distributed ML Systems

1 code implementation 4 Jul 2020 A. Feder Cooper, Karen Levy, Christopher De Sa

Trade-offs between accuracy and efficiency pervade law, public health, and other non-computing domains, which have developed policies to guide how to balance the two in conditions of uncertainty.

Autonomous Vehicles • Distributed Computing

Asymptotically Optimal Exact Minibatch Metropolis-Hastings

1 code implementation NeurIPS 2020 Ruqi Zhang, A. Feder Cooper, Christopher De Sa

Metropolis-Hastings (MH) is a commonly-used MCMC algorithm, but it can be intractable on large datasets due to requiring computations over the whole dataset.

regression
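For reference, here is a minimal full-data random-walk Metropolis-Hastings sketch on a toy Gaussian-mean problem (the data, prior, and proposal are invented for illustration; this is the standard algorithm the excerpt describes, not the paper's minibatch method). The cost issue is visible directly: every accept/reject decision evaluates the log-posterior over all N data points.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=1.5, scale=1.0, size=100_000)   # large dataset

def log_post(theta):
    # N(0, 10^2) prior plus a sum of per-point Gaussian log-likelihood terms.
    return -theta**2 / (2 * 10**2) - 0.5 * np.sum((data - theta) ** 2)

theta, cur_lp, samples = 0.0, log_post(0.0), []
for _ in range(2000):
    prop = theta + 0.05 * rng.normal()                 # random-walk proposal
    prop_lp = log_post(prop)                           # full pass over all N points
    if np.log(rng.uniform()) < prop_lp - cur_lp:       # exact accept/reject test
        theta, cur_lp = prop, prop_lp
    samples.append(theta)

print(np.mean(samples[500:]))   # close to the data mean (~1.5)
```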

AMAGOLD: Amortized Metropolis Adjustment for Efficient Stochastic Gradient MCMC

1 code implementation 29 Feb 2020 Ruqi Zhang, A. Feder Cooper, Christopher De Sa

Using only a subsample of the dataset at each iteration, rather than the full dataset as in HMC, improves performance, but introduces bias that can cause SGHMC to converge to the wrong distribution.
