Search Results for author: Ira Globus-Harris

Found 7 papers, 2 papers with code

Diversified Ensembling: An Experiment in Crowdsourced Machine Learning

no code implementations • 16 Feb 2024 • Ira Globus-Harris, Declan Harrison, Michael Kearns, Pietro Perona, Aaron Roth

In this setting, unlike in classical crowdsourced ML, participants deliberately specialize their efforts by working on subproblems, such as demographic subgroups in the service of fairness.

Fairness • Holdout Set • +1

Multicalibration as Boosting for Regression

1 code implementation • 31 Jan 2023 • Ira Globus-Harris, Declan Harrison, Michael Kearns, Aaron Roth, Jessica Sorrell

Using this characterization, we give an exceedingly simple algorithm that can be analyzed both as a boosting algorithm for regression and as a multicalibration algorithm for a class H, using only a standard squared-error regression oracle for H. We give a weak learning assumption on H that ensures convergence to Bayes optimality without the need for any realizability assumptions, yielding an agnostic boosting algorithm for regression.
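
To make the boosting view concrete, here is a minimal sketch of an iterative loop that repeatedly calls a squared-error regression oracle on the current residuals and adds the result to the predictor. The linear-regression oracle, the clipping to [0, 1], the stopping tolerance, and the synthetic data are all illustrative assumptions, not the algorithm or analysis from the paper.

```python
import numpy as np

def squared_error_oracle(X, r):
    """Illustrative stand-in for a squared-error regression oracle over H:
    here H is the class of linear functions, fit by ordinary least squares."""
    Xb = np.hstack([X, np.ones((len(X), 1))])       # append a bias column
    w, *_ = np.linalg.lstsq(Xb, r, rcond=None)
    return lambda X_new: np.hstack([X_new, np.ones((len(X_new), 1))]) @ w

def boost_to_multicalibration(X, y, rounds=50, tol=1e-4):
    """Boosting-style loop: repeatedly fit the residuals with the oracle and
    add the result to the current predictor, stopping when the oracle can no
    longer reduce squared error by more than `tol`."""
    f = np.full(len(y), y.mean())                   # start from the constant predictor
    updates = []
    for _ in range(rounds):
        residual = y - f
        h = squared_error_oracle(X, residual)
        correction = h(X)
        gain = np.mean(residual ** 2) - np.mean((residual - correction) ** 2)
        if gain <= tol:                             # weak-learning condition fails: stop
            break
        f = np.clip(f + correction, 0.0, 1.0)       # keep predictions in [0, 1]
        updates.append(h)
    return f, updates

# Tiny usage example on synthetic data with labels in [0, 1].
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = np.clip(0.5 + 0.3 * X[:, 0] + 0.1 * rng.normal(size=200), 0, 1)
preds, hs = boost_to_multicalibration(X, y)
print(len(hs), float(np.mean((y - preds) ** 2)))
```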

regression

Multicalibrated Regression for Downstream Fairness

no code implementations • 15 Sep 2022 • Ira Globus-Harris, Varun Gupta, Christopher Jung, Michael Kearns, Jamie Morgenstern, Aaron Roth

We show how to take a regression function $\hat{f}$ that is appropriately "multicalibrated" and efficiently post-process it into an approximately error-minimizing classifier satisfying a large variety of fairness constraints.
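
As a rough illustration of the post-processing idea, the sketch below thresholds a regression score separately within each group so that positive prediction rates match a target, one simple fairness constraint. The demographic-parity-style constraint, the quantile thresholding, and all names here are assumptions made for illustration; the paper handles a much broader family of constraints on a multicalibrated $\hat{f}$.

```python
import numpy as np

def equalize_positive_rates(scores, groups, target_rate):
    """Illustrative post-processing: pick a per-group threshold on a
    regression score so every group has (approximately) the same positive
    prediction rate -- one simple instance of turning a calibrated regressor
    into a constrained classifier. The demographic-parity constraint and
    quantile thresholding are assumptions, not the paper's construction."""
    labels = np.zeros(len(scores), dtype=int)
    thresholds = {}
    for g in np.unique(groups):
        mask = groups == g
        # Threshold at the (1 - target_rate) quantile of the group's scores,
        # so roughly target_rate of the group is labeled positive.
        t = np.quantile(scores[mask], 1.0 - target_rate)
        thresholds[g] = t
        labels[mask] = (scores[mask] >= t).astype(int)
    return labels, thresholds

# Usage: scores from some regressor, plus binary group membership.
rng = np.random.default_rng(1)
scores = rng.uniform(size=500)
groups = rng.integers(0, 2, size=500)
labels, thresholds = equalize_positive_rates(scores, groups, target_rate=0.3)
for g, t in thresholds.items():
    print(g, round(t, 3), labels[groups == g].mean())
```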

Fairness • regression

An Algorithmic Framework for Bias Bounties

no code implementations • 25 Jan 2022 • Ira Globus-Harris, Michael Kearns, Aaron Roth

We propose and analyze an algorithmic framework for "bias bounties": events in which external participants are invited to propose improvements to a trained model, akin to bug bounty events in software and security.
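
A hedged sketch of one possible update step in such a framework: a submission is a pair (group, model), accepted only if the proposed model beats the currently deployed model on holdout points from that group, in which case the deployed model uses the proposal on that group and falls back to the old model elsewhere. The holdout-based acceptance test, the minimum-gain margin, and the toy models are illustrative assumptions, not the paper's exact procedure or guarantees.

```python
import numpy as np

def accept_if_improves(current_model, proposal_group, proposal_model,
                       X_holdout, y_holdout, min_gain=0.01):
    """Sketch of a bias-bounty update step: accept a (group, model) submission
    only if the proposed model lowers the holdout error on that group, then
    deploy "use the proposal on the group, the old model elsewhere"."""
    in_group = proposal_group(X_holdout)
    if not in_group.any():
        return current_model, False
    err_old = np.mean(current_model(X_holdout[in_group]) != y_holdout[in_group])
    err_new = np.mean(proposal_model(X_holdout[in_group]) != y_holdout[in_group])
    if err_old - err_new < min_gain:
        return current_model, False              # reject: no real improvement on the group

    def updated(X):
        preds = current_model(X)
        mask = proposal_group(X)
        preds[mask] = proposal_model(X[mask])    # override only inside the group
        return preds

    return updated, True

# Usage with toy models: the baseline always predicts 0; the proposal predicts 1
# on a subgroup where that is actually correct, so it is accepted.
X = np.arange(10).reshape(-1, 1)
y = (X[:, 0] >= 7).astype(int)
baseline = lambda X: np.zeros(len(X), dtype=int)
group = lambda X: X[:, 0] >= 7
proposal = lambda X: np.ones(len(X), dtype=int)
model, accepted = accept_if_improves(baseline, group, proposal, X, y)
print(accepted, model(X))
```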

Fairness

Non-parametric Differentially Private Confidence Intervals for the Median

1 code implementation • 18 Jun 2021 • Joerg Drechsler, Ira Globus-Harris, Audra McMillan, Jayshree Sarathy, Adam Smith

Differential privacy is a restriction on data processing algorithms that provides strong confidentiality guarantees for individual records in the data.
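
For intuition about differentially private estimation of a median, here is a sketch of the classic exponential mechanism applied to a grid of candidate values, scoring each candidate by how close it comes to splitting the data in half. The candidate grid, the utility function, and the privacy budget are illustrative choices and are not the confidence-interval constructions studied in the paper.

```python
import numpy as np

def dp_median(data, candidates, epsilon):
    """Differentially private median via the exponential mechanism (a
    standard building block, used here only as illustration): each candidate
    is scored by how close it is in rank to a 50/50 split, and one candidate
    is sampled with probability proportional to exp(eps * score / 2)."""
    data = np.asarray(data, dtype=float)
    n = len(data)
    # Utility: negative distance from a perfect split; sensitivity is 1,
    # since adding or removing one record changes the count below by at most 1.
    below = np.array([(data <= c).sum() for c in candidates], dtype=float)
    utility = -np.abs(below - n / 2.0)
    weights = np.exp(epsilon * utility / 2.0)
    weights /= weights.sum()
    return np.random.default_rng().choice(candidates, p=weights)

# Usage: a private median estimate for a small sample on a coarse grid.
sample = np.random.default_rng(2).normal(loc=10.0, scale=2.0, size=200)
grid = np.linspace(0.0, 20.0, 201)
print(dp_median(sample, grid, epsilon=1.0))
```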

valid

Lexicographically Fair Learning: Algorithms and Generalization

no code implementations • 16 Feb 2021 • Emily Diana, Wesley Gill, Ira Globus-Harris, Michael Kearns, Aaron Roth, Saeed Sharifi-Malvajerdi

We extend the notion of minimax fairness in supervised learning problems to its natural conclusion: lexicographic minimax fairness (or lexifairness for short).
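
To illustrate the ordering behind lexifairness, the sketch below compares a finite set of candidate models by their per-group error vectors sorted from worst group to best and picks the lexicographically smallest one: first minimize the worst group error, then the second worst, and so on. The finite candidate set and exact comparison are simplifying assumptions; the paper works with randomized models and approximate, generalizing versions of this notion.

```python
import numpy as np

def lexifair_selection(group_errors):
    """Lexicographic minimax ("lexifair") selection over a finite candidate
    set: sort each model's per-group errors in decreasing order and take the
    lexicographic argmin, so the worst group error is minimized first, then
    the second worst, and so on."""
    group_errors = np.asarray(group_errors, dtype=float)
    sorted_desc = -np.sort(-group_errors, axis=1)        # worst group first
    best = min(range(len(sorted_desc)), key=lambda i: tuple(sorted_desc[i]))
    return best, sorted_desc[best]

# Usage: three candidate models, errors on three groups.
errors = [
    [0.30, 0.10, 0.10],   # worst group: 0.30
    [0.20, 0.20, 0.05],   # worst group: 0.20, second worst: 0.20
    [0.20, 0.15, 0.15],   # worst group: 0.20, second worst: 0.15  <- lexifair
]
idx, profile = lexifair_selection(errors)
print(idx, profile)
```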

Fairness • Generalization Bounds

Improved Differentially Private Analysis of Variance

no code implementations • 1 Mar 2019 • Marika Swanberg, Ira Globus-Harris, Iris Griffith, Anna Ritz, Adam Groce, Andrew Bray

Hypothesis testing is one of the most common types of data analysis and forms the backbone of scientific research in many disciplines.

Two-sample testing
