Search Results for author: Maya Gupta

Found 31 papers, 8 papers with code

Multidimensional Shape Constraints

no code implementations • ICML 2020 • Maya Gupta, Erez Louidor, Oleksandr Mangylov, Nobu Morioka, Taman Narayan, Sen Zhao

We propose new multi-input shape constraints across four intuitive categories: complements, diminishers, dominance, and unimodality constraints.

Additive models
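
Two of these constraint families are easy to check numerically. The sketch below is purely illustrative — the toy function f and the finite-difference test are assumptions, not the paper's training-time enforcement — and verifies a dominance constraint (x1's effect dominates x2's) plus unimodality in x2 on a grid:

```python
import numpy as np

# Toy score function: f(x1, x2) = 2*x1 + x2 - (x2 - 0.5)**2 (an assumption).
def f(x1, x2):
    return 2.0 * x1 + x2 - (x2 - 0.5) ** 2

grid = np.linspace(0.0, 1.0, 21)
eps = 1e-4

# Dominance: f should be at least as sensitive to x1 as to x2 everywhere.
d1 = (f(grid[:, None] + eps, grid[None, :]) - f(grid[:, None], grid[None, :])) / eps
d2 = (f(grid[:, None], grid[None, :] + eps) - f(grid[:, None], grid[None, :])) / eps
print("dominance holds:", np.all(d1 >= d2 - 1e-6))

# Unimodality in x2: each slice along x2 may rise then fall, with at most one
# sign change (from + to -) in its discrete differences.
def is_unimodal(values):
    diffs = np.sign(np.diff(values))
    diffs = diffs[diffs != 0]
    switch = np.where(np.diff(diffs) != 0)[0]
    return len(switch) <= 1 and (len(switch) == 0 or diffs[0] > 0)

print("unimodal in x2:", all(is_unimodal(f(x1, grid)) for x1 in grid))
```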

Global Optimization Networks

no code implementations • 2 Feb 2022 • Sen Zhao, Erez Louidor, Oleksandr Mangylov, Maya Gupta

We consider the problem of estimating a good maximizer of a black-box function given noisy examples.

GPR
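
As a minimal illustration of the problem setup — not of the paper's network architecture — one can fit a concave surrogate to noisy evaluations and return its vertex as the estimated maximizer; the quadratic surrogate and the toy black-box function below are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Unknown black-box function with maximizer at x = 0.7, observed with noise.
def black_box(x):
    return -(x - 0.7) ** 2 + 0.1 * rng.normal(size=np.shape(x))

x_obs = rng.uniform(0.0, 1.0, size=200)
y_obs = black_box(x_obs)

# Fit a degree-2 polynomial surrogate; if concave, its vertex is the estimate.
a, b, c = np.polyfit(x_obs, y_obs, deg=2)
assert a < 0, "surrogate should be concave for a unique maximizer"
x_star = -b / (2 * a)
print(f"estimated maximizer: {x_star:.3f} (true: 0.700)")
```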

Regularization Strategies for Quantile Regression

no code implementations • 9 Feb 2021 • Taman Narayan, Serena Wang, Kevin Canini, Maya Gupta

We show that minimizing an expected pinball loss over a continuous distribution of quantiles is a good regularizer even when only predicting a specific quantile.

Fairness • regression
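
A minimal sketch of the regularizer described above: compare the pinball loss at one fixed quantile with a Monte Carlo estimate of the expected pinball loss over tau ~ Uniform(0, 1). The toy data and the use of empirical quantiles as the "model" are assumptions for illustration:

```python
import numpy as np

def pinball_loss(y, y_hat, tau):
    """Pinball (quantile) loss: tau*max(y - y_hat, 0) + (1 - tau)*max(y_hat - y, 0)."""
    diff = y - y_hat
    return np.mean(np.maximum(tau * diff, (tau - 1.0) * diff))

rng = np.random.default_rng(0)
y = rng.normal(size=1000)

# Loss at one fixed quantile (e.g. the 0.9 quantile)...
q90 = np.quantile(y, 0.9)
print("pinball@0.9:", pinball_loss(y, q90, 0.9))

# ...versus the expected pinball loss over tau ~ Uniform(0, 1), approximated by
# Monte Carlo, for a "model" that predicts the empirical quantile curve q(tau).
taus = rng.uniform(0.0, 1.0, size=256)
expected = np.mean([pinball_loss(y, np.quantile(y, t), t) for t in taus])
print("expected pinball over tau ~ U(0,1):", expected)
```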

Understanding Memory B Cell Selection

no code implementations • 9 Dec 2020 • Stephen Lindsly, Maya Gupta, Cooper Stansbury, Indika Rajapakse

However, memory B cells appear to be purposely selected earlier in the affinity maturation process and have lower affinity.

Deep k-NN for Noisy Labels

no code implementations • ICML 2020 • Dara Bahri, Heinrich Jiang, Maya Gupta

Modern machine learning models are often trained on examples with noisy labels that hurt performance and are hard to identify.

BIG-bench Machine Learning
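
A minimal sketch of the k-NN filtering idea, with raw feature space standing in for the deep representations the paper actually operates on: flag training examples whose label disagrees with the majority of their k nearest neighbors:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# Corrupt 10% of the labels to simulate label noise.
noisy = rng.choice(len(y), size=50, replace=False)
y_noisy = y.copy()
y_noisy[noisy] = 1 - y_noisy[noisy]

# For each example, look at its k nearest neighbors (excluding itself) and
# flag it if its label disagrees with the neighborhood majority.
k = 10
nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
_, idx = nn.kneighbors(X)
neighbor_labels = y_noisy[idx[:, 1:]]          # drop the self-match in column 0
agreement = (neighbor_labels == y_noisy[:, None]).mean(axis=1)
flagged = np.where(agreement < 0.5)[0]

recovered = np.intersect1d(flagged, noisy)
print(f"flagged {len(flagged)} examples, {len(recovered)} of the 50 corrupted")
```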

Robust Optimization for Fairness with Noisy Protected Groups

1 code implementation • NeurIPS 2020 • Serena Wang, Wenshuo Guo, Harikrishna Narasimhan, Andrew Cotter, Maya Gupta, Michael I. Jordan

Second, we introduce two new approaches using robust optimization that, unlike the naive approach of only relying on $\hat{G}$, are guaranteed to satisfy fairness criteria on the true protected groups G while minimizing a training objective.

Fairness
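
To see why relying on noisy group labels is risky, the sketch below measures a demographic-parity gap on noisy groups and then stress-tests it by sampling hypothetical true group assignments consistent with an assumed flip rate. Sampling is far weaker than the paper's robust-optimization guarantees, and all data here is synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
y_pred = rng.integers(0, 2, size=n)        # a model's binary predictions
g_hat = rng.integers(0, 2, size=n)         # noisy protected-group labels
gamma = 0.2                                # assumed flip rate of the group labels

def parity_gap(pred, groups):
    """Demographic-parity gap: difference in positive-prediction rate across groups."""
    return abs(pred[groups == 0].mean() - pred[groups == 1].mean())

print("gap measured on noisy groups:", parity_gap(y_pred, g_hat))

# Crude stress test: sample many hypothetical true group assignments consistent
# with the flip-rate noise model and take the worst observed gap.
worst = max(
    parity_gap(y_pred, np.where(rng.uniform(size=n) < gamma, 1 - g_hat, g_hat))
    for _ in range(500)
)
print("worst sampled gap on hypothetical true groups:", worst)
```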

Optimizing Black-box Metrics with Adaptive Surrogates

no code implementations • ICML 2020 • Qijia Jiang, Olaoluwa Adigun, Harikrishna Narasimhan, Mahdi Milani Fard, Maya Gupta

We address the problem of training models with black-box and hard-to-optimize metrics by expressing the metric as a monotonic function of a small number of easy-to-optimize surrogates.
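
A minimal sketch of that idea with an assumed toy metric: fit nonnegative weights so that a monotone (nonnegative-weight linear) combination of surrogate losses approximates the black-box metric, after which minimizing the weighted surrogates serves as a proxy for minimizing the metric:

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)

# Pretend black-box metric: an unknown monotone combination of two surrogate
# losses, observed at a handful of candidate models (all synthetic).
surrogates = rng.uniform(size=(30, 2))            # columns: surrogate losses
metric = 2.0 * surrogates[:, 0] + 0.5 * surrogates[:, 1] ** 1.2

# Nonnegative least squares keeps the fitted combination monotone in each
# surrogate, so decreasing the weighted surrogates cannot increase the fit.
w, residual = nnls(surrogates, metric)
print("weights:", w, "fit residual:", residual)
```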

Deontological Ethics By Monotonicity Shape Constraints

1 code implementation • 31 Jan 2020 • Serena Wang, Maya Gupta

We demonstrate how easy it is for modern machine-learned systems to violate common deontological ethical principles and social norms such as "favor the less fortunate," and "do not penalize good attributes."

Ethics • Fairness

Optimizing Generalized Rate Metrics with Three Players

2 code implementations • NeurIPS 2019 • Harikrishna Narasimhan, Andrew Cotter, Maya Gupta

We present a general framework for solving a large class of learning problems with non-linear functions of classification rates.

Fairness

On Making Stochastic Classifiers Deterministic

1 code implementation • NeurIPS 2019 • Andrew Cotter, Maya Gupta, Harikrishna Narasimhan

Stochastic classifiers arise in a number of machine learning problems, and have become especially prominent of late, as they often result from constrained optimization problems, e.g. for fairness, churn, or custom losses.

Fairness

Optimizing Generalized Rate Metrics through Game Equilibrium

no code implementations • 6 Sep 2019 • Harikrishna Narasimhan, Andrew Cotter, Maya Gupta

We present a general framework for solving a large class of learning problems with non-linear functions of classification rates.

Fairness

Pairwise Fairness for Ranking and Regression

1 code implementation • 12 Jun 2019 • Harikrishna Narasimhan, Andrew Cotter, Maya Gupta, Serena Wang

We present pairwise fairness metrics for ranking models and regression models that form analogues of statistical fairness notions such as equal opportunity, equal accuracy, and statistical parity.

Fairness • General Classification • +1
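
A minimal sketch of one such pairwise metric on synthetic data: an equal-opportunity-style comparison of how often the model correctly orders a pair, depending on which group the truly better item belongs to. The data generation and this specific comparison are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
group = rng.integers(0, 2, size=n)              # protected group per item
label = rng.uniform(size=n)                     # ground-truth relevance
score = label + rng.normal(scale=0.3, size=n)   # model's ranking score

def pairwise_accuracy(score, label, mask_i, mask_j):
    """P(score_i > score_j | label_i > label_j) over pairs with i in mask_i, j in mask_j."""
    i, j = np.where(mask_i)[0], np.where(mask_j)[0]
    li, lj = label[i][:, None], label[j][None, :]
    si, sj = score[i][:, None], score[j][None, :]
    valid = li > lj
    return np.mean((si > sj)[valid])

# The model should order pairs correctly equally often whether the truly
# better item is in group 0 or group 1.
acc_g0 = pairwise_accuracy(score, label, group == 0, group == 1)
acc_g1 = pairwise_accuracy(score, label, group == 1, group == 0)
print(f"pairwise accuracy, better item in g0: {acc_g0:.3f}, in g1: {acc_g1:.3f}")
```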

Minimum-Margin Active Learning

no code implementations • 31 May 2019 • Heinrich Jiang, Maya Gupta

We present a new active sampling method we call min-margin which trains multiple learners on bootstrap samples and then chooses the examples to label based on the candidates' minimum margin amongst the bootstrapped models.

Active Learning
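
The abstract describes the algorithm directly, so a sketch is straightforward; the synthetic datasets, base learner, and batch size below are assumptions:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_lab, y_lab = make_classification(n_samples=100, n_features=5, random_state=0)
X_pool, _ = make_classification(n_samples=1000, n_features=5, random_state=1)

# Train several learners on bootstrap resamples of the labeled data.
margins = []
for seed in range(10):
    idx = rng.integers(0, len(X_lab), size=len(X_lab))      # bootstrap sample
    clf = LogisticRegression(max_iter=1000).fit(X_lab[idx], y_lab[idx])
    p = clf.predict_proba(X_pool)
    margins.append(np.abs(p[:, 1] - p[:, 0]))               # per-model margin

# Min-margin: label the pool points whose *minimum* margin across the
# bootstrapped models is smallest (most uncertain under at least one model).
min_margin = np.min(margins, axis=0)
to_label = np.argsort(min_margin)[:20]
print("indices to send for labeling:", to_label)
```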

Diminishing Returns Shape Constraints for Interpretability and Regularization

no code implementations • NeurIPS 2018 • Maya Gupta, Dara Bahri, Andrew Cotter, Kevin Canini

We investigate machine learning models that can provide diminishing returns and accelerating returns guarantees to capture prior knowledge or policies about how outputs should depend on inputs.

BIG-bench Machine Learning
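
Diminishing returns is just concavity along the constrained input, which can be sanity-checked with second differences; the toy spend-to-clicks response below is an assumption:

```python
import numpy as np

# Toy response: output should show diminishing returns in ad spend.
spend = np.linspace(0.0, 10.0, 101)
clicks = 100.0 * np.log1p(spend)

# Diminishing returns <=> concavity <=> non-positive second differences.
second_diff = np.diff(clicks, n=2)
print("diminishing returns hold:", np.all(second_diff <= 1e-9))
```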

Optimization with Non-Differentiable Constraints with Applications to Fairness, Recall, Churn, and Other Goals

1 code implementation • 11 Sep 2018 • Andrew Cotter, Heinrich Jiang, Serena Wang, Taman Narayan, Maya Gupta, Seungil You, Karthik Sridharan

This new formulation leads to an algorithm that produces a stochastic classifier by playing a two-player non-zero-sum game solving for what we call a semi-coarse correlated equilibrium, which in turn corresponds to an approximately optimal and feasible solution to the constrained optimization problem.

Fairness
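
A schematic of the two-player idea — not the paper's exact proxy-Lagrangian algorithm or its equilibrium guarantees: the model player descends a differentiable proxy of a rate constraint, while the multiplier player ascends using the true non-differentiable rate. The threshold-tuning setup and all constants are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
scores = rng.normal(size=2000)                              # fixed model scores
labels = (scores + rng.normal(size=2000) > 0).astype(float)

# Learn a decision threshold t subject to: positive-prediction rate <= 0.3.
t, lam, lr, temp = 0.0, 0.0, 0.05, 0.1
for _ in range(500):
    def proxy_lagrangian(th):
        sig = 1.0 / (1.0 + np.exp(-(scores - th) / temp))   # smooth "predict positive"
        return np.mean(np.abs(sig - labels)) + lam * np.mean(sig)
    eps = 1e-3
    g = (proxy_lagrangian(t + eps) - proxy_lagrangian(t)) / eps
    t -= lr * g                                             # theta-player: proxy descent
    true_rate = np.mean(scores > t)                         # true, non-differentiable rate
    lam = max(0.0, lam + lr * (true_rate - 0.3))            # lambda-player: true ascent

print(f"threshold {t:.3f}, positive rate {np.mean(scores > t):.3f} (target <= 0.3)")
```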

Constrained Interacting Submodular Groupings

no code implementations • ICML 2018 • Andrew Cotter, Mahdi Milani Fard, Seungil You, Maya Gupta, Jeff Bilmes

We introduce the problem of grouping a finite ground set into blocks where each block is a subset of the ground set and where: (i) the blocks are individually highly valued by a submodular function (both robustly and in the average case) while satisfying block-specific matroid constraints; and (ii) block scores interact where blocks are jointly scored highly, thus making the blocks mutually non-redundant.
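
A greedy sketch in the spirit of the problem — not the paper's algorithm, which also handles robust scoring and interacting block scores: assign items to capacity-limited blocks (a partition-matroid-style constraint) by largest marginal gain of a coverage-style submodular function. The coverage function and sizes are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n_items, n_blocks, block_size = 30, 3, 10
features = rng.integers(0, 2, size=(n_items, 8)).astype(bool)  # item "covers" features

def block_value(block):
    """Coverage: number of distinct features covered -- a submodular set function."""
    if not block:
        return 0
    return np.any(features[list(block)], axis=0).sum()

# Greedy: repeatedly take the (item, block) pair with the largest marginal
# gain, respecting each block's capacity.
blocks = [set() for _ in range(n_blocks)]
unassigned = set(range(n_items))
while unassigned:
    i, b = max(
        ((i, b) for i in unassigned for b in range(n_blocks)
         if len(blocks[b]) < block_size),
        key=lambda ib: block_value(blocks[ib[1]] | {ib[0]}) - block_value(blocks[ib[1]]),
    )
    blocks[b].add(i)
    unassigned.remove(i)

print([sorted(b) for b in blocks], [block_value(b) for b in blocks])
```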

Training Well-Generalizing Classifiers for Fairness Metrics and Other Data-Dependent Constraints

1 code implementation • 29 Jun 2018 • Andrew Cotter, Maya Gupta, Heinrich Jiang, Nathan Srebro, Karthik Sridharan, Serena Wang, Blake Woodworth, Seungil You

Classifiers can be trained with data-dependent constraints to satisfy fairness goals, reduce churn, achieve a targeted false positive rate, or other policy goals.

Fairness

Quit When You Can: Efficient Evaluation of Ensembles with Ordering Optimization

no code implementations • 28 Jun 2018 • Serena Wang, Maya Gupta, Seungil You

Given a classifier ensemble and a set of examples to be classified, many examples may be confidently and accurately classified after only a subset of the base models in the ensemble are evaluated.

Combinatorial Optimization
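
A minimal sketch of the early-exit idea on simulated scores: evaluate base models in sequence and stop once the running average is confidently far from the decision threshold. The fixed-margin stopping rule here is a simple stand-in for the paper's ordering optimization and stopping criteria:

```python
import numpy as np

rng = np.random.default_rng(0)
n_models, n_examples = 10, 1000

# Simulated per-model scores in [0, 1]; the full ensemble averages all of them.
scores = np.clip(rng.normal(loc=0.6, scale=0.25, size=(n_models, n_examples)), 0, 1)

def early_exit_predict(example_scores, threshold=0.5, margin=0.15):
    """Evaluate base models in order; stop once the running mean is more than
    `margin` away from the decision threshold (a confidence heuristic)."""
    total = 0.0
    for m, s in enumerate(example_scores, start=1):
        total += s
        mean = total / m
        if abs(mean - threshold) > margin:
            return mean > threshold, m          # early exit after m models
    return mean > threshold, len(example_scores)

results = [early_exit_predict(scores[:, j]) for j in range(n_examples)]
fast = np.array([r[0] for r in results])
evals = np.array([r[1] for r in results])
full = scores.mean(axis=0) > 0.5
print(f"avg models evaluated: {evals.mean():.2f} of {n_models}, "
      f"agreement with full ensemble: {np.mean(fast == full):.3f}")
```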

Proxy Fairness

no code implementations • 28 Jun 2018 • Maya Gupta, Andrew Cotter, Mahdi Milani Fard, Serena Wang

We consider the problem of improving fairness when one lacks access to a dataset labeled with protected groups, making it difficult to take advantage of strategies that can improve fairness but require protected group labels, either at training or runtime.

Fairness

Interpretable Set Functions

no code implementations • 31 May 2018 • Andrew Cotter, Maya Gupta, Heinrich Jiang, James Muller, Taman Narayan, Serena Wang, Tao Zhu

We propose learning flexible but interpretable functions that aggregate a variable-length set of permutation-invariant feature vectors to predict a label.

To Trust Or Not To Trust A Classifier

1 code implementation • NeurIPS 2018 • Heinrich Jiang, Been Kim, Melody Y. Guan, Maya Gupta

Knowing when a classifier's prediction can be trusted is useful in many applications and critical for safely using AI.

Topological Data Analysis
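
The paper's trust score compares distances to the nearest training examples of the predicted class versus any other class. The sketch below implements that ratio directly, omitting the paper's high-density filtering step; the dataset and classifier are assumptions:

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import NearestNeighbors

X, y = make_blobs(n_samples=500, centers=2, cluster_std=2.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = LogisticRegression().fit(X_tr, y_tr)

# One nearest-neighbor index per class over the training set.
nn_by_class = {c: NearestNeighbors(n_neighbors=1).fit(X_tr[y_tr == c])
               for c in np.unique(y_tr)}

def trust_scores(X_test, preds):
    """Distance to the nearest training point of any *other* class, divided by
    distance to the nearest training point of the *predicted* class."""
    d = {c: nn.kneighbors(X_test)[0][:, 0] for c, nn in nn_by_class.items()}
    d_pred = np.array([d[p][i] for i, p in enumerate(preds)])
    d_other = np.array([min(d[c][i] for c in d if c != p)
                        for i, p in enumerate(preds)])
    return d_other / np.maximum(d_pred, 1e-12)

scores = trust_scores(X_te, clf.predict(X_te))
print("low-trust predictions (score < 1):", np.sum(scores < 1.0), "of", len(scores))
```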

Metric-Optimized Example Weights

no code implementations • ICLR 2019 • Sen Zhao, Mahdi Milani Fard, Harikrishna Narasimhan, Maya Gupta

Real-world machine learning applications often have complex test metrics, and may have training and test data that are not identically distributed.

Deep Lattice Networks and Partial Monotonic Functions

no code implementations • NeurIPS 2017 • Seungil You, David Ding, Kevin Canini, Jan Pfeifer, Maya Gupta

We propose learning deep models that are monotonic with respect to a user-specified set of inputs by alternating layers of linear embeddings, ensembles of lattices, and calibrators (piecewise linear functions), with appropriate constraints for monotonicity, and jointly training the resulting network.

General Classification • regression
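
A schematic of one calibrator-plus-lattice block in plain NumPy (the team's open-source TensorFlow Lattice library provides the real layers; everything below is a hand-rolled illustration): a nondecreasing piecewise-linear calibrator feeds a 2x2 lattice whose corner values increase along the first axis, so the composite stays monotone in x0:

```python
import numpy as np

cal_in = np.array([0.0, 0.25, 0.5, 0.75, 1.0])   # calibrator keypoints
cal_out = np.array([0.0, 0.1, 0.5, 0.9, 1.0])    # nondecreasing -> monotone

lattice = np.array([[0.0, 0.3],                  # corners f(0,0), f(0,1)
                    [0.6, 1.0]])                 # corners f(1,0), f(1,1)

def forward(x0, x1):
    c = np.interp(x0, cal_in, cal_out)           # calibrate the first input
    # Bilinear interpolation of the lattice corner values on the unit square.
    return ((1 - c) * (1 - x1) * lattice[0, 0] + (1 - c) * x1 * lattice[0, 1]
            + c * (1 - x1) * lattice[1, 0] + c * x1 * lattice[1, 1])

xs = np.linspace(0, 1, 6)
print(np.round([forward(x, 0.5) for x in xs], 3))  # nondecreasing in x0
```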

Launch and Iterate: Reducing Prediction Churn

no code implementations • NeurIPS 2016 • Mahdi Milani Fard, Quentin Cormier, Kevin Canini, Maya Gupta

Practical applications of machine learning often involve successive training iterations with changes to features and training examples.
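
Churn itself is a one-line metric: the fraction of examples whose prediction changes between model versions. A tiny sketch on simulated predictions (the flip rate is an assumption):

```python
import numpy as np

rng = np.random.default_rng(0)
preds_v1 = rng.integers(0, 2, size=10000)                  # old model
preds_v2 = np.where(rng.uniform(size=10000) < 0.08,        # new model flips 8%
                    1 - preds_v1, preds_v1)

# Two models can have identical accuracy yet high churn, which is what makes
# successive launches hard to evaluate.
churn = np.mean(preds_v1 != preds_v2)
print(f"churn: {churn:.3f}")
```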

Fast and Flexible Monotonic Functions with Ensembles of Lattices

no code implementations • NeurIPS 2016 • Mahdi Milani Fard, Kevin Canini, Andrew Cotter, Jan Pfeifer, Maya Gupta

For many machine learning problems, there are some inputs that are known to be positively (or negatively) related to the output, and in such cases training the model to respect that monotonic relationship can provide regularization and make the model more interpretable.

Satisfying Real-world Goals with Dataset Constraints

no code implementations • NeurIPS 2016 • Gabriel Goh, Andrew Cotter, Maya Gupta, Michael Friedlander

The goal of minimizing misclassification error on a training set is often just one of several real-world goals that might be defined on different datasets.

Fairness

A Light Touch for Heavily Constrained SGD

no code implementations • 15 Dec 2015 • Andrew Cotter, Maya Gupta, Jan Pfeifer

Minimizing empirical risk subject to a set of constraints can be a useful strategy for learning restricted classes of functions, such as monotonic functions, submodular functions, classifiers that guarantee a certain class label for some subset of examples, etc.

Multi-Task Averaging

no code implementations • NeurIPS 2012 • Sergey Feldman, Maya Gupta, Bela Frigyik

We present a multi-task learning approach to jointly estimate the means of multiple independent data sets.

Multi-Task Learning
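
A minimal sketch of the multi-task flavor, with a fixed shrinkage factor standing in for the data-dependent weights the paper derives: shrink each task's sample mean toward the average of all task means, which helps when tasks are related and per-task samples are few:

```python
import numpy as np

rng = np.random.default_rng(0)
true_means = rng.normal(scale=0.3, size=20)            # 20 closely related tasks
samples = [m + rng.normal(size=5) for m in true_means] # only 5 samples per task

sample_means = np.array([s.mean() for s in samples])
grand_mean = sample_means.mean()

# Shrink each task's sample mean toward the average of all task means.
# (A fixed factor here; the paper derives data-dependent weights.)
alpha = 0.3
mta = (1 - alpha) * sample_means + alpha * grand_mean

def mse(est):
    return np.mean((est - true_means) ** 2)

print(f"MSE single-task: {mse(sample_means):.3f}, MTA-style: {mse(mta):.3f}")
```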

Shadow Dirichlet for Restricted Probability Modeling

no code implementations • NeurIPS 2010 • Bela Frigyik, Maya Gupta, Yihua Chen

Although the Dirichlet distribution is widely used, the independence structure of its components limits its accuracy as a model.

Lattice Regression

no code implementations • NeurIPS 2009 • Eric Garcia, Maya Gupta

We present a new empirical risk minimization framework for approximating functions from training samples for low-dimensional regression applications where a lattice (look-up table) is stored and interpolated at run-time for an efficient hardware implementation.

Management • regression
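
A minimal 1-D sketch of the idea (the paper treats multi-dimensional lattices and interpolation-aware regularization; the toy function, knot count, and ridge penalty below are assumptions): build the interpolation weight matrix for the look-up table and fit the table values by regularized least squares:

```python
import numpy as np

rng = np.random.default_rng(0)

# Training data from an unknown 1-D function, to be baked into a small
# look-up table that is linearly interpolated at run-time.
x = rng.uniform(0.0, 1.0, size=200)
y = np.sin(3.0 * x) + 0.1 * rng.normal(size=200)

knots = np.linspace(0.0, 1.0, 6)                  # lattice nodes

# Linear-interpolation weight matrix W: each row has two nonzero entries,
# so W @ v is the interpolated prediction for lattice values v.
W = np.zeros((len(x), len(knots)))
j = np.clip(np.searchsorted(knots, x) - 1, 0, len(knots) - 2)
frac = (x - knots[j]) / (knots[j + 1] - knots[j])
W[np.arange(len(x)), j] = 1.0 - frac
W[np.arange(len(x)), j + 1] = frac

# Empirical risk minimization for the lattice values (ridge for stability).
v = np.linalg.solve(W.T @ W + 1e-3 * np.eye(len(knots)), W.T @ y)
print("fitted lattice values:", np.round(v, 3))
```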
