no code implementations • ICML 2020 • Maya Gupta, Erez Louidor, Oleksandr Mangylov, Nobu Morioka, Taman Narayan, Sen Zhao
We propose new multi-input shape constraints across four intuitive categories: complements, diminishers, dominance, and unimodality constraints.
no code implementations • 2 Feb 2022 • Sen Zhao, Erez Louidor, Olexander Mangylov, Maya Gupta
We consider the problem of estimating a good maximizer of a black-box function given noisy examples.
no code implementations • 9 Feb 2021 • Taman Narayan, Serena Wang, Kevin Canini, Maya Gupta
We show that minimizing an expected pinball loss over a continuous distribution of quantiles is a good regularizer even when only predicting a specific quantile.
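A minimal sketch of the idea, assuming quantiles drawn uniformly from (0, 1) and Monte Carlo averaging; the names here are illustrative, not the paper's code:

```python
import numpy as np

def pinball_loss(y_true, y_pred, tau):
    # Pinball (quantile) loss: penalizes under-prediction with weight tau
    # and over-prediction with weight (1 - tau).
    diff = y_true - y_pred
    return np.mean(np.maximum(tau * diff, (tau - 1.0) * diff))

# Approximate the expected pinball loss over a continuous distribution of
# quantiles by sampling, rather than training on one fixed quantile.
rng = np.random.default_rng(0)
y_true = rng.normal(size=100)
y_pred = np.zeros(100)
taus = rng.uniform(0.0, 1.0, size=32)
print(np.mean([pinball_loss(y_true, y_pred, t) for t in taus]))
```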
no code implementations • 9 Dec 2020 • Stephen Lindsly, Maya Gupta, Cooper Stansbury, Indika Rajapakse
However, memory B cells appear to be purposely selected earlier in the affinity maturation process and have lower affinity.
no code implementations • ICML 2020 • Dara Bahri, Heinrich Jiang, Maya Gupta
Modern machine learning models are often trained on examples with noisy labels that hurt performance and are hard to identify.
1 code implementation • NeurIPS 2020 • Serena Wang, Wenshuo Guo, Harikrishna Narasimhan, Andrew Cotter, Maya Gupta, Michael I. Jordan
Second, we introduce two new approaches using robust optimization that, unlike the naive approach of only relying on $\hat{G}$, are guaranteed to satisfy fairness criteria on the true protected groups $G$ while minimizing a training objective.
no code implementations • ICML 2020 • Qijia Jiang, Olaoluwa Adigun, Harikrishna Narasimhan, Mahdi Milani Fard, Maya Gupta
We address the problem of training models with black-box and hard-to-optimize metrics by expressing the metric as a monotonic function of a small number of easy-to-optimize surrogates.
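A toy sketch of the setup, assuming for illustration that the monotone link is a nonnegative linear combination of the surrogates (the data and names are invented, not the paper's):

```python
import numpy as np
from scipy.optimize import nnls

# Toy data: each row holds the values of k easy-to-optimize surrogates for
# one model configuration; m is the black-box metric measured there.
rng = np.random.default_rng(0)
S = rng.uniform(size=(50, 3))
m = 2.0 * S[:, 0] + 0.5 * S[:, 2] + rng.normal(scale=0.01, size=50)

# Nonnegative weights make the fitted link monotone in every surrogate:
# improving any single surrogate can only improve the predicted metric.
weights, _ = nnls(S, m)
print(weights)
```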
1 code implementation • 31 Jan 2020 • Serena Wang, Maya Gupta
We demonstrate how easy it is for modern machine-learned systems to violate common deontological ethical principles and social norms such as "favor the less fortunate" and "do not penalize good attributes."
2 code implementations • NeurIPS 2019 • Harikrishna Narasimhan, Andrew Cotter, Maya Gupta
We present a general framework for solving a large class of learning problems with non-linear functions of classification rates.
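For concreteness, a metric like F1 is a non-linear (ratio) function of the basic classification rates, which is what makes it awkward for standard empirical-risk minimization. A small illustrative helper, not the paper's framework:

```python
import numpy as np

def classification_rates(y_true, y_pred):
    # Empirical true-positive, false-positive, and false-negative rates.
    tp = np.mean((y_true == 1) & (y_pred == 1))
    fp = np.mean((y_true == 0) & (y_pred == 1))
    fn = np.mean((y_true == 1) & (y_pred == 0))
    return tp, fp, fn

def f1_from_rates(tp, fp, fn):
    # F1 is a ratio of rates, hence non-linear in them.
    return 2.0 * tp / (2.0 * tp + fp + fn)

y_true = np.array([1, 1, 0, 0, 1])
y_pred = np.array([1, 0, 0, 1, 1])
print(f1_from_rates(*classification_rates(y_true, y_pred)))
```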
1 code implementation • NeurIPS 2019 • Andrew Cotter, Maya Gupta, Harikrishna Narasimhan
Stochastic classifiers arise in a number of machine learning problems, and have become especially prominent of late, as they often result from constrained optimization problems, e.g., for fairness, churn, or custom losses.
1 code implementation • 12 Jun 2019 • Harikrishna Narasimhan, Andrew Cotter, Maya Gupta, Serena Wang
We present pairwise fairness metrics for ranking models and regression models that form analogues of statistical fairness notions such as equal opportunity, equal accuracy, and statistical parity.
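A minimal sketch of one such pairwise notion, assuming binary relevance labels and a group label per example; the group-gap helper is illustrative, not the paper's exact metric:

```python
import numpy as np

def pairwise_accuracy(scores, labels):
    # Fraction of (positive, negative) pairs that the scores order correctly.
    pos, neg = scores[labels == 1], scores[labels == 0]
    return np.mean(pos[:, None] > neg[None, :])

def group_pairwise_gap(scores, labels, groups):
    # Equal-opportunity-style check: pairwise accuracy per group should be
    # similar. Assumes every group contains both positives and negatives.
    accs = [pairwise_accuracy(scores[groups == g], labels[groups == g])
            for g in np.unique(groups)]
    return max(accs) - min(accs)
```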
no code implementations • 31 May 2019 • Heinrich Jiang, Maya Gupta
We present a new active sampling method we call min-margin which trains multiple learners on bootstrap samples and then chooses the examples to label based on the candidates' minimum margin amongst the bootstrapped models.
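A minimal sketch of the sampling rule, assuming a probabilistic binary classifier and that every bootstrap resample contains both classes (names are illustrative):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def min_margin_scores(X_pool, X_train, y_train, n_models=10, seed=0):
    # Train one model per bootstrap resample, then score each unlabeled
    # example by its smallest margin |p - 0.5| across the models; the
    # lowest-scoring examples are the ones to send for labeling.
    rng = np.random.default_rng(seed)
    margins = []
    for _ in range(n_models):
        idx = rng.integers(0, len(X_train), size=len(X_train))
        model = LogisticRegression().fit(X_train[idx], y_train[idx])
        p = model.predict_proba(X_pool)[:, 1]
        margins.append(np.abs(p - 0.5))
    return np.min(margins, axis=0)
```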
no code implementations • NeurIPS 2018 • Maya Gupta, Dara Bahri, Andrew Cotter, Kevin Canini
We investigate machine learning models that can provide diminishing returns and accelerating returns guarantees to capture prior knowledge or policies about how outputs should depend on inputs.
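A diminishing-returns guarantee is just concavity in the input; a piecewise-linear function whose segment slopes are projected to be nonincreasing provides it by construction (accelerating returns would use nondecreasing slopes instead). A minimal sketch, not the paper's models:

```python
import numpy as np

def concave_pwl(x, knots, slopes):
    # Piecewise-linear f with f(knots[0]) = 0. Projecting the per-segment
    # slopes to be nonincreasing makes f concave, i.e. gives a
    # diminishing-returns guarantee; swap in np.maximum.accumulate for
    # accelerating returns (convexity).
    slopes = np.minimum.accumulate(slopes)
    widths = np.diff(knots)
    seg = np.clip(x[:, None] - knots[:-1][None, :], 0.0, widths[None, :])
    return seg @ slopes

knots = np.array([0.0, 1.0, 2.0, 3.0])
slopes = np.array([2.0, 1.0, 0.25])
print(concave_pwl(np.array([0.5, 1.5, 2.5]), knots, slopes))
```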
1 code implementation • 11 Sep 2018 • Andrew Cotter, Heinrich Jiang, Serena Wang, Taman Narayan, Maya Gupta, Seungil You, Karthik Sridharan
This new formulation leads to an algorithm that produces a stochastic classifier by playing a two-player non-zero-sum game solving for what we call a semi-coarse correlated equilibrium, which in turn corresponds to an approximately optimal and feasible solution to the constrained optimization problem.
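The full construction is involved; as a rough sketch only, here is plain Lagrangian descent-ascent for a single rate constraint (a cap on the positive-prediction rate), with the model player descending on a differentiable surrogate and the multiplier player ascending on the true violation. This simplification omits the paper's proxy constraints and equilibrium analysis:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lagrangian_train(X, y, max_pos_rate, steps=2000, lr=0.5, lr_lam=0.5):
    # Model player: minimize logistic loss + lam * soft positive rate.
    # Multiplier player: projected gradient ascent on the hard violation.
    n, d = X.shape
    w, lam = np.zeros(d), 0.0
    for _ in range(steps):
        p = sigmoid(X @ w)
        grad_loss = X.T @ (p - y) / n            # logistic-loss gradient
        grad_rate = X.T @ (p * (1.0 - p)) / n    # gradient of mean(p)
        w -= lr * (grad_loss + lam * grad_rate)
        violation = np.mean(X @ w > 0) - max_pos_rate
        lam = max(0.0, lam + lr_lam * violation)  # keep lam >= 0
    return w, lam
```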
no code implementations • ICML 2018 • Andrew Cotter, Mahdi Milani Fard, Seungil You, Maya Gupta, Jeff Bilmes
We introduce the problem of grouping a finite ground set into blocks where each block is a subset of the ground set and where: (i) the blocks are individually highly valued by a submodular function (both robustly and in the average case) while satisfying block-specific matroid constraints; and (ii) block scores interact where blocks are jointly scored highly, thus making the blocks mutually non-redundant.
1 code implementation • 29 Jun 2018 • Andrew Cotter, Maya Gupta, Heinrich Jiang, Nathan Srebro, Karthik Sridharan, Serena Wang, Blake Woodworth, Seungil You
Classifiers can be trained with data-dependent constraints to satisfy fairness goals, reduce churn, achieve a targeted false positive rate, or other policy goals.
no code implementations • 28 Jun 2018 • Serena Wang, Maya Gupta, Seungil You
Given a classifier ensemble and a set of examples to be classified, many examples may be confidently and accurately classified after only a subset of the base models in the ensemble are evaluated.
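A sketch of the evaluation-time idea with a hypothetical fixed confidence threshold (the paper learns when to stop rather than hard-coding it):

```python
import numpy as np

def early_exit_predict(models, x, threshold=0.9):
    # Evaluate base models one at a time; stop as soon as the running
    # average probability is confidently on one side of the boundary.
    probs = []
    for model in models:
        probs.append(model.predict_proba(x.reshape(1, -1))[0, 1])
        p = np.mean(probs)
        if p > threshold or p < 1.0 - threshold:
            break
    return int(np.mean(probs) > 0.5), len(probs)  # label, models evaluated
```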
no code implementations • 28 Jun 2018 • Maya Gupta, Andrew Cotter, Mahdi Milani Fard, Serena Wang
We consider the problem of improving fairness when one lacks access to a dataset labeled with protected groups, making it difficult to take advantage of strategies that can improve fairness but require protected group labels, either at training or runtime.
no code implementations • 31 May 2018 • Andrew Cotter, Maya Gupta, Heinrich Jiang, James Muller, Taman Narayan, Serena Wang, Tao Zhu
We propose learning flexible but interpretable functions that aggregate a variable-length set of permutation-invariant feature vectors to predict a label.
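A minimal sketch of the permutation-invariance part: summarize the variable-length set with order-independent statistics and feed the fixed-length summary to any downstream model (the paper learns richer aggregation functions than these):

```python
import numpy as np

def aggregate_set(feature_vectors):
    # Mean, max, and set size are all invariant to the order of the set,
    # so any model trained on this summary is permutation-invariant too.
    fv = np.asarray(feature_vectors, dtype=float)
    return np.concatenate([fv.mean(axis=0), fv.max(axis=0), [len(fv)]])

print(aggregate_set([[1.0, 2.0], [3.0, 0.0], [2.0, 2.0]]))
```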
1 code implementation • NeurIPS 2018 • Heinrich Jiang, Been Kim, Melody Y. Guan, Maya Gupta
Knowing when a classifier's prediction can be trusted is useful in many applications and critical for safely using AI.
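A simplified sketch of the paper's distance-ratio trust score, omitting its density-based filtering of the training set: distance to the nearest class other than the predicted one, divided by distance to the predicted class, so larger means more trustworthy.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def trust_scores(X_train, y_train, X_test, y_pred):
    # One nearest-neighbor index per class.
    nns = {c: NearestNeighbors(n_neighbors=1).fit(X_train[y_train == c])
           for c in np.unique(y_train)}
    scores = []
    for x, c in zip(X_test, y_pred):
        dists = {k: nn.kneighbors(x.reshape(1, -1))[0][0, 0]
                 for k, nn in nns.items()}
        d_pred = dists.pop(c)  # distance to the predicted class
        scores.append(min(dists.values()) / (d_pred + 1e-12))
    return np.array(scores)
```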
no code implementations • ICLR 2019 • Sen Zhao, Mahdi Milani Fard, Harikrishna Narasimhan, Maya Gupta
Real-world machine learning applications often have complex test metrics, and may have training and test data that are not identically distributed.
no code implementations • NeurIPS 2017 • Seungil You, David Ding, Kevin Canini, Jan Pfeifer, Maya Gupta
We propose learning deep models that are monotonic with respect to a user-specified set of inputs by alternating layers of linear embeddings, ensembles of lattices, and calibrators (piecewise linear functions), with appropriate constraints for monotonicity, and jointly training the resulting network.
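One of the building blocks, a piecewise-linear calibrator, can be made monotonic by construction: keypoint outputs are cumulative sums of nonnegative (softplus-transformed) parameters. A minimal sketch of that single layer, not the full architecture:

```python
import numpy as np

def monotonic_calibrator(x, keypoints_in, raw_params):
    # Softplus keeps every increment nonnegative, so the interpolated
    # piecewise-linear function is nondecreasing regardless of raw_params.
    increments = np.log1p(np.exp(raw_params))
    keypoints_out = np.cumsum(increments)
    return np.interp(x, keypoints_in, keypoints_out)

keypoints_in = np.array([0.0, 0.5, 1.0])
raw_params = np.array([-1.0, 2.0, 0.3])
print(monotonic_calibrator(np.linspace(0.0, 1.0, 5), keypoints_in, raw_params))
```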
no code implementations • NeurIPS 2016 • Mahdi Milani Fard, Quentin Cormier, Kevin Canini, Maya Gupta
Practical applications of machine learning often involve successive training iterations with changes to features and training examples.
no code implementations • NeurIPS 2016 • Mahdi Milani Fard, Kevin Canini, Andrew Cotter, Jan Pfeifer, Maya Gupta
For many machine learning problems, there are some inputs that are known to be positively (or negatively) related to the output; in such cases, training the model to respect that monotonic relationship can provide regularization and make the model more interpretable.
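The simplest one-dimensional analogue is isotonic regression, which fits under a nondecreasing constraint (the paper itself uses monotonic lattices rather than this):

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0.0, 10.0, 200))
y = np.log1p(x) + rng.normal(scale=0.3, size=200)  # noisy increasing signal

iso = IsotonicRegression(increasing=True).fit(x, y)
print(iso.predict([1.0, 5.0, 9.0]))  # outputs respect the monotone constraint
```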
no code implementations • NeurIPS 2016 • Gabriel Goh, Andrew Cotter, Maya Gupta, Michael Friedlander
The goal of minimizing misclassification error on a training set is often just one of several real-world goals that might be defined on different datasets.
no code implementations • 15 Dec 2015 • Andrew Cotter, Maya Gupta, Jan Pfeifer
Minimizing empirical risk subject to a set of constraints can be a useful strategy for learning restricted classes of functions, such as monotonic functions, submodular functions, classifiers that guarantee a certain class label for some subset of examples, etc.
no code implementations • 23 May 2015 • Maya Gupta, Andrew Cotter, Jan Pfeifer, Konstantin Voevodski, Kevin Canini, Alexander Mangylov, Wojtek Moczydlowski, Alex van Esbroeck
Real-world machine learning applications may require functions that are fast-to-evaluate and interpretable.
no code implementations • NeurIPS 2012 • Sergey Feldman, Maya Gupta, Bela Frigyik
We present a multi-task learning approach to jointly estimate the means of multiple independent data sets.
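A James-Stein-flavored sketch of the idea: shrink each data set's sample mean toward the pooled mean, with more shrinkage when within-set noise is large relative to the spread between sets. A simplified stand-in for the paper's estimators:

```python
import numpy as np

def shrunk_means(datasets):
    means = np.array([np.mean(d) for d in datasets])
    sem2 = np.array([np.var(d, ddof=1) / len(d) for d in datasets])
    pooled = np.mean(means)
    between = np.var(means, ddof=1)   # spread of the per-task means
    w = between / (between + sem2)    # w near 0 => heavy shrinkage to pooled
    return w * means + (1.0 - w) * pooled

rng = np.random.default_rng(0)
tasks = [rng.normal(mu, 1.0, size=20) for mu in (0.0, 0.5, 1.0)]
print(shrunk_means(tasks))
```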
no code implementations • NeurIPS 2010 • Bela Frigyik, Maya Gupta, Yihua Chen
Although the Dirichlet distribution is widely used, the independence structure of its components limits its accuracy as a model.
no code implementations • NeurIPS 2009 • Eric Garcia, Maya Gupta
We present a new empirical risk minimization framework for approximating functions from training samples for low-dimensional regression applications where a lattice (look-up table) is stored and interpolated at run-time for an efficient hardware implementation.
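The run-time evaluation is just interpolation into the stored table; a sketch of the 2-D (bilinear) case with unit-spaced knots, where the lattice values themselves are the learned parameters:

```python
import numpy as np

def lattice_lookup_2d(table, x, y):
    # Clamp to the grid, then bilinearly interpolate the four surrounding
    # lattice values; cheap enough for a hardware look-up-table pipeline.
    x0 = min(max(int(np.floor(x)), 0), table.shape[0] - 2)
    y0 = min(max(int(np.floor(y)), 0), table.shape[1] - 2)
    fx, fy = x - x0, y - y0
    return ((1 - fx) * (1 - fy) * table[x0, y0]
            + fx * (1 - fy) * table[x0 + 1, y0]
            + (1 - fx) * fy * table[x0, y0 + 1]
            + fx * fy * table[x0 + 1, y0 + 1])

table = np.arange(9.0).reshape(3, 3)  # a learned 3x3 lattice, for example
print(lattice_lookup_2d(table, 0.5, 1.25))
```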