
2 code implementations • 23 Jul 2021 • Abhishek Kumar, Harikrishna Narasimhan, Andrew Cotter

We consider a popular family of constrained optimization problems arising in machine learning that involve optimizing a non-decomposable evaluation metric with a certain thresholded form, while constraining another metric of interest.

no code implementations • 9 Jul 2021 • Harikrishna Narasimhan, Aditya Krishna Menon

Many modern machine learning applications come with complex and nuanced design goals such as minimizing the worst-case error, satisfying a given precision or recall target, or enforcing group-fairness constraints.

no code implementations • 4 Jun 2021 • Heinrich Jiang, Harikrishna Narasimhan, Dara Bahri, Andrew Cotter, Afshin Rostamizadeh

In real-world systems, models are frequently updated as more data becomes available, and in addition to achieving high accuracy, the goal is to also maintain a low difference in predictions compared to the base model (i.e., predictive "churn").

1 code implementation • 18 Feb 2021 • Gaurush Hiranandani, Jatin Mathur, Harikrishna Narasimhan, Mahdi Milani Fard, Oluwasanmi Koyejo

We consider learning to optimize a classification metric defined by a black-box function of the confusion matrix.

no code implementations • 13 Feb 2021 • Andrew Cotter, Aditya Krishna Menon, Harikrishna Narasimhan, Ankit Singh Rawat, Sashank J. Reddi, Yichen Zhou

Distillation is the technique of training a "student" model based on examples that are labeled by a separate "teacher" model, which itself is trained on a labeled dataset.
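The teacher-student setup described above can be sketched in a few lines. Everything below (the toy data and the plain gradient-descent logistic learner) is hypothetical and only meant to show the pipeline: train a teacher on hard labels, relabel examples with the teacher's soft predictions, then train the student on those soft labels.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logistic(X, y, lr=0.5, steps=500):
    # Plain gradient-descent logistic regression; `y` may be soft labels.
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = sigmoid(X @ w)
        w -= lr * X.T @ (p - y) / len(y)
    return w

# Toy, nearly linearly separable data (hypothetical).
X = rng.normal(size=(200, 3))
y = (X @ np.array([1.5, -2.0, 0.5]) + 0.3 * rng.normal(size=200) > 0).astype(float)

# 1) Train the teacher on the labeled dataset.
w_teacher = train_logistic(X, y)
# 2) Relabel the examples with the teacher's soft predictions.
soft = sigmoid(X @ w_teacher)
# 3) Train the student on the teacher-labeled examples.
w_student = train_logistic(X, soft)

acc = np.mean((sigmoid(X @ w_student) > 0.5) == y.astype(bool))
```

In practice the student would be a smaller model than the teacher and would often be trained on a larger unlabeled pool; here both are identical logistic models purely for brevity.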

no code implementations • NeurIPS 2020 • Shiv Kumar Tavker, Harish Guruprasad Ramaswamy, Harikrishna Narasimhan

We present a statistically consistent algorithm for constrained classification problems where the objective (e.g., F-measure, G-mean) and the constraints (e.g., demographic parity, coverage) are defined by general functions of the confusion matrix.
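Both the objectives and constraints named above are functions of the confusion matrix. As a minimal illustration (with made-up counts), the F-measure and G-mean of a binary confusion matrix with entries tn, fp, fn, tp can be computed as:

```python
def f_measure(tn, fp, fn, tp):
    # F1 = 2*TP / (2*TP + FP + FN)
    return 2 * tp / (2 * tp + fp + fn)

def g_mean(tn, fp, fn, tp):
    # Geometric mean of the true positive rate and true negative rate.
    tpr = tp / (tp + fn)
    tnr = tn / (tn + fp)
    return (tpr * tnr) ** 0.5

# Hypothetical confusion-matrix counts.
f = f_measure(tn=50, fp=10, fn=5, tp=35)
g = g_mean(tn=50, fp=10, fn=5, tp=35)
```

Such metrics are non-decomposable: they cannot be written as a sum of per-example losses, which is what makes their direct optimization hard.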

no code implementations • NeurIPS 2020 • Harikrishna Narasimhan, Andrew Cotter, Yichen Zhou, Serena Wang, Wenshuo Guo

In machine learning applications such as ranking fairness or fairness over intersectional groups, one often encounters optimization problems with an extremely large number of constraints.

no code implementations • 3 Nov 2020 • Gaurush Hiranandani, Jatin Mathur, Harikrishna Narasimhan, Oluwasanmi Koyejo

Metric elicitation is a recent framework for eliciting performance metrics that best reflect implicit user preferences based on the application and context.

no code implementations • NeurIPS 2020 • Gaurush Hiranandani, Harikrishna Narasimhan, Oluwasanmi Koyejo

What is a fair performance metric?

1 code implementation • NeurIPS 2020 • Serena Wang, Wenshuo Guo, Harikrishna Narasimhan, Andrew Cotter, Maya Gupta, Michael I. Jordan

Second, we introduce two new approaches using robust optimization that, unlike the naive approach of only relying on $\hat{G}$, are guaranteed to satisfy fairness criteria on the true protected groups G while minimizing a training objective.

no code implementations • ICML 2020 • Qijia Jiang, Olaoluwa Adigun, Harikrishna Narasimhan, Mahdi Milani Fard, Maya Gupta

We address the problem of training models with black-box and hard-to-optimize metrics by expressing the metric as a monotonic function of a small number of easy-to-optimize surrogates.

1 code implementation • NeurIPS 2019 • Andrew Cotter, Maya Gupta, Harikrishna Narasimhan

Stochastic classifiers arise in a number of machine learning problems, and have become especially prominent of late, as they often result from constrained optimization problems, e.g., for fairness, churn, or custom losses.

1 code implementation • NeurIPS 2019 • Harikrishna Narasimhan, Andrew Cotter, Maya Gupta

We present a general framework for solving a large class of learning problems with non-linear functions of classification rates.

no code implementations • 6 Sep 2019 • Harikrishna Narasimhan, Andrew Cotter, Maya Gupta

We present a general framework for solving a large class of learning problems with non-linear functions of classification rates.

1 code implementation • 12 Jun 2019 • Harikrishna Narasimhan, Andrew Cotter, Maya Gupta, Serena Wang

We present pairwise fairness metrics for ranking models and regression models that form analogues of statistical fairness notions such as equal opportunity, equal accuracy, and statistical parity.
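As an illustration of the pairwise idea (a simplified version, not the paper's exact definitions), one can compute a pairwise accuracy restricted to pairs whose higher-relevance item belongs to a given group; a large gap between groups would signal a pairwise equal-opportunity violation. The data below is hypothetical:

```python
import numpy as np

def pairwise_accuracy(scores, labels, mask):
    # Fraction of pairs (i, j) with labels[i] > labels[j] that the model
    # orders correctly, restricted to pairs whose higher-label item i is
    # selected by `mask` (e.g., membership in one protected group).
    correct = total = 0
    n = len(scores)
    for i in range(n):
        if not mask[i]:
            continue
        for j in range(n):
            if labels[i] > labels[j]:
                total += 1
                correct += scores[i] > scores[j]
    return correct / total if total else float("nan")

# Hypothetical relevance labels, model scores, and group membership.
labels = np.array([1, 0, 1, 0, 1, 0])
scores = np.array([0.9, 0.2, 0.4, 0.6, 0.8, 0.1])
group  = np.array([True, True, True, False, False, False])

gap = abs(pairwise_accuracy(scores, labels, group)
          - pairwise_accuracy(scores, labels, ~group))
```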

no code implementations • ICLR 2019 • Sen Zhao, Mahdi Milani Fard, Harikrishna Narasimhan, Maya Gupta

Real-world machine learning applications often have complex test metrics, and may have training and test data that are not identically distributed.

2 code implementations • 12 Jun 2017 • Paul Dütting, Zhe Feng, Harikrishna Narasimhan, David C. Parkes, Sai Srivatsa Ravindranath

Designing an incentive compatible auction that maximizes expected revenue is an intricate task.

no code implementations • 13 May 2016 • Harikrishna Narasimhan, Shivani Agarwal

Increasingly, however, in several applications, ranging from ranking to biometric screening to medicine, performance is measured not in terms of the full area under the ROC curve, but in terms of the \emph{partial} area under the ROC curve between two false positive rates.

no code implementations • 13 May 2016 • Purushottam Kar, Shuai Li, Harikrishna Narasimhan, Sanjay Chawla, Fabrizio Sebastiani

The estimation of class prevalence, i.e., the fraction of a population that belongs to a certain class, is a very useful tool in data analytics and learning, and finds applications in many domains such as sentiment analysis, epidemiology, etc.
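A standard baseline for this problem — the classical "adjusted classify and count" estimator, shown here as background rather than as the paper's method — inverts a classifier's known error rates to recover the true prevalence from its raw positive-prediction rate:

```python
def adjusted_count(pred_pos_rate, tpr, fpr):
    # Invert  pred_pos_rate = tpr * p + fpr * (1 - p)
    # to recover the positive-class prevalence p, clipped to [0, 1].
    p = (pred_pos_rate - fpr) / (tpr - fpr)
    return min(max(p, 0.0), 1.0)

# Hypothetical numbers: the classifier flags 40% of the population as
# positive, with TPR = 0.8 and FPR = 0.1 on held-out data.
est = adjusted_count(0.40, tpr=0.8, fpr=0.1)
```

The naive "classify and count" estimate (here 0.40) is biased whenever TPR < 1 or FPR > 0; the adjustment corrects for that bias, at the cost of needing reliable TPR/FPR estimates.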

no code implementations • NeurIPS 2015 • Harikrishna Narasimhan, David C. Parkes, Yaron Singer

We establish PAC learnability of influence functions for three common influence models, namely, the Linear Threshold (LT), Independent Cascade (IC) and Voter models, and present concrete sample complexity results in each case.

no code implementations • 26 May 2015 • Harikrishna Narasimhan, Purushottam Kar, Prateek Jain

Modern classification problems frequently present mild to severe label imbalance as well as specific requirements on classification characteristics, and require optimizing performance measures that are non-decomposable over the dataset, such as F-measure.

no code implementations • 26 May 2015 • Purushottam Kar, Harikrishna Narasimhan, Prateek Jain

At the heart of our results is a family of truly upper bounding surrogates for prec@k. These surrogates are motivated in a principled manner and enjoy attractive properties such as consistency to prec@k under various natural margin/noise conditions.
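For reference, prec@k itself — precision among the k highest-scoring examples — is straightforward to compute; the toy labels and scores below are hypothetical:

```python
import numpy as np

def prec_at_k(scores, y, k):
    # Precision among the k highest-scoring examples.
    top = np.argsort(scores)[::-1][:k]
    return np.mean(y[top])

y = np.array([1, 0, 1, 1, 0, 0])
s = np.array([0.9, 0.8, 0.7, 0.2, 0.6, 0.1])
p = prec_at_k(s, y, 3)  # top-3 by score are indices 0, 1, 2
```

The difficulty the paper addresses is that this quantity depends on the rank ordering of the scores, so it is non-differentiable and non-decomposable, hence the need for upper-bounding surrogates.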

no code implementations • 1 Jan 2015 • Harish G. Ramaswamy, Harikrishna Narasimhan, Shivani Agarwal

In this paper, we provide a unified framework for analysing a multi-class non-decomposable performance metric, where the problem of finding the optimal classifier for the performance metric is viewed as an optimization problem over the space of all confusion matrices achievable under the given distribution.

no code implementations • NeurIPS 2014 • Harikrishna Narasimhan, Rohit Vaish, Shivani Agarwal

In this work, we consider plug-in algorithms that learn a classifier by applying an empirically determined threshold to a suitable `estimate' of the class probability, and provide a general methodology to show consistency of these methods for any non-decomposable measure that can be expressed as a continuous function of true positive rate (TPR) and true negative rate (TNR), and for which the Bayes optimal classifier is the class probability function thresholded suitably.
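A minimal sketch of such a plug-in method, assuming access to (hypothetical, roughly calibrated) class-probability estimates: sweep candidate thresholds and keep the one maximizing the empirical metric, here the G-mean √(TPR·TNR):

```python
import numpy as np

def plugin_threshold(prob, y, metric):
    # Sweep thresholds over the observed probability estimates and keep
    # the one maximizing the empirical metric(TPR, TNR).
    best_t, best_v = 0.5, -1.0
    for t in np.unique(prob):
        pred = prob >= t
        tpr = np.mean(pred[y == 1]) if np.any(y == 1) else 0.0
        tnr = np.mean(~pred[y == 0]) if np.any(y == 0) else 0.0
        v = metric(tpr, tnr)
        if v > best_v:
            best_t, best_v = t, v
    return best_t, best_v

gmean = lambda tpr, tnr: (tpr * tnr) ** 0.5

# Hypothetical imbalanced sample with noisy but informative probabilities.
rng = np.random.default_rng(1)
y = (rng.random(1000) < 0.1).astype(int)  # ~10% positives
prob = np.clip(0.1 + 0.6 * y + 0.2 * rng.normal(size=1000), 0.0, 1.0)

t, v = plugin_threshold(prob, y, gmean)
```

Note how the selected threshold ends up well below 0.5: for metrics like G-mean on imbalanced data, the optimal threshold on the class probability is generally not 0.5, which is exactly why the threshold must be determined empirically.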

no code implementations • NeurIPS 2014 • Purushottam Kar, Harikrishna Narasimhan, Prateek Jain

In this work we initiate a study of online learning techniques for such non-decomposable loss functions with an aim to enable incremental learning as well as design scalable solvers for batch problems.

no code implementations • NeurIPS 2013 • Harikrishna Narasimhan, Shivani Agarwal

It is known that a good binary CPE model can be used to obtain a good binary classification model (by thresholding at 0.5), and also to obtain a good bipartite ranking model (by using the CPE model directly as a ranking model); it is also known that a binary classification model does not necessarily yield a CPE model.
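The two easy directions can be shown with a toy CPE model (values hypothetical): thresholding the estimated probabilities at 0.5 gives a classifier, and sorting by them gives a bipartite ranking, whereas a hard classifier alone cannot recover the underlying scores.

```python
# Hypothetical CPE outputs eta(x) ≈ P(y = 1 | x) for four examples.
eta = {"a": 0.9, "b": 0.6, "c": 0.3, "d": 0.1}

# CPE -> classifier: threshold at 0.5.
classify = {x: int(p > 0.5) for x, p in eta.items()}

# CPE -> bipartite ranking: sort examples by estimated probability.
ranking = sorted(eta, key=eta.get, reverse=True)
```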

Papers With Code is a free resource with all data licensed under CC-BY-SA.