Search Results for author: Harikrishna Narasimhan

Found 26 papers, 7 papers with code

Implicit Rate-Constrained Optimization of Non-decomposable Objectives

2 code implementations • 23 Jul 2021 • Abhishek Kumar, Harikrishna Narasimhan, Andrew Cotter

We consider a popular family of constrained optimization problems arising in machine learning that involve optimizing a non-decomposable evaluation metric with a certain thresholded form, while constraining another metric of interest.
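As an illustrative sketch of the thresholded setting described above (not the paper's implicit-function algorithm; the helper name and toy data are invented), one can scan decision thresholds on held-out data to maximize one rate metric subject to a floor on another, e.g. recall subject to a precision constraint:

```python
def best_threshold(scores, labels, min_precision):
    """Scan candidate thresholds; keep the one with the best recall
    among those whose empirical precision meets the floor."""
    best_t, best_recall = None, -1.0
    for t in sorted(set(scores)):
        preds = [1 if s >= t else 0 for s in scores]
        tp = sum(p and y for p, y in zip(preds, labels))
        fp = sum(p and not y for p, y in zip(preds, labels))
        fn = sum(not p and y for p, y in zip(preds, labels))
        precision = tp / (tp + fp) if tp + fp else 1.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        if precision >= min_precision and recall > best_recall:
            best_t, best_recall = t, recall
    return best_t, best_recall
```

The paper's contribution is to make such a threshold differentiable so it can be trained end to end; the grid scan above only conveys the constrained objective itself.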

Training Over-parameterized Models with Non-decomposable Objectives

no code implementations • 9 Jul 2021 • Harikrishna Narasimhan, Aditya Krishna Menon

Many modern machine learning applications come with complex and nuanced design goals such as minimizing the worst-case error, satisfying a given precision or recall target, or enforcing group-fairness constraints.


Churn Reduction via Distillation

no code implementations • 4 Jun 2021 • Heinrich Jiang, Harikrishna Narasimhan, Dara Bahri, Andrew Cotter, Afshin Rostamizadeh

In real-world systems, models are frequently updated as more data becomes available, and in addition to achieving high accuracy, the goal is to also maintain a low difference in predictions compared to the base model (i.e., predictive "churn").

Distilling Double Descent

no code implementations • 13 Feb 2021 • Andrew Cotter, Aditya Krishna Menon, Harikrishna Narasimhan, Ankit Singh Rawat, Sashank J. Reddi, Yichen Zhou

Distillation is the technique of training a "student" model based on examples that are labeled by a separate "teacher" model, which itself is trained on a labeled dataset.
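A minimal sketch of the distillation setup described above (the sigmoid teacher and helper names are invented stand-ins, not the paper's models): the student is fit against the teacher's soft predictions rather than the original hard labels.

```python
import math

def teacher(x):
    """Stand-in for a trained teacher model: returns P(y=1 | x)."""
    return 1.0 / (1.0 + math.exp(-x))

def distill_dataset(inputs):
    """Relabel raw inputs with the teacher's soft predictions."""
    return [(x, teacher(x)) for x in inputs]

def distillation_loss(student_prob, teacher_prob, eps=1e-12):
    """Cross-entropy of the student's prediction against the soft label;
    minimized when the student matches the teacher."""
    return -(teacher_prob * math.log(student_prob + eps)
             + (1.0 - teacher_prob) * math.log(1.0 - student_prob + eps))
```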

Consistent Plug-in Classifiers for Complex Objectives and Constraints

no code implementations • NeurIPS 2020 • Shiv Kumar Tavker, Harish Guruprasad Ramaswamy, Harikrishna Narasimhan

We present a statistically consistent algorithm for constrained classification problems where the objective (e.g., F-measure, G-mean) and the constraints (e.g., demographic parity, coverage) are defined by general functions of the confusion matrix.

Approximate Heavily-Constrained Learning with Lagrange Multiplier Models

no code implementations • NeurIPS 2020 • Harikrishna Narasimhan, Andrew Cotter, Yichen Zhou, Serena Wang, Wenshuo Guo

In machine learning applications such as ranking fairness or fairness over intersectional groups, one often encounters optimization problems with an extremely large number of constraints.


Quadratic Metric Elicitation for Fairness and Beyond

no code implementations • 3 Nov 2020 • Gaurush Hiranandani, Jatin Mathur, Harikrishna Narasimhan, Oluwasanmi Koyejo

Metric elicitation is a recent framework for eliciting performance metrics that best reflect implicit user preferences based on the application and context.


Robust Optimization for Fairness with Noisy Protected Groups

1 code implementation • NeurIPS 2020 • Serena Wang, Wenshuo Guo, Harikrishna Narasimhan, Andrew Cotter, Maya Gupta, Michael I. Jordan

Second, we introduce two new approaches using robust optimization that, unlike the naive approach of only relying on $\hat{G}$, are guaranteed to satisfy fairness criteria on the true protected groups G while minimizing a training objective.


Optimizing Black-box Metrics with Adaptive Surrogates

no code implementations • ICML 2020 • Qijia Jiang, Olaoluwa Adigun, Harikrishna Narasimhan, Mahdi Milani Fard, Maya Gupta

We address the problem of training models with black-box and hard-to-optimize metrics by expressing the metric as a monotonic function of a small number of easy-to-optimize surrogates.

On Making Stochastic Classifiers Deterministic

1 code implementation • NeurIPS 2019 • Andrew Cotter, Maya Gupta, Harikrishna Narasimhan

Stochastic classifiers arise in a number of machine learning problems, and have become especially prominent of late, as they often result from constrained optimization problems, e.g., for fairness, churn, or custom losses.


Optimizing Generalized Rate Metrics with Three Players

1 code implementation • NeurIPS 2019 • Harikrishna Narasimhan, Andrew Cotter, Maya Gupta

We present a general framework for solving a large class of learning problems with non-linear functions of classification rates.


Optimizing Generalized Rate Metrics through Game Equilibrium

no code implementations • 6 Sep 2019 • Harikrishna Narasimhan, Andrew Cotter, Maya Gupta

We present a general framework for solving a large class of learning problems with non-linear functions of classification rates.


Pairwise Fairness for Ranking and Regression

1 code implementation • 12 Jun 2019 • Harikrishna Narasimhan, Andrew Cotter, Maya Gupta, Serena Wang

We present pairwise fairness metrics for ranking models and regression models that form analogues of statistical fairness notions such as equal opportunity, equal accuracy, and statistical parity.

Tasks: Fairness, General Classification
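One such pairwise metric can be sketched directly (function name and toy data invented for this illustration): for each positive example in a given group, check how often the model scores it above the negative examples.

```python
def pairwise_accuracy(scores, labels, groups, group):
    """Fraction of (positive-in-group, negative) pairs the scores order
    correctly; comparing this across groups gives a pairwise analogue
    of equal opportunity."""
    correct = total = 0
    for si, yi, gi in zip(scores, labels, groups):
        if yi != 1 or gi != group:
            continue
        for sj, yj in zip(scores, labels):
            if yj == 0:
                total += 1
                correct += si > sj
    return correct / total if total else 0.0
```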

Metric-Optimized Example Weights

no code implementations • ICLR 2019 • Sen Zhao, Mahdi Milani Fard, Harikrishna Narasimhan, Maya Gupta

Real-world machine learning applications often have complex test metrics, and may have training and test data that are not identically distributed.

Optimal Auctions through Deep Learning

2 code implementations • 12 Jun 2017 • Paul Dütting, Zhe Feng, Harikrishna Narasimhan, David C. Parkes, Sai Srivatsa Ravindranath

Designing an incentive compatible auction that maximizes expected revenue is an intricate task.

Tasks: Generalization Bounds

Support Vector Algorithms for Optimizing the Partial Area Under the ROC Curve

no code implementations • 13 May 2016 • Harikrishna Narasimhan, Shivani Agarwal

Increasingly, however, in several applications, ranging from ranking to biometric screening to medicine, performance is measured not in terms of the full area under the ROC curve, but in terms of the \emph{partial} area under the ROC curve between two false positive rates.

Tasks: Combinatorial Optimization
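A rough empirical version of the quantity being optimized above can be sketched as follows (names invented; this is the evaluation metric, not the paper's SVM algorithm): restrict attention to the negatives whose ranks fall inside the target false-positive-rate band, and count correctly ordered pairs.

```python
def partial_auc(pos_scores, neg_scores, alpha, beta):
    """Empirical partial AUC over the FPR range [alpha, beta]:
    pairwise accuracy against the negatives ranked in that band."""
    neg_sorted = sorted(neg_scores, reverse=True)
    n = len(neg_sorted)
    band = neg_sorted[int(alpha * n):int(beta * n)]
    if not band or not pos_scores:
        return 0.0
    wins = sum(p > s for p in pos_scores for s in band)
    return wins / (len(pos_scores) * len(band))
```

With alpha=0 and beta=1 this reduces to the usual full AUC.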

Online Optimization Methods for the Quantification Problem

no code implementations • 13 May 2016 • Purushottam Kar, Shuai Li, Harikrishna Narasimhan, Sanjay Chawla, Fabrizio Sebastiani

The estimation of class prevalence, i.e., the fraction of a population that belongs to a certain class, is a very useful tool in data analytics and learning, and finds applications in many domains such as sentiment analysis, epidemiology, etc.

Tasks: Epidemiology, Sentiment Analysis
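Two classic baselines for the prevalence-estimation problem above, "classify and count" and its adjusted variant, can be sketched in a few lines (standard quantification folklore, not the paper's online methods; names invented):

```python
def classify_and_count(preds):
    """Naive prevalence estimate: fraction of examples predicted positive."""
    return sum(preds) / len(preds)

def adjusted_count(preds, tpr, fpr):
    """Correct the naive estimate using the classifier's known TPR and FPR:
    E[classify-and-count] = p*tpr + (1-p)*fpr, so solve for p."""
    cc = classify_and_count(preds)
    return (cc - fpr) / (tpr - fpr)
```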

Learnability of Influence in Networks

no code implementations • NeurIPS 2015 • Harikrishna Narasimhan, David C. Parkes, Yaron Singer

We establish PAC learnability of influence functions for three common influence models, namely, the Linear Threshold (LT), Independent Cascade (IC) and Voter models, and present concrete sample complexity results in each case.

Optimizing Non-decomposable Performance Measures: A Tale of Two Classes

no code implementations • 26 May 2015 • Harikrishna Narasimhan, Purushottam Kar, Prateek Jain

Modern classification problems frequently present mild to severe label imbalance as well as specific requirements on classification characteristics, and require optimizing performance measures that are non-decomposable over the dataset, such as F-measure.

Tasks: General Classification

Surrogate Functions for Maximizing Precision at the Top

no code implementations • 26 May 2015 • Purushottam Kar, Harikrishna Narasimhan, Prateek Jain

At the heart of our results is a family of truly upper bounding surrogates for prec@k. These surrogates are motivated in a principled manner and enjoy attractive properties such as consistency to prec@k under various natural margin/noise conditions.

Tasks: Multi-Label Classification
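The target metric itself is easy to state; a minimal sketch (function name invented) of empirical precision at the top k:

```python
def precision_at_k(scores, labels, k):
    """Fraction of true positives among the k highest-scoring examples."""
    top = sorted(range(len(scores)), key=lambda i: -scores[i])[:k]
    return sum(labels[i] for i in top) / k
```

The paper's surrogates exist precisely because this quantity is non-decomposable and non-smooth in the scores.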

Consistent Classification Algorithms for Multi-class Non-Decomposable Performance Metrics

no code implementations • 1 Jan 2015 • Harish G. Ramaswamy, Harikrishna Narasimhan, Shivani Agarwal

In this paper, we provide a unified framework for analysing a multi-class non-decomposable performance metric, where the problem of finding the optimal classifier for the performance metric is viewed as an optimization problem over the space of all confusion matrices achievable under the given distribution.

Tasks: Classification, General Classification (+1 more)

On the Statistical Consistency of Plug-in Classifiers for Non-decomposable Performance Measures

no code implementations • NeurIPS 2014 • Harikrishna Narasimhan, Rohit Vaish, Shivani Agarwal

In this work, we consider plug-in algorithms that learn a classifier by applying an empirically determined threshold to a suitable `estimate' of the class probability, and provide a general methodology to show consistency of these methods for any non-decomposable measure that can be expressed as a continuous function of true positive rate (TPR) and true negative rate (TNR), and for which the Bayes optimal classifier is the class probability function thresholded suitably.
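A minimal sketch of the plug-in recipe described above, assuming class-probability estimates are already in hand (the grid search over empirical thresholds is an illustration of the idea, not the paper's exact procedure), here specialized to the F-measure:

```python
def plugin_threshold(probs, labels):
    """Pick the probability threshold that maximizes empirical F1
    on held-out class-probability estimates."""
    best_t, best_f1 = 0.5, -1.0
    for t in sorted(set(probs)):
        preds = [p >= t for p in probs]
        tp = sum(pr and y for pr, y in zip(preds, labels))
        fp = sum(pr and not y for pr, y in zip(preds, labels))
        fn = sum(not pr and y for pr, y in zip(preds, labels))
        f1 = 2 * tp / (2 * tp + fp + fn) if tp else 0.0
        if f1 > best_f1:
            best_t, best_f1 = t, f1
    return best_t, best_f1
```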

Online and Stochastic Gradient Methods for Non-decomposable Loss Functions

no code implementations • NeurIPS 2014 • Purushottam Kar, Harikrishna Narasimhan, Prateek Jain

In this work we initiate a study of online learning techniques for such non-decomposable loss functions with an aim to enable incremental learning as well as design scalable solvers for batch problems.

Tasks: Incremental Learning

On the Relationship Between Binary Classification, Bipartite Ranking, and Binary Class Probability Estimation

no code implementations • NeurIPS 2013 • Harikrishna Narasimhan, Shivani Agarwal

It is known that a good binary CPE model can be used to obtain a good binary classification model (by thresholding at 0.5), and also to obtain a good bipartite ranking model (by using the CPE model directly as a ranking model); it is also known that a binary classification model does not necessarily yield a CPE model.

Tasks: Classification, General Classification
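The two reductions mentioned above can be sketched directly (the toy `cpe` model below is invented for illustration): threshold the probability estimate at 0.5 for classification, or sort by it for bipartite ranking.

```python
def classify(cpe, x):
    """Binary classifier obtained from a CPE model by thresholding at 0.5."""
    return 1 if cpe(x) >= 0.5 else 0

def rank(cpe, xs):
    """Bipartite ranker obtained by sorting examples by estimated P(y=1|x)."""
    return sorted(xs, key=cpe, reverse=True)

cpe = lambda x: x  # toy CPE model on inputs in [0, 1]
```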
