Search Results for author: Harikrishna Narasimhan

Found 33 papers, 10 papers with code

On the Relationship Between Binary Classification, Bipartite Ranking, and Binary Class Probability Estimation

no code implementations NeurIPS 2013 Harikrishna Narasimhan, Shivani Agarwal

It is known that a good binary CPE model can be used to obtain a good binary classification model (by thresholding at 0.5), and also to obtain a good bipartite ranking model (by using the CPE model directly as a ranking model); it is also known that a binary classification model does not necessarily yield a CPE model.

Binary Classification Classification +1
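
A minimal sketch of the two reductions described in the abstract, using a logistic-regression model as the CPE; the 0.5 threshold and the score-based ranking follow the abstract, while the dataset and model choice are purely illustrative:

```python
# Sketch: a binary CPE model reused for classification (threshold at 0.5)
# and for bipartite ranking (sort by predicted probability).
# Dataset and model choice are illustrative, not from the paper.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, random_state=0)
cpe = LogisticRegression().fit(X, y)          # class probability estimator
p = cpe.predict_proba(X)[:, 1]                # estimated P(y=1 | x)

y_hat = (p >= 0.5).astype(int)                # classification by thresholding
ranking = np.argsort(-p)                      # bipartite ranking by CPE score
```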

Online and Stochastic Gradient Methods for Non-decomposable Loss Functions

no code implementations NeurIPS 2014 Purushottam Kar, Harikrishna Narasimhan, Prateek Jain

In this work we initiate a study of online learning techniques for such non-decomposable loss functions with an aim to enable incremental learning as well as design scalable solvers for batch problems.

Incremental Learning LEMMA

On the Statistical Consistency of Plug-in Classifiers for Non-decomposable Performance Measures

no code implementations NeurIPS 2014 Harikrishna Narasimhan, Rohit Vaish, Shivani Agarwal

In this work, we consider plug-in algorithms that learn a classifier by applying an empirically determined threshold to a suitable estimate of the class probability. We provide a general methodology for showing consistency of these methods for any non-decomposable measure that can be expressed as a continuous function of the true positive rate (TPR) and true negative rate (TNR), and for which the Bayes-optimal classifier is the class probability function thresholded suitably.

Retrieval Text Retrieval
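
A hypothetical sketch of the plug-in recipe described above: estimate class probabilities, then choose the threshold that empirically maximizes a TPR/TNR-based measure. G-mean is used here as one example of such a measure; the probability estimates `p` and labels `y` are assumed to come from a validation split:

```python
# Sketch of a plug-in classifier: threshold a class-probability estimate
# at the value that empirically maximizes a TPR/TNR-based measure.
# G-mean = sqrt(TPR * TNR) is used as an illustrative example.
import numpy as np

def best_threshold(p, y, metric):
    """Search candidate thresholds, return the empirically best one."""
    thresholds = np.unique(p)
    scores = []
    for t in thresholds:
        y_hat = (p >= t).astype(int)
        tpr = np.mean(y_hat[y == 1] == 1)     # true positive rate
        tnr = np.mean(y_hat[y == 0] == 0)     # true negative rate
        scores.append(metric(tpr, tnr))
    return thresholds[int(np.argmax(scores))]

g_mean = lambda tpr, tnr: np.sqrt(tpr * tnr)
# Usage: t = best_threshold(p, y, g_mean) on a validation split.
```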

Consistent Classification Algorithms for Multi-class Non-Decomposable Performance Metrics

no code implementations 1 Jan 2015 Harish G. Ramaswamy, Harikrishna Narasimhan, Shivani Agarwal

In this paper, we provide a unified framework for analysing a multi-class non-decomposable performance metric, where the problem of finding the optimal classifier for the performance metric is viewed as an optimization problem over the space of all confusion matrices achievable under the given distribution.

Classification General Classification +2
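
The central object in this framework is the confusion matrix. A minimal sketch of viewing a non-decomposable multiclass metric as a function of that matrix (the metric shown, macro-F1, is illustrative; the paper treats general metrics of this form):

```python
# Sketch: a non-decomposable performance metric as a function of the
# (normalized) multiclass confusion matrix, as in the paper's framework.
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes):
    C = np.zeros((n_classes, n_classes))
    for t, p in zip(y_true, y_pred):
        C[t, p] += 1
    return C / len(y_true)                    # entries sum to 1

def macro_f1_from_confusion(C):
    """Illustrative metric defined directly on the confusion matrix."""
    precision = np.diag(C) / np.maximum(C.sum(axis=0), 1e-12)
    recall = np.diag(C) / np.maximum(C.sum(axis=1), 1e-12)
    f1 = 2 * precision * recall / np.maximum(precision + recall, 1e-12)
    return f1.mean()
```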

Surrogate Functions for Maximizing Precision at the Top

no code implementations 26 May 2015 Purushottam Kar, Harikrishna Narasimhan, Prateek Jain

At the heart of our results is a family of truly upper-bounding surrogates for prec@k. These surrogates are motivated in a principled manner and enjoy attractive properties such as consistency with respect to prec@k under various natural margin/noise conditions.

Multi-Label Classification
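
To make the notion of a surrogate concrete, here is prec@k together with one crude hinge-style bound; this is purely illustrative, since the paper constructs a different and tighter family of surrogates with consistency guarantees:

```python
# Sketch: prec@k and a simple hinge-style surrogate. The hinge dominates
# the 0-1 loss at threshold 0, so this bounds the misclassification rate
# among the top-k items, a crude proxy for the prec@k loss. Illustrative
# only; the paper's surrogate family is constructed differently.
import numpy as np

def prec_at_k(scores, y, k):
    top_k = np.argsort(-scores)[:k]
    return np.mean(y[top_k] == 1)

def hinge_surrogate_at_k(scores, y, k):
    top_k = np.argsort(-scores)[:k]
    margins = (2 * y[top_k] - 1) * scores[top_k]   # +/-1 label margins
    return np.mean(np.maximum(0.0, 1.0 - margins))
```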

Optimizing Non-decomposable Performance Measures: A Tale of Two Classes

no code implementations 26 May 2015 Harikrishna Narasimhan, Purushottam Kar, Prateek Jain

Modern classification problems frequently present mild to severe label imbalance as well as specific requirements on classification characteristics, and require optimizing performance measures that are non-decomposable over the dataset, such as F-measure.

General Classification Vocal Bursts Valence Prediction

Learnability of Influence in Networks

no code implementations NeurIPS 2015 Harikrishna Narasimhan, David C. Parkes, Yaron Singer

We establish PAC learnability of influence functions for three common influence models, namely, the Linear Threshold (LT), Independent Cascade (IC) and Voter models, and present concrete sample complexity results in each case.
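
For reference, the influence function of the Independent Cascade model can be estimated by Monte Carlo simulation; a minimal sketch, with the graph and edge probabilities made up (the paper's concern is PAC-learning such functions from observed cascades, not simulating a known model):

```python
# Sketch: Monte Carlo estimate of the influence (expected spread) of a
# seed set under the Independent Cascade (IC) model.
import random

def ic_spread(graph, seeds, trials=1000):
    """graph: dict node -> list of (neighbor, activation_probability)."""
    total = 0
    for _ in range(trials):
        active, frontier = set(seeds), list(seeds)
        while frontier:
            node = frontier.pop()
            for nbr, p in graph.get(node, []):
                if nbr not in active and random.random() < p:
                    active.add(nbr)
                    frontier.append(nbr)
        total += len(active)
    return total / trials

graph = {0: [(1, 0.5), (2, 0.3)], 1: [(2, 0.4)], 2: []}
print(ic_spread(graph, seeds={0}))
```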

Online Optimization Methods for the Quantification Problem

no code implementations 13 May 2016 Purushottam Kar, Shuai Li, Harikrishna Narasimhan, Sanjay Chawla, Fabrizio Sebastiani

The estimation of class prevalence, i.e., the fraction of a population that belongs to a certain class, is a very useful tool in data analytics and learning, and finds applications in many domains such as sentiment analysis, epidemiology, etc.

Epidemiology Sentiment Analysis
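
To anchor the quantification setting, here is the classic adjusted classify-and-count estimator; a minimal sketch with hypothetical predictions and rates (the paper goes further, optimizing quantification-specific losses online):

```python
# Sketch: class-prevalence estimation via classify-and-count (CC) and its
# TPR/FPR-adjusted variant (ACC). In practice tpr and fpr are estimated
# on held-out labeled data; the values here are illustrative.
import numpy as np

def adjusted_count(y_pred, tpr, fpr):
    """ACC estimator: correct the raw positive rate using known TPR/FPR."""
    raw = np.mean(y_pred)                     # classify-and-count estimate
    return np.clip((raw - fpr) / max(tpr - fpr, 1e-12), 0.0, 1.0)

y_pred = np.array([1, 0, 1, 1, 0, 0, 1, 0])  # hypothetical predictions
print(adjusted_count(y_pred, tpr=0.9, fpr=0.2))
```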

Support Vector Algorithms for Optimizing the Partial Area Under the ROC Curve

no code implementations 13 May 2016 Harikrishna Narasimhan, Shivani Agarwal

Increasingly, however, in several applications, ranging from ranking to biometric screening to medicine, performance is measured not in terms of the full area under the ROC curve, but in terms of the partial area under the ROC curve between two false positive rates.

Combinatorial Optimization
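
A sketch of the evaluation quantity itself: the partial AUC between false positive rates alpha and beta, computed by integrating the empirical ROC curve (the paper's contribution is SVM-style surrogates for optimizing this quantity, which this evaluation-only sketch does not show):

```python
# Sketch: empirical partial AUC between FPRs alpha and beta, computed by
# trapezoidal integration of the ROC curve, normalized by (beta - alpha).
import numpy as np
from sklearn.metrics import auc, roc_curve

def partial_auc(y, scores, alpha, beta):
    fpr, tpr, _ = roc_curve(y, scores)
    grid = np.linspace(alpha, beta, 200)
    tpr_interp = np.interp(grid, fpr, tpr)    # ROC as a function of FPR
    return auc(grid, tpr_interp) / (beta - alpha)
```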

Metric-Optimized Example Weights

no code implementations ICLR 2019 Sen Zhao, Mahdi Milani Fard, Harikrishna Narasimhan, Maya Gupta

Real-world machine learning applications often have complex test metrics, and may have training and test data that are not identically distributed.

Pairwise Fairness for Ranking and Regression

1 code implementation 12 Jun 2019 Harikrishna Narasimhan, Andrew Cotter, Maya Gupta, Serena Wang

We present pairwise fairness metrics for ranking models and regression models that form analogues of statistical fairness notions such as equal opportunity, equal accuracy, and statistical parity.

Fairness General Classification +1
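
A sketch of a pairwise accuracy notion in the spirit of the abstract: among pairs where one item is truly preferred over another, compare the rate at which the model orders them correctly across groups. The exact metric definitions in the paper differ; attributing each pair to the group of the higher-labeled item is an assumption made here for illustration:

```python
# Sketch: group-conditioned pairwise accuracy for a ranking model.
# For pairs (i, j) with y[i] > y[j], check how often the model scores
# i above j, restricted to pairs whose higher-labeled item is in group g.
import numpy as np

def pairwise_accuracy(scores, y, group, g):
    correct, total = 0, 0
    for i in range(len(y)):
        for j in range(len(y)):
            if y[i] > y[j] and group[i] == g:
                total += 1
                correct += scores[i] > scores[j]
    return correct / max(total, 1)

# A gap between pairwise_accuracy(..., g=0) and pairwise_accuracy(..., g=1)
# signals an equal-opportunity-style violation for ranking.
```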

Optimizing Generalized Rate Metrics through Game Equilibrium

no code implementations 6 Sep 2019 Harikrishna Narasimhan, Andrew Cotter, Maya Gupta

We present a general framework for solving a large class of learning problems with non-linear functions of classification rates.

Fairness

Optimizing Generalized Rate Metrics with Three Players

2 code implementations NeurIPS 2019 Harikrishna Narasimhan, Andrew Cotter, Maya Gupta

We present a general framework for solving a large class of learning problems with non-linear functions of classification rates.

Fairness

On Making Stochastic Classifiers Deterministic

1 code implementation NeurIPS 2019 Andrew Cotter, Maya Gupta, Harikrishna Narasimhan

Stochastic classifiers arise in a number of machine learning problems, and have become especially prominent of late, as they often result from constrained optimization problems, e.g., for fairness, churn, or custom losses.

Fairness

Optimizing Black-box Metrics with Adaptive Surrogates

no code implementations ICML 2020 Qijia Jiang, Olaoluwa Adigun, Harikrishna Narasimhan, Mahdi Milani Fard, Maya Gupta

We address the problem of training models with black-box and hard-to-optimize metrics by expressing the metric as a monotonic function of a small number of easy-to-optimize surrogates.

Robust Optimization for Fairness with Noisy Protected Groups

1 code implementation NeurIPS 2020 Serena Wang, Wenshuo Guo, Harikrishna Narasimhan, Andrew Cotter, Maya Gupta, Michael I. Jordan

Second, we introduce two new approaches using robust optimization that, unlike the naive approach of relying only on the noisy groups $\hat{G}$, are guaranteed to satisfy fairness criteria on the true protected groups $G$ while minimizing a training objective.

Fairness

Quadratic Metric Elicitation for Fairness and Beyond

1 code implementation 3 Nov 2020 Gaurush Hiranandani, Jatin Mathur, Harikrishna Narasimhan, Oluwasanmi Koyejo

Metric elicitation is a recent framework for eliciting classification performance metrics that best reflect implicit user preferences based on the task and context.

Fairness

Consistent Plug-in Classifiers for Complex Objectives and Constraints

no code implementations NeurIPS 2020 Shiv Kumar Tavker, Harish Guruprasad Ramaswamy, Harikrishna Narasimhan

We present a statistically consistent algorithm for constrained classification problems where the objective (e.g., F-measure, G-mean) and the constraints (e.g., demographic parity, coverage) are defined by general functions of the confusion matrix.

Approximate Heavily-Constrained Learning with Lagrange Multiplier Models

no code implementations NeurIPS 2020 Harikrishna Narasimhan, Andrew Cotter, Yichen Zhou, Serena Wang, Wenshuo Guo

In machine learning applications such as ranking fairness or fairness over intersectional groups, one often encounters optimization problems with an extremely large number of constraints.

Fairness

Distilling Double Descent

no code implementations 13 Feb 2021 Andrew Cotter, Aditya Krishna Menon, Harikrishna Narasimhan, Ankit Singh Rawat, Sashank J. Reddi, Yichen Zhou

Distillation is the technique of training a "student" model based on examples that are labeled by a separate "teacher" model, which itself is trained on a labeled dataset.
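
A minimal sketch of the distillation setup the abstract defines, using the standard temperature-scaled soft-label loss; this is the generic Hinton-style objective, not necessarily the exact variant studied in the paper, and the logits are placeholders:

```python
# Sketch: standard soft-label distillation loss. teacher_logits come from
# the separately trained teacher; the student is trained to match the
# teacher's (temperature-softened) output distribution.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    log_p_student = F.log_softmax(student_logits / temperature, dim=-1)
    p_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    # KL between teacher and student distributions, rescaled by T^2
    return F.kl_div(log_p_student, p_teacher,
                    reduction="batchmean") * temperature ** 2
```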

Churn Reduction via Distillation

no code implementations ICLR 2022 Heinrich Jiang, Harikrishna Narasimhan, Dara Bahri, Andrew Cotter, Afshin Rostamizadeh

In real-world systems, models are frequently updated as more data becomes available, and in addition to achieving high accuracy, the goal is to also maintain a low difference in predictions compared to the base model (i.e., predictive "churn").
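
The quantity being controlled is simple to state; a minimal sketch of predictive churn between two models (the paper's contribution is reducing it via distillation, which this sketch does not show):

```python
# Sketch: predictive churn between a base model and an updated model,
# i.e. the fraction of examples on which their predictions disagree.
import numpy as np

def churn(base_preds, new_preds):
    base_preds, new_preds = np.asarray(base_preds), np.asarray(new_preds)
    return np.mean(base_preds != new_preds)

print(churn([1, 0, 1, 1], [1, 1, 1, 0]))     # -> 0.5
```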

Training Over-parameterized Models with Non-decomposable Objectives

no code implementations NeurIPS 2021 Harikrishna Narasimhan, Aditya Krishna Menon

Many modern machine learning applications come with complex and nuanced design goals such as minimizing the worst-case error, satisfying a given precision or recall target, or enforcing group-fairness constraints.

Fairness

Implicit Rate-Constrained Optimization of Non-decomposable Objectives

6 code implementations 23 Jul 2021 Abhishek Kumar, Harikrishna Narasimhan, Andrew Cotter

We consider a popular family of constrained optimization problems arising in machine learning that involve optimizing a non-decomposable evaluation metric with a certain thresholded form, while constraining another metric of interest.
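
One simple instance of this problem family: maximize recall subject to a false positive rate constraint, with the constraint met by the choice of decision threshold. The sketch below only evaluates the implicitly defined threshold; the paper's contribution is differentiating through it during training, which is not shown here:

```python
# Sketch: a thresholded rate constraint. Pick the smallest threshold whose
# empirical FPR is at most the target; recall at that threshold is the
# constrained objective of interest.
import numpy as np

def threshold_for_fpr(scores, y, max_fpr):
    neg = np.sort(scores[y == 0])
    k = int(np.floor(max_fpr * len(neg)))     # negatives allowed above t
    return neg[len(neg) - k - 1] if k < len(neg) else -np.inf

def recall_at(scores, y, t):
    return np.mean(scores[y == 1] > t)
```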

Robust Distillation for Worst-class Performance

no code implementations 13 Jun 2022 Serena Wang, Harikrishna Narasimhan, Yichen Zhou, Sara Hooker, Michal Lukasik, Aditya Krishna Menon

We show empirically that our robust distillation techniques not only achieve better worst-class performance, but also lead to Pareto improvement in the tradeoff between overall performance and worst-class performance compared to other baseline methods.

Knowledge Distillation

Consistent Multiclass Algorithms for Complex Metrics and Constraints

1 code implementation 18 Oct 2022 Harikrishna Narasimhan, Harish G. Ramaswamy, Shiv Kumar Tavker, Drona Khurana, Praneeth Netrapalli, Shivani Agarwal

We present consistent algorithms for multiclass learning with complex performance metrics and constraints, where the objective and constraints are defined by arbitrary functions of the confusion matrix.

Fairness

Plugin estimators for selective classification with out-of-distribution detection

no code implementations 29 Jan 2023 Harikrishna Narasimhan, Aditya Krishna Menon, Wittawat Jitkrittum, Sanjiv Kumar

Recent work on selective classification with OOD detection (SCOD) has argued for the unified study of these problems; however, the formal underpinnings of this problem are still nascent, and existing techniques are heuristic in nature.

Out-of-Distribution Detection Out of Distribution (OOD) Detection

When Does Confidence-Based Cascade Deferral Suffice?

no code implementations NeurIPS 2023 Wittawat Jitkrittum, Neha Gupta, Aditya Krishna Menon, Harikrishna Narasimhan, Ankit Singh Rawat, Sanjiv Kumar

Cascades are a classical strategy to enable inference cost to vary adaptively across samples, wherein a sequence of classifiers is invoked in turn.
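
A minimal sketch of the confidence-based deferral rule the title refers to: a cheap model answers when its maximum softmax probability clears a threshold, and defers otherwise. In a deployed cascade the large model would only be run on the deferred examples; both probability arrays here are placeholders:

```python
# Sketch: two-model cascade with confidence-based deferral.
import numpy as np

def cascade_predict(probs_small, probs_large, threshold=0.9):
    confident = probs_small.max(axis=-1) >= threshold
    preds = np.where(confident,
                     probs_small.argmax(axis=-1),
                     probs_large.argmax(axis=-1))
    return preds, confident.mean()   # predictions, fraction answered early
```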

Distributionally Robust Post-hoc Classifiers under Prior Shifts

1 code implementation 16 Sep 2023 Jiaheng Wei, Harikrishna Narasimhan, Ehsan Amid, Wen-Sheng Chu, Yang Liu, Abhishek Kumar

We investigate the problem of training models that are robust to shifts caused by changes in the distribution of class-priors or group-priors.
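
To anchor the setting, here is the standard post-hoc correction for a known class-prior shift: reweight predicted probabilities by the ratio of target to training priors. The paper's distributionally robust method handles the harder case where the target prior is not a single known value:

```python
# Sketch: post-hoc adjustment of classifier probabilities when class
# priors shift from train_prior to test_prior (assumed known here).
import numpy as np

def adjust_for_prior_shift(probs, train_prior, test_prior):
    w = np.asarray(test_prior) / np.asarray(train_prior)
    adjusted = probs * w                      # reweight each class
    return adjusted / adjusted.sum(axis=-1, keepdims=True)
```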

Language Model Cascades: Token-level uncertainty and beyond

no code implementations 15 Apr 2024 Neha Gupta, Harikrishna Narasimhan, Wittawat Jitkrittum, Ankit Singh Rawat, Aditya Krishna Menon, Sanjiv Kumar

While the principles underpinning cascading are well studied for classification tasks, where deferral based on predicted class uncertainty is favored both theoretically and practically, a similar understanding is lacking for generative LM tasks.

Language Modelling
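
A sketch of the kind of sequence-level deferral signal the abstract contrasts with the classification case: aggregate the small model's token-level log-probabilities and defer when the aggregate is low. The mean aggregation and threshold below are placeholders; how best to aggregate token uncertainty is precisely what the paper examines:

```python
# Sketch: deferral in an LM cascade from token-level uncertainty.
# token_logprobs are the small model's log-probabilities for the tokens
# it generated; the aggregation rule (mean) is illustrative only.
import numpy as np

def should_defer(token_logprobs, threshold=-1.0):
    seq_confidence = np.mean(token_logprobs)  # average token log-prob
    return seq_confidence < threshold

print(should_defer([-0.1, -0.3, -2.5]))      # mean ~ -0.97 -> False
```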
