Search Results for author: Krishnaram Kenthapadi

Found 21 papers, 7 papers with code

Multiaccurate Proxies for Downstream Fairness

no code implementations9 Jul 2021 Emily Diana, Wesley Gill, Michael Kearns, Krishnaram Kenthapadi, Aaron Roth, Saeed Sharifi-Malvajerdi

We study the problem of training a model that must obey demographic fairness conditions when the sensitive features are not available at training time -- in other words, how can we train a model to be fair by race when we don't have data about race?

Fairness, Generalization Bounds

On the Lack of Robust Interpretability of Neural Text Classifiers

no code implementations8 Jun 2021 Muhammad Bilal Zafar, Michele Donini, Dylan Slack, Cédric Archambeau, Sanjiv Das, Krishnaram Kenthapadi

With the ever-increasing complexity of neural language models, practitioners have turned to methods for understanding the predictions of these models.

On Measuring the Diversity of Organizational Networks

1 code implementation14 May 2021 Zeinab S. Jalali, Krishnaram Kenthapadi, Sucheta Soundarajan

The interaction patterns of employees in social and professional networks play an important role in the success of employees and organizations as a whole.

Differentially Private Query Release Through Adaptive Projection

1 code implementation11 Mar 2021 Sergul Aydore, William Brown, Michael Kearns, Krishnaram Kenthapadi, Luca Melis, Aaron Roth, Ankit Siva

We propose, implement, and evaluate a new algorithm for releasing answers to very large numbers of statistical queries like $k$-way marginals, subject to differential privacy.
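The paper's contribution is an adaptive-projection algorithm, which is not reproduced here; as background, the baseline it improves on can be sketched as answering a single $k$-way marginal with the classic Laplace mechanism (column names and domain sizes below are illustrative assumptions):

```python
import numpy as np

def private_marginal(data, cols, domain_sizes, epsilon, rng):
    """Answer one k-way marginal (a joint histogram over `cols`)
    with epsilon-differential privacy via the Laplace mechanism."""
    shape = [domain_sizes[c] for c in cols]
    counts = np.zeros(shape)
    for row in data:
        counts[tuple(row[c] for c in cols)] += 1
    # Under add/remove neighboring datasets, one person changes one
    # cell by 1, so the L1 sensitivity of the count vector is 1.
    noisy = counts + rng.laplace(scale=1.0 / epsilon, size=counts.shape)
    return noisy / len(data)  # noisy marginal, as fractions

rng = np.random.default_rng(0)
data = rng.integers(0, 2, size=(1000, 2))  # toy dataset: two binary columns
answer = private_marginal(data, cols=(0, 1), domain_sizes={0: 2, 1: 2},
                          epsilon=1.0, rng=rng)
```

Answering very large numbers of such queries this way wastes the privacy budget, which is the gap the paper's method targets.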

Towards Unbiased and Accurate Deferral to Multiple Experts

1 code implementation25 Feb 2021 Vijay Keswani, Matthew Lease, Krishnaram Kenthapadi

Machine learning models are often implemented in concert with humans in the pipeline, with the model having an option to defer to a domain expert in cases where it has low confidence in its inference.
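The simplest form of such a deferral option is a confidence threshold on the model's softmax output; this is a toy illustration of the setting, not the learned multi-expert framework the paper proposes (the threshold value is an assumption):

```python
import numpy as np

def predict_or_defer(probs, threshold=0.8):
    """Route each input: the model answers when its top softmax
    probability clears `threshold`; otherwise the input is deferred
    to a human expert."""
    decisions = []
    for p in probs:
        if p.max() >= threshold:
            decisions.append(("model", int(p.argmax())))
        else:
            decisions.append(("defer", None))
    return decisions

probs = np.array([[0.95, 0.05],   # confident -> model predicts class 0
                  [0.55, 0.45]])  # uncertain -> defer to an expert
print(predict_or_defer(probs))
# [('model', 0), ('defer', None)]
```

A fixed threshold ignores which expert receives the deferred case and how biased or accurate that expert is, which is precisely what the paper's approach accounts for.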


Defuse: Harnessing Unrestricted Adversarial Examples for Debugging Models Beyond Test Accuracy

no code implementations11 Feb 2021 Dylan Slack, Nathalie Rauschmayr, Krishnaram Kenthapadi

Each region contains a specific type of model bug; for instance, a misclassification region for an MNIST classifier contains a style of skinny 6 that the model mistakes as a 1.

Defuse: Debugging Classifiers Through Distilling Unrestricted Adversarial Examples

no code implementations1 Jan 2021 Dylan Z Slack, Nathalie Rauschmayr, Krishnaram Kenthapadi

As a route to better discover and fix model bugs, we propose failure scenarios: regions on the data manifold that are incorrectly classified by a model.

Minimax Group Fairness: Algorithms and Experiments

no code implementations5 Nov 2020 Emily Diana, Wesley Gill, Michael Kearns, Krishnaram Kenthapadi, Aaron Roth

We consider a recently introduced framework in which fairness is measured by worst-case outcomes across groups, rather than by the more standard differences between group outcomes.
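The distinction between the two objectives can be made concrete with a small sketch (group labels and predictions below are invented for illustration): the minimax objective looks at the worst group's error, while the standard parity view looks at the spread between groups.

```python
import numpy as np

def group_errors(y_true, y_pred, groups):
    """Per-group 0/1 error rates."""
    return {g: float(np.mean(y_true[groups == g] != y_pred[groups == g]))
            for g in np.unique(groups)}

y_true = np.array([1, 0, 1, 1, 0, 1])
y_pred = np.array([1, 0, 0, 1, 1, 0])
groups = np.array(["a", "a", "a", "b", "b", "b"])

errs = group_errors(y_true, y_pred, groups)
minimax_objective = max(errs.values())                # worst-case group error
parity_gap = max(errs.values()) - min(errs.values())  # standard difference
```

Here group "a" has error 1/3 and group "b" has error 2/3: the minimax framework would minimize the 2/3, whereas an equalized-outcomes approach could also close the 1/3 gap by making group "a" worse.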


LiFT: A Scalable Framework for Measuring Fairness in ML Applications

no code implementations14 Aug 2020 Sriram Vasudevan, Krishnaram Kenthapadi

Many internet applications are powered by machine learned models, which are usually trained on labeled datasets obtained through either implicit / explicit user feedback signals or human judgments.


Fairness-Aware Online Personalization

1 code implementation30 Jul 2020 G. Roshan Lal, Sahin Cem Geyik, Krishnaram Kenthapadi

For this purpose, we construct a stylized model for generating training data with potentially biased features as well as potentially biased labels and quantify the extent of bias that is learned by the model when the user responds in a biased manner as in many real-world scenarios.

Decision Making, Fairness +1
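One hypothetical instance of such a stylized generator (the paper's exact model is not shown in this snippet, so the flipping mechanism and rates below are assumptions) flips positive labels for a disadvantaged group with some probability, mimicking biased user responses:

```python
import numpy as np

def biased_labels(n, bias=0.3, seed=0):
    """Generate a fair latent label, then flip it to negative with
    probability `bias` for members of the disadvantaged group,
    producing the biased labels a model would actually train on."""
    rng = np.random.default_rng(seed)
    group = rng.integers(0, 2, size=n)       # 0 = advantaged, 1 = disadvantaged
    fair_label = rng.integers(0, 2, size=n)  # unbiased ground truth
    flip = (group == 1) & (fair_label == 1) & (rng.random(n) < bias)
    observed = np.where(flip, 0, fair_label)
    return group, fair_label, observed

group, fair, observed = biased_labels(10000)
# Positive rate drops only for the disadvantaged group:
print(observed[group == 0].mean(), observed[group == 1].mean())
```

A model fit to `observed` inherits the depressed positive rate for group 1 even though the fair labels are balanced, which is the kind of learned bias the paper quantifies.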

Fair Bayesian Optimization

1 code implementation9 Jun 2020 Valerio Perrone, Michele Donini, Muhammad Bilal Zafar, Robin Schmucker, Krishnaram Kenthapadi, Cédric Archambeau

Moreover, our method can be used in synergy with such specialized fairness techniques to tune their hyperparameters.


Fairness-Aware Ranking in Search & Recommendation Systems with Application to LinkedIn Talent Search

no code implementations30 Apr 2019 Sahin Cem Geyik, Stuart Ambler, Krishnaram Kenthapadi

We finally present the online A/B testing results from applying our framework towards representative ranking in LinkedIn Talent Search, and discuss the lessons learned in practice.

Fairness, Recommendation Systems +1
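Representative ranking of this kind is often implemented as a greedy re-ranker; the sketch below is a simplification in the spirit of deterministic fairness-aware re-ranking, not the paper's exact algorithms, and the groups, scores, and targets are invented:

```python
def rerank(candidates, targets, k):
    """At every rank position, serve the group whose count trails its
    target share of the prefix the most; break ties toward the
    higher-scored candidate. `candidates` maps group -> scores,
    highest first; `targets` maps group -> desired proportion."""
    counts = {g: 0 for g in candidates}
    queues = {g: list(scores) for g, scores in candidates.items()}
    ranking = []
    for pos in range(1, k + 1):
        best = max((g for g in queues if queues[g]),
                   key=lambda g: (targets[g] * pos - counts[g], queues[g][0]))
        ranking.append((best, queues[best].pop(0)))
        counts[best] += 1
    return ranking

candidates = {"f": [0.9, 0.7, 0.5], "m": [0.95, 0.8, 0.6]}
out = rerank(candidates, targets={"f": 0.5, "m": 0.5}, k=4)
# With 50/50 targets the ranking alternates between the two groups.
```

With equal targets this yields an alternating ranking even when one group's raw scores dominate, which is the representativeness property the A/B tests in the paper evaluate at production scale.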

What's in a Name? Reducing Bias in Bios without Access to Protected Attributes

no code implementations NAACL 2019 Alexey Romanov, Maria De-Arteaga, Hanna Wallach, Jennifer Chayes, Christian Borgs, Alexandra Chouldechova, Sahin Geyik, Krishnaram Kenthapadi, Anna Rumshisky, Adam Tauman Kalai

In the context of mitigating bias in occupation classification, we propose a method for discouraging correlation between the predicted probability of an individual's true occupation and a word embedding of their name.

Word Embeddings
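A penalty in the spirit of that covariance constraint can be sketched as follows; this is an illustrative reconstruction, not the paper's exact objective, and the data is synthetic:

```python
import numpy as np

def decorrelation_penalty(pred, name_embeddings):
    """Squared norm of the covariance between the predicted probability
    of the true occupation and each coordinate of the name embedding.
    Driving this toward zero discourages the classifier from encoding
    name information in its predictions."""
    p = pred - pred.mean()
    e = name_embeddings - name_embeddings.mean(axis=0)
    cov = p @ e / len(p)  # covariance with each embedding dimension
    return float(np.sum(cov ** 2))

rng = np.random.default_rng(0)
emb = rng.normal(size=(100, 8))          # stand-in name embeddings
correlated = emb[:, 0] * 0.5 + 0.5       # prediction leaks embedding dim 0
independent = rng.random(100)            # prediction ignores the name
print(decorrelation_penalty(correlated, emb),
      decorrelation_penalty(independent, emb))
```

Added to the classification loss with a tunable weight, the penalty is large when predictions track any direction of the name embedding and near zero when they are independent of it.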

Bias in Bios: A Case Study of Semantic Representation Bias in a High-Stakes Setting

2 code implementations27 Jan 2019 Maria De-Arteaga, Alexey Romanov, Hanna Wallach, Jennifer Chayes, Christian Borgs, Alexandra Chouldechova, Sahin Geyik, Krishnaram Kenthapadi, Adam Tauman Kalai

We present a large-scale study of gender bias in occupation classification, a task where the use of machine learning may lead to negative outcomes on people's lives.

Classification, General Classification

PriPeARL: A Framework for Privacy-Preserving Analytics and Reporting at LinkedIn

no code implementations20 Sep 2018 Krishnaram Kenthapadi, Thanh T. L. Tran

Preserving privacy of users is a key requirement of web-scale analytics and reporting applications, and has witnessed a renewed focus in light of recent data breaches and new regulations such as GDPR.
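One idea associated with this line of work is making the noisy answer to a given query deterministic, so that repeating the query cannot average the noise away; the sketch below treats the seeding scheme and query-key format as assumptions rather than the system's actual design:

```python
import hashlib
import numpy as np

def consistent_private_count(true_count, query_key, epsilon):
    """Add Laplace noise drawn from a generator seeded deterministically
    by the query, so the same query always returns the same noisy
    answer, then round and clip since reported counts are
    non-negative integers."""
    digest = hashlib.sha256(query_key.encode()).digest()
    rng = np.random.default_rng(int.from_bytes(digest[:8], "big"))
    noisy = true_count + rng.laplace(scale=1.0 / epsilon)
    return max(0, round(noisy))

a = consistent_private_count(1234, "ad=42|metric=clicks|day=2018-09-20", 1.0)
b = consistent_private_count(1234, "ad=42|metric=clicks|day=2018-09-20", 1.0)
# Same query key -> identical noisy answer on every execution.
```

Consistency across repeated queries matters for a reporting product, where analysts expect a dashboard to show the same number on reload.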

Talent Search and Recommendation Systems at LinkedIn: Practical Challenges and Lessons Learned

no code implementations18 Sep 2018 Sahin Cem Geyik, Qi Guo, Bo Hu, Cagri Ozcaglar, Ketan Thakkar, Xianren Wu, Krishnaram Kenthapadi

LinkedIn Talent Solutions business contributes to around 65% of LinkedIn's annual revenue, and provides tools for job providers to reach out to potential candidates and for job seekers to find suitable career opportunities.

Information Retrieval, Recommendation Systems

Bringing Salary Transparency to the World: Computing Robust Compensation Insights via LinkedIn Salary

no code implementations29 Mar 2017 Krishnaram Kenthapadi, Stuart Ambler, Liang Zhang, Deepak Agarwal

The recently launched LinkedIn Salary product has been designed with the goal of providing compensation insights to the world's professionals and thereby helping them optimize their earning potential.
