Search Results for author: Gal Yona

Found 13 papers, 0 papers with code

Narrowing the Knowledge Evaluation Gap: Open-Domain Question Answering with Multi-Granularity Answers

no code implementations • 9 Jan 2024 • Gal Yona, Roee Aharoni, Mor Geva

In this work, we propose GRANOLA QA, a novel evaluation setting where a predicted answer is evaluated in terms of accuracy and informativeness against a set of multi-granularity answers.

Informativeness • Open-Domain Question Answering
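The evaluation idea — scoring a prediction against answers at several levels of granularity, crediting finer answers more — can be illustrated with a toy scorer. The function name, exact-match comparison, and linear decay weighting below are illustrative assumptions, not the paper's actual GRANOLA QA metric:

```python
def granola_score(prediction, multi_granularity_answers):
    """Toy scorer against a list of answers ordered fine -> coarse.

    Returns (accurate, informativeness): a match at any level counts
    as accurate, and a match at a finer level earns a higher
    informativeness credit. Illustrative only, not the paper's metric.
    """
    n = len(multi_granularity_answers)
    for level, answer in enumerate(multi_granularity_answers):
        if prediction.strip().lower() == answer.strip().lower():
            # Credit decays linearly from 1.0 (finest) to 1/n (coarsest).
            return True, (n - level) / n
    return False, 0.0

# Example: "When was Barack Obama born?" with answers from fine to coarse.
answers = ["August 4, 1961", "1961", "the 1960s"]
ok, credit = granola_score("1961", answers)   # accurate, credit 2/3
miss = granola_score("the 1970s", answers)    # inaccurate, credit 0
```

The point of the decaying credit is that a coarse answer ("the 1960s") is still accurate but less informative than an exact date, which a single gold answer cannot express.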

Surfacing Biases in Large Language Models using Contrastive Input Decoding

no code implementations • 12 May 2023 • Gal Yona, Or Honovich, Itay Laish, Roee Aharoni

We use CID to highlight context-specific biases that are hard to detect with standard decoding strategies and quantify the effect of different input perturbations.

Text Generation
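Contrastive decoding over two inputs that differ only in a perturbation can be sketched as follows. The scoring rule (log-probability under the original input minus a weighted log-probability under the perturbed input) follows the general contrastive-decoding idea; the weight `lam`, the function name, and the toy distributions are my assumptions, not the paper's implementation:

```python
import numpy as np

def cid_next_token(logp_x, logp_x_prime, lam=1.0):
    """Pick the next token by contrasting two input-conditioned
    next-token distributions: favor tokens likely under input x
    but unlikely under the perturbed input x'.
        score(t) = log p(t | x) - lam * log p(t | x')
    Illustrative sketch, not the paper's exact decoder.
    """
    scores = logp_x - lam * logp_x_prime
    return int(np.argmax(scores))

# Toy vocabulary of 4 tokens; log-probs conditioned on x and on x'.
logp_x       = np.log(np.array([0.40, 0.30, 0.20, 0.10]))
logp_x_prime = np.log(np.array([0.40, 0.10, 0.30, 0.20]))

# Token 0 is likely under BOTH inputs, so the contrast suppresses it;
# token 1 is where the two inputs disagree most, surfacing the effect
# of the perturbation.
chosen = cid_next_token(logp_x, logp_x_prime)
```

With `lam=0` this reduces to ordinary greedy decoding on the original input, which is why standard decoding strategies can miss such context-specific differences.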

Malign Overfitting: Interpolation Can Provably Preclude Invariance

no code implementations • 28 Nov 2022 • Yoav Wald, Gal Yona, Uri Shalit, Yair Carmon

This suggests that the phenomenon of "benign overfitting," in which models generalize well despite interpolating, might not favorably extend to settings in which robustness or fairness are desirable.

Fairness • Out-of-Distribution Generalization

Useful Confidence Measures: Beyond the Max Score

no code implementations • 25 Oct 2022 • Gal Yona, Amir Feder, Itay Laish

An important component in deploying machine learning (ML) in safety-critical applications is having a reliable measure of confidence in the ML model's predictions.

Active Learning with Label Comparisons

no code implementations • 10 Apr 2022 • Gal Yona, Shay Moran, Gal Elidan, Amir Globerson

We show that there is a natural class where this approach is sub-optimal, and that there is a more comparison-efficient active learning scheme.

Active Learning

Decision-Making under Miscalibration

no code implementations • 18 Mar 2022 • Guy N. Rothblum, Gal Yona

We formalize a natural (distribution-free) solution concept: given anticipated miscalibration of $\alpha$, we propose using the threshold $j$ that minimizes the worst-case regret over all $\alpha$-miscalibrated predictors, where the regret is the difference in clinical utility between using the threshold in question and using the optimal threshold in hindsight.

Binary Classification • Decision Making • +1
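The minimax idea in the snippet — pick the decision threshold whose worst-case regret over all $\alpha$-miscalibrated predictors is smallest — can be sketched numerically. The toy utility model below (treat iff true risk exceeds a cutoff `p_star`, regret measured in units of the treatment benefit, true risk within `alpha` of the prediction) is my simplification, not the paper's clinical-utility formulation:

```python
import numpy as np

def worst_case_regret(t, alpha, p_star):
    """Worst-case per-decision regret of thresholding PREDICTED risk at t,
    when the optimal treat/no-treat cutoff on the TRUE risk is p_star and
    predictions may be off by up to alpha. Toy model of
    alpha-miscalibration; the paper's utility model is richer."""
    worst = 0.0
    for q in np.linspace(0.0, 1.0, 1001):      # candidate predicted risks
        # True risk can lie anywhere within alpha of the prediction;
        # the extremes realize the worst case.
        for p in (max(0.0, q - alpha), min(1.0, q + alpha)):
            if q >= t and p < p_star:          # treated, shouldn't have
                worst = max(worst, p_star - p)
            if q < t and p > p_star:           # untreated, should have
                worst = max(worst, p - p_star)
    return worst

alpha, p_star = 0.1, 0.5
ts = np.linspace(0.0, 1.0, 101)
best_t = ts[np.argmin([worst_case_regret(t, alpha, p_star) for t in ts])]
# With this symmetric toy utility the minimax threshold sits near p_star,
# and its worst-case regret is roughly alpha.
```

In this symmetric toy the answer is unsurprising; the interesting cases in the paper are asymmetric utilities, where anticipating miscalibration shifts the minimax threshold away from the naive one.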

Revisiting Sanity Checks for Saliency Maps

no code implementations • 27 Oct 2021 • Gal Yona, Daniel Greenfeld

The authors of the original sanity checks argue that some popular saliency methods should not be used for explainability purposes, since the maps they produce are not sensitive to the underlying model that is to be explained.

Consider the Alternatives: Navigating Fairness-Accuracy Tradeoffs via Disqualification

no code implementations • 2 Oct 2021 • Guy N. Rothblum, Gal Yona

The notion of "too much" is quantified via a parameter $\gamma$ that serves as a vehicle for specifying acceptable tradeoffs between accuracy and fairness, in a way that is independent from the specific metrics used to quantify fairness and accuracy in a given task.

Fairness

Multi-group Agnostic PAC Learnability

no code implementations • 20 May 2021 • Guy N. Rothblum, Gal Yona

An agnostic PAC learning algorithm finds a predictor that is competitive with the best predictor in a benchmark hypothesis class, where competitiveness is measured with respect to a given loss function.

Fairness • PAC learning
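The competitiveness notion in the snippet can be stated compactly (notation mine, summarizing the standard agnostic PAC definition rather than quoting the paper): with probability at least $1-\delta$ over the sample, the learned predictor $\hat{h}$ satisfies

```latex
\mathcal{L}_{\mathcal{D}}(\hat{h}) \;\le\; \min_{h \in \mathcal{H}} \mathcal{L}_{\mathcal{D}}(h) + \epsilon,
\qquad\text{and, multi-group:}\qquad
\forall g \in \mathcal{G}:\;
\mathcal{L}_{\mathcal{D}}(\hat{h} \mid g) \;\le\; \min_{h \in \mathcal{H}} \mathcal{L}_{\mathcal{D}}(h \mid g) + \epsilon .
```

The multi-group variant demands competitiveness simultaneously on every group $g$ in a collection $\mathcal{G}$, with the loss conditioned on group membership, which is what connects agnostic learning to the fairness setting.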

Outcome Indistinguishability

no code implementations • 26 Nov 2020 • Cynthia Dwork, Michael P. Kim, Omer Reingold, Guy N. Rothblum, Gal Yona

Prediction algorithms assign numbers to individuals that are popularly understood as individual "probabilities" -- what is the probability of 5-year survival after cancer diagnosis?

Who's responsible? Jointly quantifying the contribution of the learning algorithm and training data

no code implementations • 9 Oct 2019 • Gal Yona, Amirata Ghorbani, James Zou

We propose Extended Shapley as a principled framework for this problem, and experiment empirically with how it can be used to address questions of ML accountability.

Preference-Informed Fairness

no code implementations • 3 Apr 2019 • Michael P. Kim, Aleksandra Korolova, Guy N. Rothblum, Gal Yona

We introduce and study a new notion of preference-informed individual fairness (PIIF) that is a relaxation of both individual fairness and envy-freeness.

Decision Making • Fairness

Probably Approximately Metric-Fair Learning

no code implementations • ICML 2018 • Guy N. Rothblum, Gal Yona

We show that approximate metric-fairness does generalize, and leverage these generalization guarantees to construct polynomial-time PACF learning algorithms for the classes of linear and logistic predictors.

Fairness
