Search Results for author: Andrew Estornell

Found 4 papers, 0 papers with code

Measuring and Reducing LLM Hallucination without Gold-Standard Answers via Expertise-Weighting

no code implementations • 16 Feb 2024 • Jiaheng Wei, Yuanshun Yao, Jean-Francois Ton, Hongyi Guo, Andrew Estornell, Yang Liu

In this work, we propose Factualness Evaluations via Weighting LLMs (FEWL), the first hallucination metric that is specifically designed for the scenario when gold-standard answers are absent.

Hallucination • In-Context Learning
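
The snippet above only names the idea, so here is a minimal sketch of what "weighting LLMs by expertise" could look like, assuming expertise is estimated from accuracy on a few probe questions with known answers and answers are compared by a similarity score in [0, 1]. It is an illustration, not the paper's actual formulation.

```python
# A minimal sketch in the spirit of FEWL, not the paper's formulation: reference LLMs
# are weighted by a crude "expertise" estimate (accuracy on a few probe questions with
# known answers), and an answer's score is its expertise-weighted agreement with those
# references. All numbers below are illustrative assumptions.
import numpy as np

def expertise_weights(probe_accuracies):
    """Normalize per-reference-LLM accuracies on probe questions into weights."""
    acc = np.asarray(probe_accuracies, dtype=float)
    return acc / acc.sum()

def weighted_agreement_score(agreement, weights):
    """agreement[i, j]: similarity in [0, 1] between the evaluated answer to question i
    and reference LLM j's answer; returns one factualness-style score per question."""
    return np.asarray(agreement, dtype=float) @ weights

weights = expertise_weights([0.9, 0.6, 0.3])        # three hypothetical reference LLMs
agreement = [[0.8, 0.7, 0.2],                       # question 1: agrees with the stronger models
             [0.1, 0.3, 0.9]]                       # question 2: agrees only with the weakest
print(weighted_agreement_score(agreement, weights)) # higher = less likely hallucinated
```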

Unfairness Despite Awareness: Group-Fair Classification with Strategic Agents

no code implementations • 6 Dec 2021 • Andrew Estornell, Sanmay Das, Yang Liu, Yevgeniy Vorobeychik

These conditions are related to the way in which the fair classifier remedies unfairness on the original unmanipulated data: fair classifiers which remedy unfairness by becoming more selective than their conventional counterparts are the ones that become less fair than their counterparts when agents are strategic.

Classification • Decision Making • +1
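
To make "agents are strategic" concrete, the toy simulation below uses per-group thresholds tuned to roughly equalize acceptance rates on honest data, and then lets agents within a small manipulation budget clear the bar. The group distributions, budget, thresholds, and response model are all assumptions rather than the paper's model, but they show how fairness achieved on unmanipulated data can erode once agents respond strategically.

```python
# A toy simulation, not the paper's model: per-group thresholds are tuned to equalize
# acceptance rates on honest data, and strategic agents within a small manipulation
# budget inflate their score just enough to clear the bar. Group distributions,
# budget, and thresholds are all assumptions made for illustration.
import numpy as np

rng = np.random.default_rng(0)
g0 = rng.normal(0.6, 0.15, 5000)   # hypothetical advantaged group
g1 = rng.normal(0.4, 0.05, 5000)   # hypothetical disadvantaged group

def accept_rate(scores, threshold, strategic, budget=0.1):
    """Acceptance rate under a score threshold; if strategic, every agent within
    `budget` of the threshold manipulates upward and is accepted."""
    bar = threshold - budget if strategic else threshold
    return float((scores >= bar).mean())

t0, t1 = 0.68, 0.43   # chosen so honest acceptance rates are roughly equal
for strategic in (False, True):
    gap = abs(accept_rate(g0, t0, strategic) - accept_rate(g1, t1, strategic))
    print(f"strategic={strategic}: acceptance-rate gap = {gap:.3f}")
```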

Incentivizing Truthfulness Through Audits in Strategic Classification

no code implementations • 16 Dec 2020 • Andrew Estornell, Sanmay Das, Yevgeniy Vorobeychik

While this policy can, in general, be hard to compute because of the difficulty of identifying the set of agents who could benefit from lying given a complete set of reported types, we also present necessary and sufficient conditions under which it is tractable.

Multiagent Systems • Computer Science and Game Theory
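
As a rough illustration of why the set of agents who could benefit from lying is expensive to identify, here is a brute-force sketch over a tiny discrete type space. The allocation rule and type space are hypothetical, the search simply enumerates every misreport for every agent, and none of it is the paper's audit policy.

```python
# A brute-force sketch (feasible only for tiny, discrete type spaces): find the agents
# who could benefit from misreporting their type under a toy allocation rule. The rule
# and the type space are hypothetical; this is not the paper's audit policy.
TYPES = [0, 1, 2]   # hypothetical discrete type space

def classifier(reported_types):
    """Toy allocation rule: accept the two agents with the highest reported types."""
    ranked = sorted(range(len(reported_types)),
                    key=lambda i: reported_types[i], reverse=True)
    return set(ranked[:2])

def could_benefit_from_lying(true_types):
    """Agents for whom some misreport improves their outcome, holding all other
    agents' reports fixed at their true types."""
    honest_accepted = classifier(true_types)
    liars = set()
    for i, t in enumerate(true_types):
        for fake in TYPES:
            if fake == t:
                continue
            report = list(true_types)
            report[i] = fake
            if i in classifier(report) and i not in honest_accepted:
                liars.add(i)
                break
    return liars

print(could_benefit_from_lying([2, 1, 0, 0]))   # agents 2 and 3 can gain by inflating their type
```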

Deception through Half-Truths

no code implementations • 14 Nov 2019 • Andrew Estornell, Sanmay Das, Yevgeniy Vorobeychik

Deception is a fundamental issue across a diverse array of settings, from cybersecurity, where decoys (e.g., honeypots) are an important tool, to politics, which can feature politically motivated "leaks" and fake news about candidates. Typical considerations of deception view it as providing false information. However, just as important but less frequently studied is a more tacit form in which information is strategically hidden or leaked. We consider the problem of how much an adversary can affect a principal's decision by "half-truths", that is, by masking or hiding bits of information, when the principal is oblivious to the presence of the adversary.
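
A small sketch of the masking idea, under assumed rules that are not taken from the paper: the principal decides by majority over whatever bits it sees (obliviously, as described above), and the adversary exhaustively searches for at most k bits to hide that push the decision toward its target.

```python
# A toy sketch of "half-truths": an adversary hides up to k bits of the true state to
# push an oblivious principal's decision; the principal's majority rule and the
# exhaustive masking search are illustrative assumptions, not the paper's formulation.
from itertools import combinations

def principal_decision(revealed_bits):
    """Oblivious principal: decide 1 iff a strict majority of the bits it sees are 1
    (ties and the empty case default to 0)."""
    return int(sum(revealed_bits) * 2 > len(revealed_bits))

def best_masking(bits, k, target=0):
    """Exhaustively search subsets of at most k bits to hide so that the principal's
    decision matches the adversary's target; returns the hidden indices or None."""
    n = len(bits)
    for size in range(k + 1):
        for hidden in combinations(range(n), size):
            revealed = [b for i, b in enumerate(bits) if i not in hidden]
            if principal_decision(revealed) == target:
                return hidden
    return None

bits = [1, 1, 1, 0, 0]                    # true state: majority is 1
print(principal_decision(bits))           # honest decision: 1
print(best_masking(bits, k=2, target=0))  # hiding one 1-bit creates a tie, which resolves to 0
```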
