no code implementations • 16 Feb 2024 • Jiaheng Wei, Yuanshun Yao, Jean-Francois Ton, Hongyi Guo, Andrew Estornell, Yang Liu
In this work, we propose Factualness Evaluations via Weighting LLMs (FEWL), the first hallucination metric that is specifically designed for the scenario when gold-standard answers are absent.
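The abstract does not spell out how FEWL computes its weights, but the general idea of scoring an answer by weighted agreement with several reference LLMs, without any gold answer, can be illustrated with a toy sketch. Everything below is an assumption for illustration: the lexical-overlap agreement measure, and using peer agreement as a proxy for each reference model's expertise, are stand-ins, not the FEWL algorithm.

```python
# Toy sketch of reference-free factualness scoring: weight several
# reference-LLM answers by an expertise proxy, then score a candidate
# answer by its weighted agreement with them. All choices here
# (overlap agreement, peer-agreement weights) are illustrative
# assumptions, not the method proposed in the paper.

def agreement(a, b):
    """Crude lexical-overlap agreement between two answers, in [0, 1]."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def expertise_weights(reference_answers):
    """Weight each reference LLM by its mean agreement with the others
    (a hypothetical expertise proxy), normalized to sum to 1."""
    n = len(reference_answers)
    raw = [sum(agreement(reference_answers[i], reference_answers[j])
               for j in range(n) if j != i) / max(n - 1, 1)
           for i in range(n)]
    total = sum(raw) or 1.0
    return [r / total for r in raw]

def factualness_score(candidate, reference_answers):
    """Weighted agreement of the candidate with the reference answers."""
    weights = expertise_weights(reference_answers)
    return sum(w * agreement(candidate, ref)
               for w, ref in zip(weights, reference_answers))

refs = ["the capital of france is paris",
        "paris is the capital of france",
        "the capital of france is lyon"]
score = factualness_score("paris is the capital city of france", refs)
```

Under this weighting, the dissenting third reference contributes less to the score than the two mutually agreeing ones.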
no code implementations • 6 Dec 2021 • Andrew Estornell, Sanmay Das, Yang Liu, Yevgeniy Vorobeychik
These conditions are related to the way in which the fair classifier remedies unfairness on the original, unmanipulated data: fair classifiers that remedy unfairness by becoming more selective than their conventional counterparts are the ones that become less fair than those counterparts when agents are strategic.
no code implementations • 16 Dec 2020 • Andrew Estornell, Sanmay Das, Yevgeniy Vorobeychik
While this policy can, in general, be hard to compute because of the difficulty of identifying the set of agents who could benefit from lying given a complete set of reported types, we also present necessary and sufficient conditions under which it is tractable.
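A simplified version of the underlying check can be sketched in code. Note the simplification: the sketch below assumes the agents' true types are known and tests only unilateral deviations, whereas the paper's setting, where the principal observes only reported types, is what makes the computation hard in general. The median-threshold mechanism and the admission utility are illustrative assumptions.

```python
# Toy sketch: find the agents who could strictly benefit from
# misreporting their type, holding everyone else's report fixed.
# The mechanism (admit reports at or above the median) and the
# utility (an admitted agent gains her true type) are assumptions
# made for illustration only.

from statistics import median

TYPES = [1, 2, 3, 4, 5]  # assumed finite set of reportable types

def mechanism(reports):
    """Admit every agent whose report is at least the median report."""
    cut = median(reports)
    return [r >= cut for r in reports]

def utility(true_type, admitted):
    """Admitted agents gain their true type; others gain nothing."""
    return true_type if admitted else 0

def beneficial_liars(true_types):
    """Indices of agents who can strictly gain by a unilateral
    misreport, given truthful reports from everyone else."""
    truthful = mechanism(true_types)
    liars = set()
    for i, t in enumerate(true_types):
        base = utility(t, truthful[i])
        for fake in TYPES:
            if fake == t:
                continue
            reports = list(true_types)
            reports[i] = fake
            if utility(t, mechanism(reports)[i]) > base:
                liars.add(i)
                break
    return liars
```

For example, with true types [1, 2, 5] the low-type agent is rejected when truthful but can gain admission by over-reporting, so only that agent benefits from lying.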
Multiagent Systems • Computer Science and Game Theory
no code implementations • 14 Nov 2019 • Andrew Estornell, Sanmay Das, Yevgeniy Vorobeychik
Deception is a fundamental issue across a diverse array of settings, from cybersecurity, where decoys (e.g., honeypots) are an important tool, to politics, which can feature politically motivated "leaks" and fake news about candidates. Typical considerations of deception view it as providing false information. However, just as important but less frequently studied is a more tacit form in which information is strategically hidden or leaked. We consider the problem of how much an adversary can affect a principal's decision by "half-truths", that is, by masking or hiding bits of information, when the principal is oblivious to the presence of the adversary.
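The masking idea in this abstract can be made concrete with a toy sketch. The setup below is an illustrative assumption, not the paper's model: the state is a bit string whose value is the number it encodes, the oblivious principal averages uniformly over hidden bits, and the adversary brute-forces over masks of bounded size.

```python
# Toy sketch of a "half-truth" attack: the adversary hides up to
# `budget` bits of the true state, and the oblivious principal,
# unaware of the adversary, treats each hidden bit as a fair coin
# flip. Value function, budget, and brute-force search are all
# illustrative assumptions.

from itertools import combinations, product

def principal_estimate(state, mask):
    """The oblivious principal's expected value of the state,
    averaging uniformly over the masked (hidden) bit positions."""
    hidden = sorted(mask)
    total, count = 0.0, 0
    for fill in product([0, 1], repeat=len(hidden)):
        s = list(state)
        for i, b in zip(hidden, fill):
            s[i] = b
        total += int("".join(map(str, s)), 2)
        count += 1
    return total / count

def best_half_truth(state, budget):
    """Mask of at most `budget` bits that most inflates the
    principal's estimate above the true value."""
    true_value = int("".join(map(str, state)), 2)
    best_mask, best_gain = frozenset(), 0.0
    for k in range(1, budget + 1):
        for mask in combinations(range(len(state)), k):
            gain = principal_estimate(state, set(mask)) - true_value
            if gain > best_gain:
                best_mask, best_gain = frozenset(mask), gain
    return best_mask, best_gain

# Hiding a high-order 0 bit is what inflates the estimate:
mask, gain = best_half_truth([0, 1, 1], budget=1)
```

The point of the toy example is that the adversary never reports a false bit; selectively hiding true ones is enough to bias the estimate.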