Search Results for author: Elena L. Glassman

Found 7 papers, 1 paper with code

Antagonistic AI

no code implementations · 12 Feb 2024 · Alice Cai, Ian Arawjo, Elena L. Glassman

The vast majority of discourse around AI development assumes that subservient, "moral" models aligned with "human values" are universally beneficial -- in short, that good AI is sycophantic AI.

Supporting Sensemaking of Large Language Model Outputs at Scale

no code implementations · 24 Jan 2024 · Katy Ilonka Gero, Chelse Swoopes, Ziwei Gu, Jonathan K. Kummerfeld, Elena L. Glassman

Large language models (LLMs) are capable of generating multiple responses to a single prompt, yet little effort has been expended to help end-users or system designers make use of this capability.

Tasks: Language Modelling, Large Language Model

Metric Elicitation; Moving from Theory to Practice

no code implementations · 7 Dec 2022 · Safinah Ali, Sohini Upadhyay, Gaurush Hiranandani, Elena L. Glassman, Oluwasanmi Koyejo

Specifically, we create a web-based ME interface and conduct a user study that elicits users' preferred metrics in a binary classification setting.

Tasks: Binary Classification, Classification

Evaluating the Interpretability of Generative Models by Interactive Reconstruction

1 code implementation · 2 Feb 2021 · Andrew Slavin Ross, Nina Chen, Elisa Zhao Hang, Elena L. Glassman, Finale Doshi-Velez

On synthetic datasets, we find performance on this task much more reliably differentiates entangled and disentangled models than baseline approaches.

Tasks: Disentanglement

Proxy Tasks and Subjective Measures Can Be Misleading in Evaluating Explainable AI Systems

no code implementations · 22 Jan 2020 · Zana Buçinca, Phoebe Lin, Krzysztof Z. Gajos, Elena L. Glassman

The results of our experiments demonstrate that evaluations with proxy tasks did not predict the results of the evaluations with the actual decision-making tasks.

Tasks: Decision Making, Explainable Artificial Intelligence (XAI)
