no code implementations • 19 Feb 2024 • Inna Wanyin Lin, Ashish Sharma, Christopher Michael Rytting, Adam S. Miner, Jina Suh, Tim Althoff
With IMBUE's additional just-in-time feedback, participants demonstrate a 17% improvement in skill mastery, along with greater gains in self-efficacy (27% more) and a greater reduction in negative emotions (16% more) compared to the simulation-only condition.
1 code implementation • 7 Feb 2024 • Taylor Sorensen, Jared Moore, Jillian Fisher, Mitchell Gordon, Niloofar Mireshghallah, Christopher Michael Rytting, Andre Ye, Liwei Jiang, Ximing Lu, Nouha Dziri, Tim Althoff, Yejin Choi
We identify and formalize three possible ways to define and operationalize pluralism in AI systems: 1) Overton pluralistic models that present a spectrum of reasonable responses; 2) Steerably pluralistic models that can steer to reflect certain perspectives; and 3) Distributionally pluralistic models that are well-calibrated to a given population in distribution.
no code implementations • 3 Jun 2023 • Christopher Michael Rytting, Taylor Sorensen, Lisa Argyle, Ethan Busby, Nancy Fulda, Joshua Gubler, David Wingate
This provides exciting evidence that language models can represent a critical advance in the coding of open-ended texts across a variety of applications.
1 code implementation • 22 Oct 2022 • Joshua Robinson, Christopher Michael Rytting, David Wingate
A more natural prompting approach is to present the question and answer options to the LLM jointly and have it output the symbol (e.g., "A") associated with its chosen answer option.
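A minimal sketch of this multiple-choice prompting format: the question and all lettered answer options are concatenated into a single prompt, so the model only needs to emit the letter of its chosen option. The helper name and the exact template are illustrative assumptions, not the paper's verbatim format.

```python
def build_mcp_prompt(question, options):
    """Format a question and its answer options as one multiple-choice
    prompt. Hypothetical helper; the paper's exact template may differ.
    """
    # Label options "A.", "B.", "C.", ... in order.
    letters = [chr(ord("A") + i) for i in range(len(options))]
    lines = [f"Question: {question}"]
    lines += [f"{letter}. {opt}" for letter, opt in zip(letters, options)]
    # The model is expected to continue with a single letter, e.g. "B".
    lines.append("Answer:")
    return "\n".join(lines)

prompt = build_mcp_prompt(
    "What is the capital of France?",
    ["Berlin", "Paris", "Madrid"],
)
print(prompt)
```

The resulting prompt can then be sent to any LLM completion endpoint, and the model's single-token continuation ("A", "B", or "C") read off as its answer.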
no code implementations • ACL 2022 • Taylor Sorensen, Joshua Robinson, Christopher Michael Rytting, Alexander Glenn Shaw, Kyle Jeffrey Rogers, Alexia Pauline Delorey, Mahmoud Khalil, Nancy Fulda, David Wingate
Pre-trained language models derive substantial linguistic and factual knowledge from the massive corpora on which they are trained, and prompt engineering seeks to align these models to specific tasks.
no code implementations • NeurIPS 2021 • Christopher Michael Rytting, David Wingate
Large natural language models (such as GPT-3 or T5) demonstrate impressive abilities across a range of general NLP tasks.