Search Results for author: Christopher Michael Rytting

Found 6 papers, 2 papers with code

IMBUE: Improving Interpersonal Effectiveness through Simulation and Just-in-time Feedback with Human-Language Model Interaction

no code implementations19 Feb 2024 Inna Wanyin Lin, Ashish Sharma, Christopher Michael Rytting, Adam S. Miner, Jina Suh, Tim Althoff

With IMBUE's additional just-in-time feedback, participants demonstrate a 17% improvement in skill mastery, along with greater gains in self-efficacy (27% more) and reduction of negative emotions (16% more) compared to simulation-only training.

Language Modelling Skill Mastery

A Roadmap to Pluralistic Alignment

1 code implementation7 Feb 2024 Taylor Sorensen, Jared Moore, Jillian Fisher, Mitchell Gordon, Niloofar Mireshghallah, Christopher Michael Rytting, Andre Ye, Liwei Jiang, Ximing Lu, Nouha Dziri, Tim Althoff, Yejin Choi

We identify and formalize three possible ways to define and operationalize pluralism in AI systems: 1) Overton pluralistic models that present a spectrum of reasonable responses; 2) Steerably pluralistic models that can steer to reflect certain perspectives; and 3) Distributionally pluralistic models that are well-calibrated to a given population in distribution.

Towards Coding Social Science Datasets with Language Models

no code implementations3 Jun 2023 Christopher Michael Rytting, Taylor Sorensen, Lisa Argyle, Ethan Busby, Nancy Fulda, Joshua Gubler, David Wingate

This provides exciting evidence that language models can enable a critical advance in the coding of open-ended texts across a variety of applications.

Leveraging Large Language Models for Multiple Choice Question Answering

1 code implementation22 Oct 2022 Joshua Robinson, Christopher Michael Rytting, David Wingate

A more natural prompting approach is to present the question and answer options to the LLM jointly and have it output the symbol (e.g., "A") associated with its chosen answer option.

Answer Selection Multiple-choice +1
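The joint-presentation format described in the abstract can be sketched as below. This is a minimal illustration, not the paper's implementation: `build_mcq_prompt` and `pick_answer` are hypothetical helpers, and the log-probabilities are invented stand-ins for scores a real LLM would assign to each answer symbol.

```python
def build_mcq_prompt(question, options):
    """Format a question together with lettered answer options,
    so the model can answer with a single symbol like "A"."""
    letters = [chr(ord("A") + i) for i in range(len(options))]
    lines = [f"Question: {question}"]
    lines += [f"{s}. {opt}" for s, opt in zip(letters, options)]
    lines.append("Answer:")
    return "\n".join(lines), letters

def pick_answer(symbol_logprobs):
    """Choose the answer symbol the model scores highest."""
    return max(symbol_logprobs, key=symbol_logprobs.get)

prompt, symbols = build_mcq_prompt(
    "What is the capital of France?",
    ["London", "Paris", "Rome"],
)
print(prompt)
# Hypothetical per-symbol log-probabilities from an LLM:
print(pick_answer({"A": -4.2, "B": -0.3, "C": -3.8}))  # → B
```

In practice the per-symbol scores would come from the model's next-token log-probabilities after the "Answer:" cue, rather than from a hand-written dictionary as here.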

An Information-theoretic Approach to Prompt Engineering Without Ground Truth Labels

no code implementations ACL 2022 Taylor Sorensen, Joshua Robinson, Christopher Michael Rytting, Alexander Glenn Shaw, Kyle Jeffrey Rogers, Alexia Pauline Delorey, Mahmoud Khalil, Nancy Fulda, David Wingate

Pre-trained language models derive substantial linguistic and factual knowledge from the massive corpora on which they are trained, and prompt engineering seeks to align these models to specific tasks.

Prompt Engineering

Leveraging the Inductive Bias of Large Language Models for Abstract Textual Reasoning

no code implementations NeurIPS 2021 Christopher Michael Rytting, David Wingate

Large natural language models (such as GPT-3 or T5) demonstrate impressive abilities across a range of general NLP tasks.

Inductive Bias
