Search Results for author: Yanda Chen

Found 10 papers, 4 papers with code

Social Orientation: A New Feature for Dialogue Analysis

no code implementations • 26 Feb 2024 • Todd Morrill, Zhaoyuan Deng, Yanda Chen, Amith Ananthram, Colin Wayne Leach, Kathleen McKeown

Based on these results showing the utility of social orientation tags for dialogue outcome prediction tasks, we release our data sets, code, and models that are fine-tuned to predict social orientation tags on dialogue utterances.

Parallel Structures in Pre-training Data Yield In-Context Learning

no code implementations • 19 Feb 2024 • Yanda Chen, Chen Zhao, Zhou Yu, Kathleen McKeown, He He

Pre-trained language models (LMs) are capable of in-context learning (ICL): they can adapt to a task with only a few examples given in the prompt without any parameter update.

In-Context Learning
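The abstract above describes in-context learning: the model adapts to a task from a few demonstrations placed in the prompt, with no parameter update. A minimal sketch of how such a prompt is assembled (the sentiment task and labels here are illustrative assumptions, not from the paper):

```python
def build_icl_prompt(demos, query):
    """Format few-shot demonstrations followed by the test input.

    `demos` is a list of (input, label) pairs shown to the model in the
    prompt; the model is expected to complete the final "Label:" line.
    """
    blocks = [f"Input: {x}\nLabel: {y}" for x, y in demos]
    blocks.append(f"Input: {query}\nLabel:")
    return "\n\n".join(blocks)

# Hypothetical sentiment demonstrations (illustrative only).
demos = [("great movie", "positive"), ("boring plot", "negative")]
prompt = build_icl_prompt(demos, "loved the acting")
print(prompt)
```

Feeding `prompt` to a pre-trained LM and reading its completion of the last line is the standard ICL setup; no gradient step is taken.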

Towards Consistent Natural-Language Explanations via Explanation-Consistency Finetuning

1 code implementation • 25 Jan 2024 • Yanda Chen, Chandan Singh, Xiaodong Liu, Simiao Zuo, Bin Yu, He He, Jianfeng Gao

We propose explanation-consistency finetuning (EC-finetuning), a method that adapts LLMs to generate more consistent natural-language explanations on related examples.

Question Answering

Do Models Explain Themselves? Counterfactual Simulatability of Natural Language Explanations

no code implementations • 17 Jul 2023 • Yanda Chen, Ruiqi Zhong, Narutatsu Ri, Chen Zhao, He He, Jacob Steinhardt, Zhou Yu, Kathleen McKeown

To answer these questions, we propose to evaluate counterfactual simulatability of natural language explanations: whether an explanation can enable humans to precisely infer the model's outputs on diverse counterfactuals of the explained input.

counterfactual

In-context Learning Distillation: Transferring Few-shot Learning Ability of Pre-trained Language Models

no code implementations • 20 Dec 2022 • Yukun Huang, Yanda Chen, Zhou Yu, Kathleen McKeown

We propose to combine in-context learning objectives with language modeling objectives to distill both the ability to read in-context examples and task knowledge to the smaller models.

Few-Shot Learning • In-Context Learning • +1
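The abstract above describes combining an in-context learning objective with a language-modeling objective during distillation. A hedged sketch of that combination (the weighting scheme and `alpha` value are illustrative assumptions, not the paper's exact formulation):

```python
def combined_loss(icl_loss, lm_loss, alpha=0.5):
    """Weighted sum of the two distillation objectives.

    `icl_loss` penalizes the student's predictions on in-context
    demonstrations; `lm_loss` is a standard language-modeling loss that
    transfers task knowledge. `alpha` trades off the two terms.
    """
    return alpha * icl_loss + (1.0 - alpha) * lm_loss

# Example: equal weighting of a 2.0 ICL loss and a 4.0 LM loss.
print(combined_loss(2.0, 4.0))  # 0.5*2.0 + 0.5*4.0 = 3.0
```

In practice both terms would be computed from the student model's logits and backpropagated jointly; this scalar version only shows the weighting.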

On the Relation between Sensitivity and Accuracy in In-context Learning

1 code implementation • 16 Sep 2022 • Yanda Chen, Chen Zhao, Zhou Yu, Kathleen McKeown, He He

In-context learning (ICL) suffers from oversensitivity to the prompt, making it unreliable in real-world scenarios.

In-Context Learning • Relation

Improved Synthetic Training for Reading Comprehension

no code implementations • 24 Oct 2020 • Yanda Chen, Md Arafat Sultan, Vittorio Castelli

Automatically generated synthetic training examples have been shown to improve performance in machine reading comprehension (MRC).

Knowledge Distillation • Machine Reading Comprehension

Detecting and Reducing Bias in a High Stakes Domain

1 code implementation • IJCNLP 2019 • Ruiqi Zhong, Yanda Chen, Desmond Patton, Charlotte Selous, Kathy Mckeown

Gang-involved youth in cities such as Chicago sometimes post on social media to express their aggression towards rival gangs and previous research has demonstrated that a deep learning approach can predict aggression and loss in posts.

