Search Results for author: Lingyu Gao

Found 9 papers, 5 papers with code

Ambiguity-Aware In-Context Learning with Large Language Models

no code implementations 14 Sep 2023 Lingyu Gao, Aditi Chaudhary, Krishna Srinivasan, Kazuma Hashimoto, Karthik Raman, Michael Bendersky

In-context learning (ICL), i.e., showing LLMs only a few task-specific demonstrations, has led to downstream gains with no task-specific fine-tuning required.

In-Context Learning, Semantic Similarity, +3
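
The excerpt above describes in-context learning as prompting an LLM with a handful of labeled demonstrations rather than fine-tuning it. Below is a minimal, generic sketch of how such a prompt might be assembled for a sentiment task; the task, labels, and demonstrations are invented for illustration and are not from the paper, which concerns how demonstration selection can account for the model's ambiguity about the test input.

```python
# Minimal sketch of in-context learning (ICL): build a prompt that shows the
# model a few labeled demonstrations before the test input. The task, labels,
# and demonstration set below are made up for illustration.

demonstrations = [
    ("The plot was predictable and the acting wooden.", "negative"),
    ("A heartfelt story with stunning cinematography.", "positive"),
    ("I lost interest halfway through.", "negative"),
]

test_input = "An inventive script carried by two great lead performances."

def build_icl_prompt(demos, query):
    """Concatenate a task instruction, labeled demonstrations, and the query."""
    lines = ["Classify the sentiment of each review as positive or negative.", ""]
    for text, label in demos:
        lines.append(f"Review: {text}")
        lines.append(f"Sentiment: {label}")
        lines.append("")
    lines.append(f"Review: {query}")
    lines.append("Sentiment:")  # the LLM completes this line
    return "\n".join(lines)

print(build_icl_prompt(demonstrations, test_input))
```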

ToMChallenges: A Principle-Guided Dataset and Diverse Evaluation Tasks for Exploring Theory of Mind

no code implementations 24 May 2023 Xiaomeng Ma, Lingyu Gao, Qihui Xu

In addition, we aim to raise awareness about evaluating ToM in LLMs and to invite further discussion on how to design prompts and tasks that can better assess LLMs' ToM abilities.

Multiple-choice Question Answering

The Benefits of Label-Description Training for Zero-Shot Text Classification

1 code implementation 3 May 2023 Lingyu Gao, Debanjan Ghosh, Kevin Gimpel

Pretrained language models have improved zero-shot text classification by allowing the transfer of semantic knowledge from the training data in order to classify among specific label sets in downstream tasks.

Domain Classification, Text Classification, +3
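
For context, a common zero-shot text classification setup, not the label-description training proposed in this paper, scores descriptive label strings against the input with an off-the-shelf NLI model. The model name, text, and labels below are illustrative assumptions.

```python
# A hedged sketch of zero-shot text classification with an NLI model: each
# candidate label (written as a short natural-language description) is scored
# by how strongly the input text entails it. This is a generic baseline setup,
# not the label-description training method from the paper.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

text = "The quarterback threw for three touchdowns in the fourth quarter."
labels = ["a story about sports", "a story about politics", "a story about technology"]

result = classifier(text, candidate_labels=labels)
for label, score in zip(result["labels"], result["scores"]):
    print(f"{label}: {score:.3f}")
```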

Evaluating Transformer Models and Human Behaviors on Chinese Character Naming

1 code implementation 22 Mar 2023 Xiaomeng Ma, Lingyu Gao

These results suggest that transformer models can capture humans' character naming behavior well.

How do we get there? Evaluating transformer neural networks as cognitive models for English past tense inflection

1 code implementation 17 Oct 2022 Xiaomeng Ma, Lingyu Gao

The models' different behaviors on regular and irregular verbs suggest some degree of symbolic learning of verb regularity.

Distractor Analysis and Selection for Multiple-Choice Cloze Questions for Second-Language Learners

no code implementations WS 2020 Lingyu Gao, Kevin Gimpel, Arnar Jensson

Simple features of the distractor and correct answer correlate with the annotations, though we find substantial benefit from additionally using large-scale pretrained models to measure the fit of the distractor in the context.

Multiple-choice
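
As a hedged illustration of the last point, one way to measure how well a distractor fits a cloze context is to score each candidate in the blank with a pretrained masked language model. The model, sentence, and candidates below are invented for illustration and are not the paper's exact setup.

```python
# Sketch: score cloze candidates by their masked-LM probability in the blank.
# Higher probability = better fit in context; good distractors are typically
# plausible but less probable than the correct answer. In this simple sketch,
# each candidate must be a single wordpiece in the model's vocabulary.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

context = f"She poured a cup of {tokenizer.mask_token} before leaving for work."
candidates = ["coffee", "tea", "juice"]  # correct answer plus distractors

inputs = tokenizer(context, return_tensors="pt")
mask_index = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]

with torch.no_grad():
    logits = model(**inputs).logits
probs = torch.softmax(logits[0, mask_index], dim=-1).squeeze(0)

for word in candidates:
    token_id = tokenizer.convert_tokens_to_ids(word)
    print(f"{word}: {probs[token_id].item():.4f}")
```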

A Cross-Task Analysis of Text Span Representations

1 code implementation WS 2020 Shubham Toshniwal, Haoyue Shi, Bowen Shi, Lingyu Gao, Karen Livescu, Kevin Gimpel

Many natural language processing (NLP) tasks involve reasoning with textual spans, including question answering, entity recognition, and coreference resolution.

Coreference Resolution, Question Answering
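
As a rough sketch of what a text span representation is, the code below pools contextual token embeddings over a span in two common ways, mean pooling and endpoint concatenation; these are the kind of span representation choices the paper analyzes across tasks. The encoder and example sentence are illustrative assumptions.

```python
# Sketch: build a span representation from contextual token embeddings.
# Mean pooling and endpoint concatenation are two common choices; the encoder
# and example span here are illustrative, not the paper's exact configuration.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

sentence = "Barack Obama was born in Hawaii."
inputs = tokenizer(sentence, return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state[0]  # (seq_len, hidden_dim)

# Suppose the span of interest is "Barack Obama"; locate its token positions
# by matching its wordpieces inside the full tokenized sentence.
span_tokens = tokenizer("Barack Obama", add_special_tokens=False)["input_ids"]
ids = inputs["input_ids"][0].tolist()
start = next(i for i in range(len(ids)) if ids[i:i + len(span_tokens)] == span_tokens)
end = start + len(span_tokens) - 1  # inclusive

mean_pool = hidden[start:end + 1].mean(dim=0)                # average over span tokens
endpoints = torch.cat([hidden[start], hidden[end]], dim=-1)  # concat first/last token

print(mean_pool.shape, endpoints.shape)  # hidden_dim and 2 * hidden_dim
```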
