1 code implementation • *SEM (NAACL) 2022 • Lingyu Gao, Debanjan Ghosh, Kevin Gimpel
We propose a type-controlled framework for inquisitive question generation.
no code implementations • 14 Sep 2023 • Lingyu Gao, Aditi Chaudhary, Krishna Srinivasan, Kazuma Hashimoto, Karthik Raman, Michael Bendersky
In-context learning (ICL), i.e., showing LLMs only a few task-specific demonstrations, has led to downstream gains with no task-specific fine-tuning required.
no code implementations • 24 May 2023 • Xiaomeng Ma, Lingyu Gao, Qihui Xu
In addition, we aim to raise awareness about evaluating theory of mind (ToM) in LLMs, and we invite further discussion on how to design prompts and tasks that better assess this ability.
1 code implementation • 3 May 2023 • Lingyu Gao, Debanjan Ghosh, Kevin Gimpel
Pretrained language models have improved zero-shot text classification by allowing the transfer of semantic knowledge from the training data in order to classify among specific label sets in downstream tasks.
1 code implementation • 22 Mar 2023 • Xiaomeng Ma, Lingyu Gao
These results suggest that transformer models can capture humans' character-naming behavior well.
1 code implementation • 17 Oct 2022 • Xiaomeng Ma, Lingyu Gao
The models' different behaviors on regular and irregular verbs suggest that they exhibit some degree of symbolic learning of verb regularity.
no code implementations • WS 2020 • Lingyu Gao, Kevin Gimpel, Arnar Jensson
Simple features of the distractor and correct answer correlate with the annotations, though we find substantial benefit to additionally using large-scale pretrained models to measure the fit of the distractor in the context.
1 code implementation • WS 2020 • Shubham Toshniwal, Haoyue Shi, Bowen Shi, Lingyu Gao, Karen Livescu, Kevin Gimpel
Many natural language processing (NLP) tasks involve reasoning with textual spans, including question answering, entity recognition, and coreference resolution.