Search Results for author: Yi Luan

Found 20 papers, 8 papers with code

Can Pre-trained Vision and Language Models Answer Visual Information-Seeking Questions?

no code implementations23 Feb 2023 Yang Chen, Hexiang Hu, Yi Luan, Haitian Sun, Soravit Changpinyo, Alan Ritter, Ming-Wei Chang

Our analysis shows that it is challenging for state-of-the-art multi-modal pre-trained models to answer visual information-seeking questions, but this capability is improved through fine-tuning on the automated InfoSeek dataset.

Common Sense Reasoning · Question Answering +2

Promptagator: Few-shot Dense Retrieval From 8 Examples

no code implementations23 Sep 2022 Zhuyun Dai, Vincent Y. Zhao, Ji Ma, Yi Luan, Jianmo Ni, Jing Lu, Anton Bakalov, Kelvin Guu, Keith B. Hall, Ming-Wei Chang

To amplify the power of a few examples, we propose Prompt-based Query Generation for Retriever (Promptagator), which leverages large language models (LLMs) as few-shot query generators and creates task-specific retrievers based on the generated data.

Information Retrieval · Natural Questions +1
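The core of the approach described above is prompting an LLM with a handful of (document, query) pairs so it continues the pattern and generates task-specific queries for unlabeled documents. A minimal sketch of the few-shot prompt assembly; the example texts and the function name are illustrative placeholders, not from the paper, and the LLM call itself is omitted:

```python
def build_query_generation_prompt(examples, document):
    """Assemble a few-shot prompt: k (document, query) pairs, then the new document.

    `examples` is a list of (document_text, query_text) pairs; an LLM given
    this prompt is expected to continue the pattern and emit a query for
    `document`.
    """
    parts = []
    for doc_text, query_text in examples:
        parts.append(f"Document: {doc_text}\nQuery: {query_text}")
    # The final document has no query: the LLM completes it.
    parts.append(f"Document: {document}\nQuery:")
    return "\n\n".join(parts)

# Hypothetical few-shot examples (illustrative only).
examples = [
    ("The mitochondrion is the powerhouse of the cell.",
     "what organelle produces energy in a cell"),
    ("Water boils at 100 degrees Celsius at sea level.",
     "boiling point of water at sea level"),
]
prompt = build_query_generation_prompt(
    examples, "The Great Wall of China is over 13,000 miles long.")
print(prompt)
```

The generated queries would then serve as synthetic training data for a dense retriever on the target task.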

ASQA: Factoid Questions Meet Long-Form Answers

no code implementations12 Apr 2022 Ivan Stelmakh, Yi Luan, Bhuwan Dhingra, Ming-Wei Chang

In contrast to existing long-form QA tasks (such as ELI5), ASQA admits a clear notion of correctness: a user faced with a good summary should be able to answer different interpretations of the original ambiguous question.

Question Answering

CONQRR: Conversational Query Rewriting for Retrieval with Reinforcement Learning

no code implementations16 Dec 2021 Zeqiu Wu, Yi Luan, Hannah Rashkin, David Reitter, Hannaneh Hajishirzi, Mari Ostendorf, Gaurav Singh Tomar

Compared to standard retrieval tasks, passage retrieval for conversational question answering (CQA) poses new challenges in understanding the current user question, as each question needs to be interpreted within the dialogue context.

Conversational Question Answering · Passage Retrieval +3

Large Dual Encoders Are Generalizable Retrievers

2 code implementations15 Dec 2021 Jianmo Ni, Chen Qu, Jing Lu, Zhuyun Dai, Gustavo Hernández Ábrego, Ji Ma, Vincent Y. Zhao, Yi Luan, Keith B. Hall, Ming-Wei Chang, Yinfei Yang

With multi-stage training, surprisingly, scaling up the model size brings significant improvement on a variety of retrieval tasks, especially for out-of-domain generalization.

Domain Generalization · Retrieval +1

Sparse, Dense, and Attentional Representations for Text Retrieval

1 code implementation1 May 2020 Yi Luan, Jacob Eisenstein, Kristina Toutanova, Michael Collins

Dual encoders perform retrieval by encoding documents and queries into dense low-dimensional vectors, scoring each document by its inner product with the query.

Open-Domain Question Answering · Retrieval +1
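The dual-encoder scoring described in the snippet above reduces, at retrieval time, to an inner product between precomputed embeddings. A minimal sketch with toy vectors (the encoders themselves are assumed; here the embeddings are given directly):

```python
import numpy as np

def dual_encoder_scores(query_vec: np.ndarray, doc_vecs: np.ndarray) -> np.ndarray:
    """Score every document by its inner product with the query vector."""
    return doc_vecs @ query_vec

def top_k(query_vec: np.ndarray, doc_vecs: np.ndarray, k: int = 3) -> np.ndarray:
    """Return indices of the k highest-scoring documents."""
    scores = dual_encoder_scores(query_vec, doc_vecs)
    return np.argsort(-scores)[:k]

# Toy example: 4 documents embedded in a 3-dimensional space.
docs = np.array([[1.0, 0.0, 0.0],
                 [0.0, 1.0, 0.0],
                 [0.9, 0.1, 0.0],
                 [0.0, 0.0, 1.0]])
query = np.array([1.0, 0.0, 0.0])
print(top_k(query, docs, k=2))  # documents 0 and 2 score highest
```

In practice the document vectors are precomputed offline and the argmax is served with an approximate nearest-neighbor index rather than a full matrix product.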

Contextualized Representations Using Textual Encyclopedic Knowledge

no code implementations24 Apr 2020 Mandar Joshi, Kenton Lee, Yi Luan, Kristina Toutanova

We present a method to represent input texts by contextualizing them jointly with dynamically retrieved textual encyclopedic background knowledge from multiple documents.

Language Modelling · Reading Comprehension +1

Entity, Relation, and Event Extraction with Contextualized Span Representations

2 code implementations IJCNLP 2019 David Wadden, Ulme Wennberg, Yi Luan, Hannaneh Hajishirzi

We examine the capabilities of a unified, multi-task framework for three information extraction tasks: named entity recognition, relation extraction, and event extraction.

Event Extraction · Joint Entity and Relation Extraction +3

PaperRobot: Incremental Draft Generation of Scientific Ideas

2 code implementations ACL 2019 Qingyun Wang, Lifu Huang, Zhiying Jiang, Kevin Knight, Heng Ji, Mohit Bansal, Yi Luan

We present PaperRobot, an automatic research assistant that (1) conducts deep understanding of a large collection of human-written papers in a target domain and constructs comprehensive background knowledge graphs (KGs); (2) creates new ideas by predicting links from the background KGs, combining graph attention and contextual text attention; and (3) incrementally writes key elements of a new paper using memory-attention networks: from the input title and predicted related entities to a paper abstract, from the abstract to the conclusion and future work, and finally from the future work to a title for a follow-on paper.

Graph Attention · Knowledge Graphs +4

A General Framework for Information Extraction using Dynamic Span Graphs

3 code implementations NAACL 2019 Yi Luan, Dave Wadden, Luheng He, Amy Shah, Mari Ostendorf, Hannaneh Hajishirzi

We introduce a general framework for several information extraction tasks that share span representations using dynamically constructed span graphs.

Ranked #1 on Relation Extraction on ACE 2004 (Cross Sentence metric)

Joint Entity and Relation Extraction · Named Entity Recognition (NER)

Text Generation from Knowledge Graphs with Graph Transformers

3 code implementations NAACL 2019 Rik Koncel-Kedziorski, Dhanush Bekal, Yi Luan, Mirella Lapata, Hannaneh Hajishirzi

Generating texts which express complex ideas spanning multiple sentences requires a structured representation of their content (document plan), but these representations are prohibitively expensive to manually produce.

Dialogue Generation · KG-to-Text Generation +1

Monolingual sentence matching for text simplification

no code implementations19 Sep 2018 Yonghui Huang, Yunhui Li, Yi Luan

This work improves monolingual sentence alignment for text simplification, specifically for text in standard and simple Wikipedia.

Text Simplification

Scientific Relation Extraction with Selectively Incorporated Concept Embeddings

no code implementations26 Aug 2018 Yi Luan, Mari Ostendorf, Hannaneh Hajishirzi

This paper describes our submission for the SemEval 2018 Task 7 shared task on semantic relation extraction and classification in scientific papers.

Classification · General Classification +1

Multi-Task Learning for Speaker-Role Adaptation in Neural Conversation Models

no code implementations IJCNLP 2017 Yi Luan, Chris Brockett, Bill Dolan, Jianfeng Gao, Michel Galley

Building a persona-based conversation agent is challenging owing to the lack of large amounts of speaker-specific conversation data for model training.

Multi-Task Learning

Scientific Information Extraction with Semi-supervised Neural Tagging

no code implementations EMNLP 2017 Yi Luan, Mari Ostendorf, Hannaneh Hajishirzi

This paper addresses the problem of extracting keyphrases from scientific articles and categorizing them as corresponding to a task, process, or material.

named-entity-recognition · Named Entity Recognition +1

LSTM based Conversation Models

1 code implementation31 Mar 2016 Yi Luan, Yangfeng Ji, Mari Ostendorf

In this paper, we present a conversational model that incorporates both context and participant role for two-party conversations.

Language Modelling · Text Generation
