Search Results for author: Zeqiu Wu

Found 15 papers, 8 papers with code

Training Language Models to Generate Text with Citations via Fine-grained Rewards

no code implementations • 6 Feb 2024 • Chengyu Huang, Zeqiu Wu, Yushi Hu, Wenya Wang

While recent Large Language Models (LLMs) have proven useful in answering user queries, they are prone to hallucination, and their responses often lack credibility due to missing references to reliable sources.

Hallucination • Question Answering

Self-RAG: Learning to Retrieve, Generate, and Critique through Self-Reflection

2 code implementations • 17 Oct 2023 • Akari Asai, Zeqiu Wu, Yizhong Wang, Avirup Sil, Hannaneh Hajishirzi

Our framework trains a single arbitrary LM that adaptively retrieves passages on-demand, and generates and reflects on retrieved passages and its own generations using special tokens, called reflection tokens.

Fact Verification • Response Generation • +1
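
A minimal, illustrative sketch of the adaptive retrieval-and-critique loop described in the Self-RAG abstract above. Every interface here (predict_reflection_token, generate_segment, critique, retriever.search) is a hypothetical placeholder for exposition, not the released Self-RAG API.

    # Sketch only: Self-RAG-style inference with reflection tokens
    # (assumed interfaces, not the authors' implementation).
    def self_rag_answer(lm, retriever, question, max_segments=5):
        answer_segments = []
        context = question
        for _ in range(max_segments):
            # Reflection token: does the next segment need retrieved evidence?
            needs_retrieval = lm.predict_reflection_token(context) == "[Retrieve]"
            passages = retriever.search(context, k=3) if needs_retrieval else [None]

            # Generate one candidate continuation per passage, then let the LM
            # critique each candidate with its own reflection tokens.
            scored = []
            for passage in passages:
                segment = lm.generate_segment(context, passage=passage)
                score = lm.critique(context, passage=passage, segment=segment)
                scored.append((score, segment))

            best_score, best_segment = max(scored, key=lambda pair: pair[0])
            answer_segments.append(best_segment)
            context = f"{context} {best_segment}"
        return " ".join(answer_segments)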

Does Collaborative Human-LM Dialogue Generation Help Information Extraction from Human Dialogues?

no code implementations • 13 Jul 2023 • Bo-Ru Lu, Nikita Haduong, Chia-Hsuan Lee, Zeqiu Wu, Hao Cheng, Paul Koester, Jean Utke, Tao Yu, Noah A. Smith, Mari Ostendorf

The capabilities of pretrained language models have opened opportunities to explore new application areas, but applications involving human-human interaction are limited by the fact that most data is protected from public release for privacy reasons.

Dialogue Generation • Dialogue State Tracking • +1

Fine-Grained Human Feedback Gives Better Rewards for Language Model Training

no code implementations • NeurIPS 2023 • Zeqiu Wu, Yushi Hu, Weijia Shi, Nouha Dziri, Alane Suhr, Prithviraj Ammanabrolu, Noah A. Smith, Mari Ostendorf, Hannaneh Hajishirzi

We introduce Fine-Grained RLHF, a framework that enables training and learning from reward functions that are fine-grained in two respects: (1) density, providing a reward after every segment (e.g., a sentence) is generated; and (2) incorporating multiple reward models associated with different feedback types (e.g., factual incorrectness, irrelevance, and information incompleteness).

Language Modelling • Long Form Question Answering • +2
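
A toy sketch of the density idea from the abstract above: the response is split into segments and each segment gets a reward combined from several feedback-specific reward models. The function names, typing, and weighting scheme are illustrative assumptions, not the paper's released code.

    # Sketch only: assemble dense, multi-type rewards (hypothetical interfaces).
    from typing import Callable, Dict, List

    def fine_grained_rewards(
        segments: List[str],                                # response split into sentences
        reward_models: Dict[str, Callable[[str], float]],   # e.g. factuality, relevance
        weights: Dict[str, float],                          # mixing weight per feedback type
    ) -> List[float]:
        """Return one combined reward per segment rather than a single
        sequence-level reward, giving RL training a denser signal."""
        per_segment = []
        for segment in segments:
            reward = sum(weights[name] * model(segment)
                         for name, model in reward_models.items())
            per_segment.append(reward)
        return per_segment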

CONQRR: Conversational Query Rewriting for Retrieval with Reinforcement Learning

no code implementations • 16 Dec 2021 • Zeqiu Wu, Yi Luan, Hannah Rashkin, David Reitter, Hannaneh Hajishirzi, Mari Ostendorf, Gaurav Singh Tomar

Compared to standard retrieval tasks, passage retrieval for conversational question answering (CQA) poses new challenges in understanding the current user question, as each question needs to be interpreted within the dialogue context.

Conversational Question Answering • Passage Retrieval • +3

DIALKI: Knowledge Identification in Conversational Systems through Dialogue-Document Contextualization

1 code implementation • EMNLP 2021 • Zeqiu Wu, Bo-Ru Lu, Hannaneh Hajishirzi, Mari Ostendorf

Identifying relevant knowledge to be used in conversational systems that are grounded in long documents is critical to effective response generation.

Response Generation

Automatic Document Sketching: Generating Drafts from Analogous Texts

no code implementations • Findings (ACL) 2021 • Zeqiu Wu, Michel Galley, Chris Brockett, Yizhe Zhang, Bill Dolan

The advent of large pre-trained language models has made it possible to make high-quality predictions on how to add or change a sentence in a document.

Reinforcement Learning (RL) • Sentence • +1

Extracting Summary Knowledge Graphs from Long Documents

1 code implementation • 19 Sep 2020 • Zeqiu Wu, Rik Koncel-Kedziorski, Mari Ostendorf, Hannaneh Hajishirzi

Knowledge graphs capture entities and relations from long documents and can facilitate reasoning in many downstream applications.

Graph Learning • Knowledge Graphs • +1

A Controllable Model of Grounded Response Generation

1 code implementation • 1 May 2020 • Zeqiu Wu, Michel Galley, Chris Brockett, Yizhe Zhang, Xiang Gao, Chris Quirk, Rik Koncel-Kedziorski, Jianfeng Gao, Hannaneh Hajishirzi, Mari Ostendorf, Bill Dolan

Current end-to-end neural conversation models inherently lack the flexibility to impose semantic control in the response generation process, often resulting in uninteresting responses.

Informativeness • Response Generation

SetExpan: Corpus-Based Set Expansion via Context Feature Selection and Rank Ensemble

1 code implementation • 17 Oct 2019 • Jiaming Shen, Zeqiu Wu, Dongming Lei, Jingbo Shang, Xiang Ren, Jiawei Han

In this study, we propose a novel framework, SetExpan, which tackles this problem with two techniques: (1) a context feature selection method that selects clean context features for calculating entity-entity distributional similarity, and (2) a ranking-based unsupervised ensemble method for expanding the entity set based on denoised context features.

feature selection • Question Answering
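
A rough sketch of the two ingredients named in the SetExpan abstract above, using simple co-occurrence counts and reciprocal-rank voting as stand-ins for the paper's actual feature-scoring and ensembling functions.

    # Simplified stand-in for the SetExpan pipeline (not the released code):
    # (1) keep context features shared by many current seed entities,
    # (2) merge several candidate rankings by mean reciprocal rank.
    from collections import defaultdict
    from typing import Dict, Iterable, List

    def select_context_features(seeds: Iterable[str],
                                entity_features: Dict[str, List[str]],
                                top_k: int = 50) -> List[str]:
        scores = defaultdict(int)
        for entity in seeds:
            for feature in entity_features.get(entity, []):
                scores[feature] += 1              # proxy for distributional similarity
        return sorted(scores, key=scores.get, reverse=True)[:top_k]

    def rank_ensemble(rank_lists: List[List[str]], top_n: int = 20) -> List[str]:
        scores = defaultdict(float)
        for ranking in rank_lists:
            for position, entity in enumerate(ranking, start=1):
                scores[entity] += 1.0 / position  # reciprocal-rank voting
        return sorted(scores, key=scores.get, reverse=True)[:top_n]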

Indirect Supervision for Relation Extraction using Question-Answer Pairs

2 code implementations • 30 Oct 2017 • Zeqiu Wu, Xiang Ren, Frank F. Xu, Ji Li, Jiawei Han

However, due to the incompleteness of knowledge bases and the context-agnostic labeling, the training data collected via distant supervision (DS) can be very noisy.

Question Answering • Relation • +1

CoType: Joint Extraction of Typed Entities and Relations with Knowledge Bases

2 code implementations • 27 Oct 2016 • Xiang Ren, Zeqiu Wu, Wenqi He, Meng Qu, Clare R. Voss, Heng Ji, Tarek F. Abdelzaher, Jiawei Han

We propose a novel domain-independent framework, called CoType, that runs a data-driven text segmentation algorithm to extract entity mentions, and jointly embeds entity mentions, relation mentions, text features and type labels into two low-dimensional spaces (for entity and relation mentions respectively), where, in each space, objects whose types are close will also have similar representations.

Joint Entity and Relation Extraction • Relation • +1
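
A small numpy sketch of the type-inference step implied by the CoType abstract above: once mentions and type labels share a low-dimensional space, each mention takes the type whose embedding it is closest to. The cosine-similarity setup is an illustrative assumption, not CoType's released implementation.

    # Sketch only: embedding-space type assignment (assumed setup).
    import numpy as np

    def infer_types(mention_vecs: np.ndarray,   # shape (num_mentions, dim)
                    type_vecs: np.ndarray,      # shape (num_types, dim)
                    type_names: list) -> list:
        """Assign each mention the type whose embedding is most similar
        (cosine similarity) in the shared space."""
        m = mention_vecs / np.linalg.norm(mention_vecs, axis=1, keepdims=True)
        t = type_vecs / np.linalg.norm(type_vecs, axis=1, keepdims=True)
        similarity = m @ t.T                    # (num_mentions, num_types)
        return [type_names[i] for i in similarity.argmax(axis=1)]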
