Search Results for author: Danqi Chen

Found 31 papers, 23 papers with code

Recovering Private Text in Federated Learning of Language Models

no code implementations17 May 2022 Samyak Gupta, Yangsibo Huang, Zexuan Zhong, Tianyu Gao, Kai Li, Danqi Chen

In this paper, we present FILM, a novel attack method for federated learning of language models: for the first time, we show the feasibility of recovering text from large batches of up to 128 sentences.

Federated Learning

Can Rationalization Improve Robustness?

1 code implementation25 Apr 2022 Howard Chen, Jacqueline He, Karthik Narasimhan, Danqi Chen

Our experiments reveal that rationale models show promise for improving robustness, but they struggle in certain scenarios, such as when the rationalizer is sensitive to positional bias or to the lexical choices of the attack text.

Structured Pruning Learns Compact and Accurate Models

1 code implementation ACL 2022 Mengzhou Xia, Zexuan Zhong, Danqi Chen

The growing size of neural language models has led to increased attention in model compression.

Model Compression

Should You Mask 15% in Masked Language Modeling?

no code implementations16 Feb 2022 Alexander Wettig, Tianyu Gao, Zexuan Zhong, Danqi Chen

Masked language models conventionally use a masking rate of 15% due to the belief that more masking would provide insufficient context to learn good representations, and less masking would make training too expensive.

Language Modelling, Masked Language Modeling
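The 15% masking convention described above can be sketched in a few lines. This is an illustrative toy masking function, not the paper's implementation; the function name and `[MASK]` placeholder are assumptions for demonstration.

```python
import random

def mask_tokens(tokens, mask_rate=0.15, mask_token="[MASK]", seed=0):
    """Randomly replace a fraction of tokens with a mask token, as in
    MLM pre-training; the paper asks whether 15% is actually optimal.
    Returns the masked sequence and a {position: original_token} map."""
    rng = random.Random(seed)
    n = max(1, round(len(tokens) * mask_rate))
    positions = rng.sample(range(len(tokens)), n)
    masked = list(tokens)
    labels = {}
    for i in positions:
        labels[i] = masked[i]
        masked[i] = mask_token
    return masked, labels
```

Raising `mask_rate` above 0.15 removes more context per example but yields more prediction targets per pass, which is the trade-off the paper investigates.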

Ditch the Gold Standard: Re-evaluating Conversational Question Answering

2 code implementations ACL 2022 Huihan Li, Tianyu Gao, Manan Goenka, Danqi Chen

In this work, we conduct the first large-scale human evaluation of state-of-the-art conversational QA systems, where human evaluators converse with models and judge the correctness of their answers.

Question Rewriting

Single-dataset Experts for Multi-dataset Question Answering

1 code implementation EMNLP 2021 Dan Friedman, Ben Dodge, Danqi Chen

Many datasets have been created for training reading comprehension models, and a natural question is whether we can combine them to build models that (1) perform better on all of the training datasets and (2) generalize and transfer better to new datasets.

Question Answering, Reading Comprehension

Simple Entity-Centric Questions Challenge Dense Retrievers

1 code implementation EMNLP 2021 Christopher Sciavolino, Zexuan Zhong, Jinhyuk Lee, Danqi Chen

Open-domain question answering has exploded in popularity recently due to the success of dense retrieval models, which have surpassed sparse models using only a few supervised training examples.

Data Augmentation, Open-Domain Question Answering

Non-Parametric Few-Shot Learning for Word Sense Disambiguation

1 code implementation NAACL 2021 Howard Chen, Mengzhou Xia, Danqi Chen

One significant challenge in supervised all-words WSD is classifying among senses for the majority of words, which lie in the long tail of the sense distribution.

Few-Shot Learning, Word Sense Disambiguation

SimCSE: Simple Contrastive Learning of Sentence Embeddings

13 code implementations EMNLP 2021 Tianyu Gao, Xingcheng Yao, Danqi Chen

This paper presents SimCSE, a simple contrastive learning framework that greatly advances state-of-the-art sentence embeddings.

Contrastive Learning, Data Augmentation, +4
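The contrastive objective behind SimCSE can be sketched as an InfoNCE loss where the same sentence, embedded twice under independent dropout masks, forms a positive pair and other in-batch sentences serve as negatives. A minimal NumPy sketch (illustrative only; the function name and temperature default are assumptions, not the authors' code):

```python
import numpy as np

def simcse_loss(z1, z2, temperature=0.05):
    """Unsupervised SimCSE-style InfoNCE loss (sketch).
    z1, z2: (batch, dim) embeddings of the SAME sentences under two
    independent dropout masks; other rows act as in-batch negatives."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = (z1 @ z2.T) / temperature           # (batch, batch) cosine sims
    # cross-entropy with the diagonal (matching pair) as the target class
    logits = sim - sim.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))
```

The loss is minimized when each sentence is closest to its own second view, which is what pulls paraphrase-like pairs together in embedding space.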

Factual Probing Is [MASK]: Learning vs. Learning to Recall

2 code implementations NAACL 2021 Zexuan Zhong, Dan Friedman, Danqi Chen

Petroni et al. (2019) demonstrated that world facts can be retrieved from a pre-trained language model by expressing them as cloze-style prompts, and that the model's prediction accuracy can be interpreted as a lower bound on the amount of factual information it encodes.

Language Modelling

Making Pre-trained Language Models Better Few-shot Learners

5 code implementations ACL 2021 Tianyu Gao, Adam Fisch, Danqi Chen

We present LM-BFF (better few-shot fine-tuning of language models), a suite of simple and complementary techniques for fine-tuning language models on a small number of annotated examples.

Few-Shot Learning
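The prompt-based few-shot setup used in work like LM-BFF can be sketched as: concatenate a few labeled demonstrations, then append the query with its label slot masked. A hypothetical helper (the template string and function name are illustrative assumptions, not the paper's exact format):

```python
def build_prompt(demonstrations, query, template="{s} It was {label}."):
    """Build a prompt-based few-shot input (sketch): labeled
    demonstrations first, then the query with a masked label slot
    for the pre-trained MLM to fill in."""
    parts = [template.format(s=s, label=lab) for s, lab in demonstrations]
    parts.append(template.format(s=query, label="[MASK]"))
    return " ".join(parts)
```

The model then predicts a label word (e.g. "great" vs. "terrible") at the `[MASK]` position instead of learning a new classification head from scratch.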

Learning Dense Representations of Phrases at Scale

4 code implementations ACL 2021 Jinhyuk Lee, Mujeen Sung, Jaewoo Kang, Danqi Chen

Open-domain question answering can be reformulated as a phrase retrieval problem, without the need for processing documents on-demand during inference (Seo et al., 2019).

Open-Domain Question Answering, Question Generation, +3

A Frustratingly Easy Approach for Entity and Relation Extraction

2 code implementations NAACL 2021 Zexuan Zhong, Danqi Chen

Our approach essentially builds on two independent encoders and merely uses the entity model to construct the input for the relation model.

Joint Entity and Relation Extraction, Multi-Task Learning, +2
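The pipeline idea above, where the entity model's predictions are used to construct the relation model's input, is often realized by wrapping predicted spans in typed marker tokens. A minimal sketch (the marker token format and function name are illustrative assumptions):

```python
def with_typed_markers(tokens, subj, obj):
    """Build a relation-model input (sketch): wrap each predicted
    entity span in typed marker tokens so the relation encoder sees
    both the span boundaries and the predicted entity types.
    subj/obj are (start, end_inclusive, type) tuples."""
    s_start, s_end, s_type = subj
    o_start, o_end, o_type = obj
    out = []
    for i, tok in enumerate(tokens):
        if i == s_start:
            out.append(f"<S:{s_type}>")
        if i == o_start:
            out.append(f"<O:{o_type}>")
        out.append(tok)
        if i == s_end:
            out.append(f"</S:{s_type}>")
        if i == o_end:
            out.append(f"</O:{o_type}>")
    return out
```

Because the two encoders are independent, the relation model can condition on entity types without sharing parameters with the entity model.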

Open-Domain Question Answering

no code implementations ACL 2020 Danqi Chen, Wen-tau Yih

This tutorial provides a comprehensive and coherent overview of cutting-edge research in open-domain question answering (QA), the task of answering questions using a large collection of documents of diversified topics.

Open-Domain Question Answering

Dense Passage Retrieval for Open-Domain Question Answering

12 code implementations EMNLP 2020 Vladimir Karpukhin, Barlas Oğuz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, Wen-tau Yih

Open-domain question answering relies on efficient passage retrieval to select candidate contexts, where traditional sparse vector space models, such as TF-IDF or BM25, are the de facto method.

Open-Domain Question Answering, Passage Retrieval
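At inference time, the dense retrieval described above reduces to scoring every passage embedding by its inner product with the question embedding and returning the top-k. A toy NumPy sketch (illustrative; real systems index millions of vectors with approximate search rather than a dense matrix product):

```python
import numpy as np

def retrieve(q_vec, passage_vecs, k=2):
    """Dense retrieval scoring (sketch): rank passages by the inner
    product between the question embedding and each passage embedding,
    as dense retrievers like DPR do, and return the top-k indices."""
    scores = passage_vecs @ q_vec
    top = np.argsort(-scores)[:k]
    return top, scores[top]
```

This contrasts with sparse TF-IDF/BM25 scoring, where the vectors are high-dimensional term-weight vectors rather than learned dense embeddings.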

Knowledge Guided Text Retrieval and Reading for Open Domain Question Answering

7 code implementations10 Nov 2019 Sewon Min, Danqi Chen, Luke Zettlemoyer, Hannaneh Hajishirzi

We introduce an approach for open-domain question answering (QA) that retrieves and reads a passage graph, where vertices are passages of text and edges represent relationships that are derived from an external knowledge base or co-occurrence in the same article.

Open-Domain Question Answering, Reading Comprehension, +1

MRQA 2019 Shared Task: Evaluating Generalization in Reading Comprehension

1 code implementation WS 2019 Adam Fisch, Alon Talmor, Robin Jia, Minjoon Seo, Eunsol Choi, Danqi Chen

We present the results of the Machine Reading for Question Answering (MRQA) 2019 shared task on evaluating the generalization capabilities of reading comprehension systems.

Multi-Task Learning, Question Answering, +1

Reading Wikipedia to Answer Open-Domain Questions

9 code implementations ACL 2017 Danqi Chen, Adam Fisch, Jason Weston, Antoine Bordes

This paper proposes to tackle open-domain question answering using Wikipedia as the unique knowledge source: the answer to any factoid question is a text span in a Wikipedia article.

Open-Domain Question Answering, Reading Comprehension

A Thorough Examination of the CNN/Daily Mail Reading Comprehension Task

3 code implementations ACL 2016 Danqi Chen, Jason Bolton, Christopher D. Manning

Enabling a computer to understand a document so that it can answer comprehension questions is a central, yet unsolved goal of NLP.

Reading Comprehension

Reasoning With Neural Tensor Networks for Knowledge Base Completion

no code implementations NeurIPS 2013 Richard Socher, Danqi Chen, Christopher D. Manning, Andrew Ng

We assess the model by considering the problem of predicting additional true relations between entities given a partial knowledge base.

Knowledge Base Completion, Tensor Networks
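The Neural Tensor Network scores a candidate triple (e1, R, e2) by combining a relation-specific bilinear tensor term with a standard linear layer, applying a nonlinearity, and projecting to a scalar. A NumPy sketch of that scoring function (shapes and names are illustrative assumptions, not the paper's code):

```python
import numpy as np

def ntn_score(e1, e2, W, V, b, u):
    """NTN scoring sketch: u^T tanh(e1^T W[:,:,i] e2 + V [e1; e2] + b),
    where W is a (d, d, k) relation tensor, V a (k, 2d) matrix,
    b a (k,) bias, and u a (k,) output projection."""
    d, _, k = W.shape
    bilinear = np.array([e1 @ W[:, :, i] @ e2 for i in range(k)])
    linear = V @ np.concatenate([e1, e2])
    return float(u @ np.tanh(bilinear + linear + b))
```

A higher score indicates the model considers the relation more likely to hold between the two entities, which is how additional true relations are predicted from a partial knowledge base.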
