Search Results for author: Tianyu Gao

Found 17 papers, 14 papers with code

Few-shot Relation Extraction via Bayesian Meta-learning on Task Graphs

no code implementations ICML 2020 Meng Qu, Tianyu Gao, Louis-Pascal Xhonneux, Jian Tang

This paper studies few-shot relation extraction, which aims at predicting the relation for a pair of entities in a sentence by training with a few labeled examples in each relation.

Meta-Learning, Relation Extraction

Recovering Private Text in Federated Learning of Language Models

1 code implementation 17 May 2022 Samyak Gupta, Yangsibo Huang, Zexuan Zhong, Tianyu Gao, Kai Li, Danqi Chen

In this paper, we present a novel attack method, FILM, for federated learning of language models; for the first time, we show the feasibility of recovering text from large batch sizes of up to 128 sentences.

Federated Learning

Should You Mask 15% in Masked Language Modeling?

1 code implementation 16 Feb 2022 Alexander Wettig, Tianyu Gao, Zexuan Zhong, Danqi Chen

Masked language models conventionally use a masking rate of 15% due to the belief that more masking would provide insufficient context to learn good representations, and less masking would make training too expensive.

Language Modelling, Masked Language Modeling
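The 15% convention the paper interrogates can be illustrated with a minimal BERT-style corruption routine (a sketch, not the authors' code; the `MASK_ID` and `VOCAB_SIZE` values here are hypothetical placeholders):

```python
import random

MASK_ID = 103          # hypothetical [MASK] token id
VOCAB_SIZE = 30522     # hypothetical vocabulary size

def mask_tokens(token_ids, mask_rate=0.15, seed=0):
    """Corrupt a token sequence BERT-style: of the selected positions,
    80% become [MASK], 10% a random token, 10% are left unchanged."""
    rng = random.Random(seed)
    ids = list(token_ids)
    n_select = max(1, round(mask_rate * len(ids)))
    positions = rng.sample(range(len(ids)), n_select)
    labels = {}
    for pos in positions:
        labels[pos] = ids[pos]          # original token is the MLM target
        roll = rng.random()
        if roll < 0.8:
            ids[pos] = MASK_ID          # 80%: replace with [MASK]
        elif roll < 0.9:
            ids[pos] = rng.randrange(VOCAB_SIZE)  # 10%: random token
        # else 10%: keep the original token
    return ids, labels
```

Raising or lowering `mask_rate` is exactly the knob the paper studies; it finds the 15% default is not sacrosanct.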

Ditch the Gold Standard: Re-evaluating Conversational Question Answering

2 code implementations ACL 2022 Huihan Li, Tianyu Gao, Manan Goenka, Danqi Chen

In this work, we conduct the first large-scale human evaluation of state-of-the-art conversational QA systems, where human evaluators converse with models and judge the correctness of their answers.

Question Rewriting

Manual Evaluation Matters: Reviewing Test Protocols of Distantly Supervised Relation Extraction

1 code implementation Findings (ACL) 2021 Tianyu Gao, Xu Han, Keyue Qiu, Yuzhuo Bai, Zhiyu Xie, Yankai Lin, Zhiyuan Liu, Peng Li, Maosong Sun, Jie Zhou

Distantly supervised (DS) relation extraction (RE) has attracted much attention in the past few years as it can utilize large-scale auto-labeled data.

Relation Extraction

SimCSE: Simple Contrastive Learning of Sentence Embeddings

13 code implementations EMNLP 2021 Tianyu Gao, Xingcheng Yao, Danqi Chen

This paper presents SimCSE, a simple contrastive learning framework that greatly advances state-of-the-art sentence embeddings.

Contrastive Learning, Data Augmentation, +4
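SimCSE's training objective is an in-batch contrastive (InfoNCE) loss over paired sentence embeddings; in the unsupervised variant the positive pair is the same sentence encoded twice with different dropout masks. A NumPy sketch of that loss, with toy embeddings standing in for encoder outputs:

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.05):
    """In-batch contrastive loss: for each row i, z2[i] is the positive
    and every z2[j] with j != i serves as a negative."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / temperature  # (batch, batch) scaled cosine similarities
    # cross-entropy with the diagonal (the true pair) as the target class
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))
```

When each pair is identical the loss approaches zero; unrelated embeddings yield roughly log(batch_size).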

Making Pre-trained Language Models Better Few-shot Learners

6 code implementations ACL 2021 Tianyu Gao, Adam Fisch, Danqi Chen

We present LM-BFF (better few-shot fine-tuning of language models), a suite of simple and complementary techniques for fine-tuning language models on a small number of annotated examples.

Few-Shot Learning
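LM-BFF's prompt-based fine-tuning reformulates classification as a cloze task: the input is wrapped in a template containing a [MASK] slot, and "label words" map the language model's prediction at that slot back to classes. A minimal sketch of the template step (the template and label words below are illustrative, not the paper's automatically searched ones):

```python
# Hypothetical sentiment template in the style of LM-BFF's manual prompts.
TEMPLATE = "{sentence} It was [MASK]."
LABEL_WORDS = {"positive": "great", "negative": "terrible"}

def build_prompt(sentence: str) -> str:
    """Wrap an input in the cloze template the masked LM will fill in."""
    return TEMPLATE.format(sentence=sentence)

def verbalize(label: str) -> str:
    """Map a class label to the token the LM should predict at [MASK]."""
    return LABEL_WORDS[label]
```

The paper's contribution is largely in searching for good templates and label words automatically, plus selecting demonstrations to prepend to the prompt.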

TADO: Time-varying Attention with Dual-Optimizer Model

1 code implementation 8 Dec 2020 Yuexin Wu, Tianyu Gao, Sihao Wang, Zhongmin Xiong

As the first attempt in this field to address this problem, we propose a flexible dual-optimizer model to gain robustness from both regression loss and classification loss.

Recommendation Systems

Learning from Context or Names? An Empirical Study on Neural Relation Extraction

1 code implementation EMNLP 2020 Hao Peng, Tianyu Gao, Xu Han, Yankai Lin, Peng Li, Zhiyuan Liu, Maosong Sun, Jie Zhou

We find that (i) while context is the main source to support the predictions, RE models also heavily rely on the information from entity mentions, most of which is type information, and (ii) existing datasets may leak shallow heuristics via entity mentions and thus contribute to the high performance on RE benchmarks.

Relation Extraction

Few-shot Relation Extraction via Bayesian Meta-learning on Relation Graphs

1 code implementation 5 Jul 2020 Meng Qu, Tianyu Gao, Louis-Pascal A. C. Xhonneux, Jian Tang

To more effectively generalize to new relations, in this paper we study the relationships between different relations and propose to leverage a global relation graph.

Meta-Learning, Relation Extraction

Continual Relation Learning via Episodic Memory Activation and Reconsolidation

no code implementations ACL 2020 Xu Han, Yi Dai, Tianyu Gao, Yankai Lin, Zhiyuan Liu, Peng Li, Maosong Sun, Jie Zhou

Continual relation learning aims to continually train a model on new data to learn incessantly emerging novel relations while avoiding catastrophically forgetting old relations.

Continual Learning

FewRel 2.0: Towards More Challenging Few-Shot Relation Classification

1 code implementation IJCNLP 2019 Tianyu Gao, Xu Han, Hao Zhu, Zhiyuan Liu, Peng Li, Maosong Sun, Jie Zhou

We present FewRel 2.0, a more challenging task to investigate two aspects of few-shot relation classification models: (1) Can they adapt to a new domain with only a handful of instances?

Classification, Domain Adaptation, +2

OpenNRE: An Open and Extensible Toolkit for Neural Relation Extraction

1 code implementation IJCNLP 2019 Xu Han, Tianyu Gao, Yuan Yao, Deming Ye, Zhiyuan Liu, Maosong Sun

OpenNRE is an open-source and extensible toolkit that provides a unified framework to implement neural models for relation extraction (RE).

Information Retrieval, Question Answering, +1

Neural Snowball for Few-Shot Relation Learning

1 code implementation29 Aug 2019 Tianyu Gao, Xu Han, Ruobing Xie, Zhiyuan Liu, Fen Lin, Leyu Lin, Maosong Sun

To address new relations with few-shot instances, we propose a novel bootstrapping approach, Neural Snowball, to learn new relations by transferring semantic knowledge about existing relations.

Knowledge Graphs, Relation Extraction
