Search Results for author: Yanan Cao

Found 20 papers, 8 papers with code

CLIO: Role-interactive Multi-event Head Attention Network for Document-level Event Extraction

no code implementations · COLING 2022 · Yubing Ren, Yanan Cao, Fang Fang, Ping Guo, Zheng Lin, Wei Ma, Yi Liu

Transforming the large amounts of unstructured text on the Internet into structured event knowledge is a critical, yet unsolved goal of NLP, especially when addressing document-level text.

Document-level Event Extraction · Event Extraction

Slot Dependency Modeling for Zero-Shot Cross-Domain Dialogue State Tracking

no code implementations · COLING 2022 · Qingyue Wang, Yanan Cao, Piji Li, Yanhe Fu, Zheng Lin, Li Guo

Zero-shot learning for Dialogue State Tracking (DST) focuses on generalizing to an unseen domain without the expense of collecting in-domain data.

Dialogue State Tracking · Zero-Shot Learning

SOM-NCSCM : An Efficient Neural Chinese Sentence Compression Model Enhanced with Self-Organizing Map

no code implementations · EMNLP 2021 · Kangli Zi, Shi Wang, Yu Liu, Jicun Li, Yanan Cao, Cungen Cao

Sentence Compression (SC), which aims to shorten sentences while retaining important words that express the essential meanings, has been studied for many years in many languages, especially in English.

Question Answering · Sentence Compression

TEBNER: Domain Specific Named Entity Recognition with Type Expanded Boundary-aware Network

no code implementations · EMNLP 2021 · Zheng Fang, Yanan Cao, Tai Li, Ruipeng Jia, Fang Fang, Yanmin Shang, Yuhai Lu

To alleviate label scarcity in the Named Entity Recognition (NER) task, distantly supervised NER methods are widely applied to automatically label data and identify entities.

Named Entity Recognition +1

Neural Extractive Summarization with Hierarchical Attentive Heterogeneous Graph Network

no code implementations · EMNLP 2020 · Ruipeng Jia, Yanan Cao, Hengzhu Tang, Fang Fang, Cong Cao, Shi Wang

Sentence-level extractive text summarization is essentially a node classification task in network mining, relying on informative components and concise representations.

Extractive Summarization · Extractive Text Summarization +1

A Win-win Deal: Towards Sparse and Robust Pre-trained Language Models

1 code implementation · 11 Oct 2022 · Yuanxin Liu, Fandong Meng, Zheng Lin, Jiangnan Li, Peng Fu, Yanan Cao, Weiping Wang, Jie Zhou

In response to the efficiency problem, recent studies show that dense PLMs can be replaced with sparse subnetworks without hurting the performance.

Natural Language Understanding

Language Prior Is Not the Only Shortcut: A Benchmark for Shortcut Learning in VQA

1 code implementation · 10 Oct 2022 · Qingyi Si, Fandong Meng, Mingyu Zheng, Zheng Lin, Yuanxin Liu, Peng Fu, Yanan Cao, Weiping Wang, Jie Zhou

To overcome this limitation, we propose a new dataset that considers varying types of shortcuts by constructing different distribution shifts in multiple OOD test sets.

Question Answering · Visual Question Answering (VQA)

Towards Robust Visual Question Answering: Making the Most of Biased Samples via Contrastive Learning

1 code implementation · 10 Oct 2022 · Qingyi Si, Yuanxin Liu, Fandong Meng, Zheng Lin, Peng Fu, Yanan Cao, Weiping Wang, Jie Zhou

However, these models reveal a trade-off: the improvements on OOD data severely sacrifice performance on the in-distribution (ID) data, which is dominated by the biased samples.

Contrastive Learning · Question Answering +1

Neural Label Search for Zero-Shot Multi-Lingual Extractive Summarization

no code implementations · ACL 2022 · Ruipeng Jia, Xingxing Zhang, Yanan Cao, Shi Wang, Zheng Lin, Furu Wei

In zero-shot multilingual extractive text summarization, a model is typically trained on an English summarization dataset and then applied to summarization datasets in other languages.

Extractive Summarization · Extractive Text Summarization

Learning to Win Lottery Tickets in BERT Transfer via Task-agnostic Mask Training

1 code implementation · NAACL 2022 · Yuanxin Liu, Fandong Meng, Zheng Lin, Peng Fu, Yanan Cao, Weiping Wang, Jie Zhou

Firstly, we discover that the success of magnitude pruning can be attributed to the preserved pre-training performance, which correlates with the downstream transferability.

Transfer Learning

Is There More Pattern in Knowledge Graph? Exploring Proximity Pattern for Knowledge Graph Embedding

no code implementations · 2 Oct 2021 · Ren Li, Yanan Cao, Qiannan Zhu, Xiaoxue Li, Fang Fang

Modeling relation patterns is the core focus of previous Knowledge Graph Embedding works; such patterns represent how one entity relates to another semantically through some explicit relation.

Knowledge Graph Completion · Knowledge Graph Embedding

How Does Knowledge Graph Embedding Extrapolate to Unseen Data: A Semantic Evidence View

1 code implementation · 24 Sep 2021 · Ren Li, Yanan Cao, Qiannan Zhu, Guanqun Bi, Fang Fang, Yi Liu, Qian Li

However, most existing KGE works focus on the design of delicate triple modeling functions, which mainly tell us how to measure the plausibility of observed triples but offer limited explanation of why the methods can extrapolate to unseen data and what the important factors are that help KGE extrapolate.

Knowledge Graph Completion · Knowledge Graph Embedding +1

Deep Differential Amplifier for Extractive Summarization

no code implementations · ACL 2021 · Ruipeng Jia, Yanan Cao, Fang Fang, Yuchen Zhou, Zheng Fang, Yanbing Liu, Shi Wang

In this paper, we conceptualize single-document extractive summarization as a rebalancing problem and present a deep differential amplifier framework.

Extractive Summarization · Imbalanced Classification

Task-adaptive Neural Process for User Cold-Start Recommendation

1 code implementation · 26 Feb 2021 · Xixun Lin, Jia Wu, Chuan Zhou, Shirui Pan, Yanan Cao, Bin Wang

In this paper, we develop a novel meta-learning recommender called task-adaptive neural process (TaNP).

Meta-Learning · Recommendation Systems

Graph Geometry Interaction Learning

1 code implementation · NeurIPS 2020 · Shichao Zhu, Shirui Pan, Chuan Zhou, Jia Wu, Yanan Cao, Bin Wang

To utilize the strength of both Euclidean and hyperbolic geometries, we develop a novel Geometry Interaction Learning (GIL) method for graphs, a well-suited and efficient alternative for learning their abundant geometric properties.

Link Prediction · Node Classification

HIN: Hierarchical Inference Network for Document-Level Relation Extraction

no code implementations · 28 Mar 2020 · Hengzhu Tang, Yanan Cao, Zhen-Yu Zhang, Jiangxia Cao, Fang Fang, Shi Wang, Pengfei Yin

In this paper, we propose a Hierarchical Inference Network (HIN) to make full use of the abundant information at the entity, sentence, and document levels.

Document-level Relation Extraction · Translation

RLINK: Deep Reinforcement Learning for User Identity Linkage

no code implementations · 31 Oct 2019 · Xiaoxue Li, Yanan Cao, Yanmin Shang, Yangxi Li, Yanbing Liu, Jianlong Tan

User identity linkage is the task of recognizing the identities of the same user across different social networks (SNs).

Reinforcement Learning (RL)
