Search Results for author: Yansong Feng

Found 57 papers, 25 papers with code

Understanding Procedural Text using Interactive Entity Networks

no code implementations EMNLP 2020 Jizhi Tang, Yansong Feng, Dongyan Zhao

Recent efforts have made great progress to track multiple entities in a procedural text, but usually treat each entity separately and ignore the fact that there are often multiple entities interacting with each other during one process, some of which are even explicitly mentioned.

Reading Comprehension

Improve Discourse Dependency Parsing with Contextualized Representations

no code implementations4 May 2022 Yifei Zhou, Yansong Feng

Recent works show that discourse analysis benefits from modeling intra- and inter-sentential levels separately, where proper representations for text units of different granularities are desired to capture both the meaning of text units and their relations to the context.

Dependency Parsing

Entailment Graph Learning with Textual Entailment and Soft Transitivity

1 code implementation ACL 2022 Zhibin Chen, Yansong Feng, Dongyan Zhao

Typed entailment graphs try to learn the entailment relations between predicates from text and model them as edges between predicate nodes.

Graph Learning Natural Language Inference

Things not Written in Text: Exploring Spatial Commonsense from Visual Signals

1 code implementation ACL 2022 Xiao Liu, Da Yin, Yansong Feng, Dongyan Zhao

We probe PLMs and models with visual signals, including vision-language pretrained models and image synthesis models, on this benchmark, and find that image synthesis models are more capable of learning accurate and consistent spatial knowledge than other models.

Image Generation Natural Language Understanding +1

Extract, Integrate, Compete: Towards Verification Style Reading Comprehension

1 code implementation Findings (EMNLP) 2021 Chen Zhang, Yuxuan Lai, Yansong Feng, Dongyan Zhao

In this paper, we present a new verification style reading comprehension dataset named VGaokao from Chinese Language tests of Gaokao.

Reading Comprehension

Three Sentences Are All You Need: Local Path Enhanced Document Relation Extraction

1 code implementation ACL 2021 Quzhe Huang, Shengqi Zhu, Yansong Feng, Yuan Ye, Yuxuan Lai, Dongyan Zhao

Document-level Relation Extraction (RE) is a more challenging task than sentence RE as it often requires reasoning over multiple sentences.

Relation Extraction

Exploring Distantly-Labeled Rationales in Neural Network Models

no code implementations ACL 2021 Quzhe Huang, Shengqi Zhu, Yansong Feng, Dongyan Zhao

Recent studies strive to incorporate various human rationales into neural networks to improve model performance, but few pay attention to the quality of the rationales.

Why Machine Reading Comprehension Models Learn Shortcuts?

1 code implementation Findings (ACL) 2021 Yuxuan Lai, Chen Zhang, Yansong Feng, Quzhe Huang, Dongyan Zhao

A thorough empirical analysis shows that MRC models tend to learn shortcut questions earlier than challenging questions, and the high proportions of shortcut questions in training sets hinder models from exploring the sophisticated reasoning skills in the later stage of training.

Machine Reading Comprehension

Learning to Organize a Bag of Words into Sentences with Neural Networks: An Empirical Study

no code implementations NAACL 2021 Chongyang Tao, Shen Gao, Juntao Li, Yansong Feng, Dongyan Zhao, Rui Yan

Sequential information, a.k.a. word order, is assumed to be essential for processing a sequence with recurrent neural network or convolutional neural network based encoders.

Lattice-BERT: Leveraging Multi-Granularity Representations in Chinese Pre-trained Language Models

2 code implementations NAACL 2021 Yuxuan Lai, Yijia Liu, Yansong Feng, Songfang Huang, Dongyan Zhao

Further analysis shows that Lattice-BERT can harness the lattice structures, and the improvement comes from the exploration of redundant information and multi-granularity representations.

Natural Language Understanding Pretrained Language Models

Exploring Question-Specific Rewards for Generating Deep Questions

1 code implementation COLING 2020 Yuxi Xie, Liangming Pan, Dongzhe Wang, Min-Yen Kan, Yansong Feng

Recent question generation (QG) approaches often utilize the sequence-to-sequence framework (Seq2Seq) to optimize the log-likelihood of ground-truth questions using teacher forcing.

Question Generation

Towards Context-Aware Code Comment Generation

no code implementations Findings of the Association for Computational Linguistics 2020 Xiaohan Yu, Quzhe Huang, Zheng Wang, Yansong Feng, Dongyan Zhao

Code comments are vital for software maintenance and comprehension, but many software projects suffer from the lack of meaningful and up-to-date comments in practice.

Code Comment Generation Graph Attention

Domain Adaptation for Semantic Parsing

no code implementations23 Jun 2020 Zechang Li, Yuxuan Lai, Yansong Feng, Dongyan Zhao

In this paper, we propose a novel semantic parser for domain adaptation, where we have far less annotated data in the target domain than in the source domain.

Domain Adaptation Semantic Parsing

Neighborhood Matching Network for Entity Alignment

1 code implementation ACL 2020 Yuting Wu, Xiao Liu, Yansong Feng, Zheng Wang, Dongyan Zhao

This paper presents Neighborhood Matching Network (NMN), a novel entity alignment framework for tackling the structural heterogeneity challenge.

Entity Alignment Graph Sampling +1

Semantic Graphs for Generating Deep Questions

1 code implementation ACL 2020 Liangming Pan, Yuxi Xie, Yansong Feng, Tat-Seng Chua, Min-Yen Kan

This paper proposes the problem of Deep Question Generation (DQG), which aims to generate complex questions that require reasoning over multiple pieces of information in the input passage.

Question Generation

Coordinated Reasoning for Cross-Lingual Knowledge Graph Alignment

no code implementations23 Jan 2020 Kun Xu, Linfeng Song, Yansong Feng, Yan Song, Dong Yu

Existing entity alignment methods mainly vary on the choices of encoding the knowledge graph, but they typically use the same decoding method, which independently chooses the local optimal match for each source entity.

Entity Alignment

Paraphrase Generation with Latent Bag of Words

2 code implementations NeurIPS 2019 Yao Fu, Yansong Feng, John P. Cunningham

Inspired by variational autoencoders with discrete latent structures, in this work, we propose a latent bag of words (BOW) model for paraphrase generation.

Paraphrase Generation Word Embeddings

Integrating Relation Constraints with Neural Relation Extractors

1 code implementation26 Nov 2019 Yuan Ye, Yansong Feng, Bingfeng Luo, Yuxuan Lai, Dongyan Zhao

However, such models often make predictions for each entity pair individually, and thus often fail to resolve inconsistencies among different predictions, which can be characterized by discrete relation constraints.

Relation Extraction
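The idea of discrete relation constraints can be illustrated with a minimal sketch. The constraint sets, relation names, and the `violated_constraints` helper below are all hypothetical, invented for illustration; the paper's actual constraints and integration method differ.

```python
# Hypothetical constraints: "capital_of implies located_in", and
# "parent_of" on (a, b) should be mirrored by "child_of" on (b, a).
IMPLIES = {("capital_of", "located_in")}
INVERSE = {("parent_of", "child_of")}

def violated_constraints(predictions):
    """predictions: dict mapping (head, tail) -> set of predicted relations.
    Returns human-readable descriptions of constraint violations."""
    violations = []
    for (h, t), rels in predictions.items():
        # implication constraints within one entity pair
        for r1, r2 in IMPLIES:
            if r1 in rels and r2 not in rels:
                violations.append(f"{h},{t}: {r1} without {r2}")
        # inverse constraints across the reversed entity pair
        for r1, r2 in INVERSE:
            if r1 in rels and r2 not in predictions.get((t, h), set()):
                violations.append(f"{h},{t}: {r1} without inverse {r2}")
    return violations

preds = {("Paris", "France"): {"capital_of"},
         ("Alice", "Bob"): {"parent_of"},
         ("Bob", "Alice"): {"child_of"}}
print(violated_constraints(preds))  # one violation: capital_of without located_in
```

A model making per-pair predictions can satisfy each pair locally while violating such global constraints, which is the inconsistency the paper targets.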

Easy First Relation Extraction with Information Redundancy

no code implementations IJCNLP 2019 Shuai Ma, Gang Wang, Yansong Feng, Jinpeng Huai

Many existing relation extraction (RE) models make decisions globally using integer linear programming (ILP).

Relation Extraction

Learning to Update Knowledge Graphs by Reading News

no code implementations IJCNLP 2019 Jizhi Tang, Yansong Feng, Dongyan Zhao

News streams contain rich up-to-date information which can be used to update knowledge graphs (KGs).

Knowledge Graphs

Jointly Learning Entity and Relation Representations for Entity Alignment

1 code implementation IJCNLP 2019 Yuting Wu, Xiao Liu, Yansong Feng, Zheng Wang, Dongyan Zhao

Entity alignment is a viable means for integrating heterogeneous knowledge among different knowledge graphs (KGs).

Ranked #10 on Entity Alignment on DBP15k zh-en (using extra training data)

Entity Alignment Entity Embeddings +1

A Sketch-Based System for Semantic Parsing

1 code implementation2 Sep 2019 Zechang Li, Yuxuan Lai, Yuxi Xie, Yansong Feng, Dongyan Zhao

The sketch is a high-level structure of the logical form exclusive of low-level details such as entities and predicates.

Semantic Parsing

Relation-Aware Entity Alignment for Heterogeneous Knowledge Graphs

1 code implementation22 Aug 2019 Yuting Wu, Xiao Liu, Yansong Feng, Zheng Wang, Rui Yan, Dongyan Zhao

Entity alignment is the task of linking entities with the same real-world identity from different knowledge graphs (KGs), which has been recently dominated by embedding-based methods.

Ranked #12 on Entity Alignment on DBP15k zh-en (using extra training data)

Entity Alignment Entity Embeddings +1

Enhancing Key-Value Memory Neural Networks for Knowledge Based Question Answering

no code implementations NAACL 2019 Kun Xu, Yuxuan Lai, Yansong Feng, Zhiguo Wang

However, extending KV-MemNNs to Knowledge Based Question Answering (KB-QA) is not trivial: the model should properly decompose a complex question into a sequence of queries against the memory, and update the query representations to support multi-hop reasoning over the memory.

Question Answering Reading Comprehension +1
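The multi-hop loop the abstract refers to can be sketched in a few lines. The toy memory contents, the identity update matrix `R`, and the additive query update are all illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Hypothetical tiny memory: keys encode question/relation patterns,
# values encode the corresponding answer representations.
keys = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
values = np.array([[0.2, 0.8], [0.9, 0.1], [0.5, 0.5]])
R = np.eye(2)  # per-hop query-update matrix (identity for simplicity)

def kv_memnn_hops(query, keys, values, num_hops=2):
    """Address the memory keys with the query, read a weighted value
    summary, update the query, and repeat for multi-hop reasoning."""
    q = query
    for _ in range(num_hops):
        attention = softmax(keys @ q)  # address over keys
        output = attention @ values    # read from values
        q = R @ (q + output)           # update query for the next hop
    return q

final = kv_memnn_hops(np.array([1.0, 0.0]), keys, values)
print(final)
```

Each hop re-addresses the memory with an updated query, which is what lets the model answer questions requiring a chain of lookups rather than a single one.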

Cross-lingual Knowledge Graph Alignment via Graph Matching Neural Network

2 code implementations ACL 2019 Kun Xu, Li-Wei Wang, Mo Yu, Yansong Feng, Yan Song, Zhiguo Wang, Dong Yu

Previous cross-lingual knowledge graph (KG) alignment studies rely on entity embeddings derived only from monolingual KG structural information, which may fail at matching entities that have different facts in two KGs.

Entity Embeddings Graph Attention +1

Lattice CNNs for Matching Based Chinese Question Answering

1 code implementation25 Feb 2019 Yuxuan Lai, Yansong Feng, Xiaohan Yu, Zheng Wang, Kun Xu, Dongyan Zhao

Short text matching often faces the challenges of great word mismatch and expression diversity between the two texts, which are further aggravated in languages like Chinese, where there is no natural space to segment words explicitly.

Question Answering Text Matching

Encoding Implicit Relation Requirements for Relation Extraction: A Joint Inference Approach

no code implementations9 Nov 2018 Li-Wei Chen, Yansong Feng, Songfang Huang, Bingfeng Luo, Dongyan Zhao

Relation extraction is the task of identifying predefined relationships between entities, and plays an essential role in information extraction, knowledge base construction, question answering and so on.

Question Answering Relation Extraction

To Compress, or Not to Compress: Characterizing Deep Learning Model Compression for Embedded Inference

no code implementations21 Oct 2018 Qing Qin, Jie Ren, Jialong Yu, Ling Gao, Hai Wang, Jie Zheng, Yansong Feng, Jianbin Fang, Zheng Wang

We experimentally show how two mainstream compression techniques, data quantization and pruning, perform on these network architectures, and the implications of compression for model storage size, inference time, energy consumption and performance metrics.

Image Classification Model Compression +1
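The two compression techniques the abstract names can be sketched on a toy weight matrix. This is a minimal illustration with assumed hyperparameters (50% sparsity, 8-bit uniform quantization), not the characterization methodology of the paper.

```python
import numpy as np

# Hypothetical weight matrix standing in for one layer of a trained network.
rng = np.random.default_rng(0)
weights = rng.normal(size=(4, 4)).astype(np.float32)

def magnitude_prune(w, sparsity):
    """Zero out the smallest-magnitude weights so that roughly
    `sparsity` of the entries become zero."""
    threshold = np.quantile(np.abs(w), sparsity)
    return np.where(np.abs(w) < threshold, 0.0, w)

def quantize_uniform(w, num_bits=8):
    """Uniform affine quantization of float weights to `num_bits`
    integer levels, returned here in de-quantized float form."""
    scale = (w.max() - w.min()) / (2 ** num_bits - 1)
    zero_point = w.min()
    q = np.round((w - zero_point) / scale)
    return (q * scale + zero_point).astype(np.float32)

pruned = magnitude_prune(weights, sparsity=0.5)
quantized = quantize_uniform(weights, num_bits=8)

print("zeros after pruning:", int((pruned == 0).sum()))
print("max quantization error:", float(np.abs(quantized - weights).max()))
```

Pruning trades accuracy for sparsity (and hence storage, via sparse formats), while quantization trades a bounded per-weight error for a fixed reduction in bits per weight; the paper measures how these trade-offs play out on real embedded hardware.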

Overview of CAIL2018: Legal Judgment Prediction Competition

2 code implementations13 Oct 2018 Haoxi Zhong, Chaojun Xiao, Zhipeng Guo, Cunchao Tu, Zhiyuan Liu, Maosong Sun, Yansong Feng, Xianpei Han, Zhen Hu, Heng Wang, Jianfeng Xu

In this paper, we give an overview of the Legal Judgment Prediction (LJP) competition at Chinese AI and Law challenge (CAIL2018).

SQL-to-Text Generation with Graph-to-Sequence Model

1 code implementation EMNLP 2018 Kun Xu, Lingfei Wu, Zhiguo Wang, Yansong Feng, Vadim Sheinin

Previous work approaches the SQL-to-text generation task using vanilla Seq2Seq models, which may not fully capture the inherent graph-structured information in a SQL query.

Graph-to-Sequence SQL-to-Text +1

Improving Matching Models with Hierarchical Contextualized Representations for Multi-turn Response Selection

no code implementations22 Aug 2018 Chongyang Tao, Wei Wu, Can Xu, Yansong Feng, Dongyan Zhao, Rui Yan

In this paper, we study context-response matching with pre-trained contextualized representations for multi-turn response selection in retrieval-based chatbots.

Dialogue Generation

CAIL2018: A Large-Scale Legal Dataset for Judgment Prediction

3 code implementations4 Jul 2018 Chaojun Xiao, Haoxi Zhong, Zhipeng Guo, Cunchao Tu, Zhiyuan Liu, Maosong Sun, Yansong Feng, Xianpei Han, Zhen Hu, Heng Wang, Jianfeng Xu

In this paper, we introduce the Chinese AI and Law challenge dataset (CAIL2018), the first large-scale Chinese legal dataset for judgment prediction.

Text Classification

Natural Answer Generation with Heterogeneous Memory

no code implementations NAACL 2018 Yao Fu, Yansong Feng

Memory augmented encoder-decoder framework has achieved promising progress for natural language generation tasks.

Answer Generation Question Answering +1

Marrying up Regular Expressions with Neural Networks: A Case Study for Spoken Language Understanding

no code implementations ACL 2018 Bingfeng Luo, Yansong Feng, Zheng Wang, Songfang Huang, Rui Yan, Dongyan Zhao

The success of many natural language processing (NLP) tasks is bound by the number and quality of annotated data, but there is often a shortage of such training data.

Intent Detection Slot Filling +1

Graph2Seq: Graph to Sequence Learning with Attention-based Neural Networks

4 code implementations ICLR 2019 Kun Xu, Lingfei Wu, Zhiguo Wang, Yansong Feng, Michael Witbrock, Vadim Sheinin

Our method first generates the node and graph embeddings using an improved graph-based neural network with a novel aggregation strategy to incorporate edge direction information in the node embeddings.

Graph-to-Sequence SQL-to-Text +1

Scale Up Event Extraction Learning via Automatic Training Data Generation

no code implementations11 Dec 2017 Ying Zeng, Yansong Feng, Rong Ma, Zheng Wang, Rui Yan, Chongde Shi, Dongyan Zhao

We show that this large volume of training data not only leads to a better event extractor, but also allows us to detect multiple typed events.

Event Extraction

Learning to Predict Charges for Criminal Cases with Legal Basis

no code implementations EMNLP 2017 Bingfeng Luo, Yansong Feng, Jianbo Xu, Xiang Zhang, Dongyan Zhao

The charge prediction task is to determine appropriate charges for a given case, which is helpful for legal assistant systems where the user input is fact description.

A Constrained Sequence-to-Sequence Neural Model for Sentence Simplification

no code implementations7 Apr 2017 Yaoyuan Zhang, Zhenxu Ye, Yansong Feng, Dongyan Zhao, Rui Yan

In word-level studies, words are simplified, but the output may contain grammatical errors due to different usages of words before and after simplification.

Hybrid Question Answering over Knowledge Base and Free Text

no code implementations COLING 2016 Kun Xu, Yansong Feng, Songfang Huang, Dongyan Zhao

While these systems are able to provide more precise answers than information retrieval (IR) based QA systems, the natural incompleteness of KB inevitably limits the question scope that the system can answer.

Information Retrieval Question Answering +1
