Search Results for author: Yansong Feng

Found 77 papers, 42 papers with code

Dual-Channel Evidence Fusion for Fact Verification over Texts and Tables

no code implementations NAACL 2022 Nan Hu, Zirui Wu, Yuxuan Lai, Xiao Liu, Yansong Feng

Different from previous fact extraction and verification tasks that only consider evidence of a single format, FEVEROUS brings further challenges by extending the evidence format to both plain text and tables.

Fact Verification

Understanding Procedural Text using Interactive Entity Networks

no code implementations EMNLP 2020 Jizhi Tang, Yansong Feng, Dongyan Zhao

Recent efforts have made great progress to track multiple entities in a procedural text, but usually treat each entity separately and ignore the fact that there are often multiple entities interacting with each other during one process, some of which are even explicitly mentioned.

Reading Comprehension

Motion Generation from Fine-grained Textual Descriptions

1 code implementation 20 Mar 2024 Kunhang Li, Yansong Feng

The task of text2motion is to generate human motion sequences from given textual descriptions, where the model explores diverse mappings from natural language instructions to human body movements.

Harder Tasks Need More Experts: Dynamic Routing in MoE Models

1 code implementation 12 Mar 2024 Quzhe Huang, Zhenwei An, Nan Zhuang, Mingxu Tao, Chen Zhang, Yang Jin, Kun Xu, Liwei Chen, Songfang Huang, Yansong Feng

In this paper, we introduce a novel dynamic expert selection framework for Mixture of Experts (MoE) models, aiming to enhance computational efficiency and model performance by adjusting the number of activated experts based on input difficulty.

Computational Efficiency
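
The routing idea can be pictured with a short sketch: a minimal, hypothetical illustration of threshold-based dynamic expert selection, where a token activates the smallest set of experts whose cumulative routing probability passes a threshold. The function and parameter names (dynamic_expert_selection, threshold_p) are illustrative and not taken from the released code.

import numpy as np

def dynamic_expert_selection(router_logits, threshold_p=0.5, max_experts=None):
    # Softmax over the router logits to get per-expert routing probabilities.
    probs = np.exp(router_logits - router_logits.max())
    probs /= probs.sum()
    order = np.argsort(-probs)                              # experts sorted by routing weight
    cumulative = np.cumsum(probs[order])
    k = int(np.searchsorted(cumulative, threshold_p)) + 1   # smallest set passing the threshold
    if max_experts is not None:
        k = min(k, max_experts)
    return order[:k], probs[order[:k]]

# An "easy" token concentrates probability on one expert; a "harder" token spreads it out.
easy = np.array([4.0, 0.5, 0.2, 0.1])
hard = np.array([1.0, 0.9, 0.8, 0.7])
print(dynamic_expert_selection(easy))   # typically activates a single expert
print(dynamic_expert_selection(hard))   # activates several experts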

ProTrix: Building Models for Planning and Reasoning over Tables with Sentence Context

1 code implementation 4 Mar 2024 Zirui Wu, Yansong Feng

Our work underscores the importance of planning and reasoning abilities for models tackling tabular tasks with generalizability and interpretability.

Sentence

Teaching Large Language Models an Unseen Language on the Fly

1 code implementation 29 Feb 2024 Chen Zhang, Xiao Liu, Jiuheng Lin, Yansong Feng

Existing large language models struggle to support numerous low-resource languages, particularly the extremely low-resource ones where there is minimal training data available for effective parameter updating.

In-Context Learning Translation

Are LLMs Capable of Data-based Statistical and Causal Reasoning? Benchmarking Advanced Quantitative Reasoning with Data

1 code implementation 27 Feb 2024 Xiao Liu, Zirui Wu, Xueqing Wu, Pan Lu, Kai-Wei Chang, Yansong Feng

To address this gap, we introduce the Quantitative Reasoning with Data (QRData) benchmark, aiming to evaluate Large Language Models' capability in statistical and causal reasoning with real-world data.

Benchmarking

Probing Multimodal Large Language Models for Global and Local Semantic Representations

1 code implementation 27 Feb 2024 Mingxu Tao, Quzhe Huang, Kun Xu, Liwei Chen, Yansong Feng, Dongyan Zhao

The advancement of Multimodal Large Language Models (MLLMs) has greatly accelerated the development of applications in understanding integrated texts and images.

Object Detection +1

Chain-of-Discussion: A Multi-Model Framework for Complex Evidence-Based Question Answering

no code implementations 26 Feb 2024 Mingxu Tao, Dongyan Zhao, Yansong Feng

Open-ended question answering requires models to find appropriate evidence to form well-reasoned, comprehensive and helpful answers.

Evidence Selection Open-Ended Question Answering +1

CASA: Causality-driven Argument Sufficiency Assessment

1 code implementation 10 Jan 2024 Xiao Liu, Yansong Feng, Kai-Wei Chang

Motivated by the probability of sufficiency (PS) definition in the causal literature, we propose CASA, a zero-shot causality-driven argument sufficiency assessment framework.

Logical Fallacy Detection
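
For reference, the probability of sufficiency that motivates CASA is a standard quantity from the causal inference literature (it is not restated in this snippet); in counterfactual notation it is roughly

\[
\mathrm{PS} \;=\; P\bigl(Y_{X=1}=1 \,\bigm|\, X=0,\; Y=0\bigr),
\]

i.e. the probability that intervening to make the premise X hold would make the conclusion Y hold, given that neither currently does.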

MC^2: A Multilingual Corpus of Minority Languages in China

1 code implementation 14 Nov 2023 Chen Zhang, Mingxu Tao, Quzhe Huang, Jiuheng Lin, Zhibin Chen, Yansong Feng

However, existing LLMs exhibit limited abilities in understanding low-resource languages, including the minority languages in China, due to a lack of training data.

From the One, Judge of the Whole: Typed Entailment Graph Construction with Predicate Generation

1 code implementation 7 Jun 2023 Zhibin Chen, Yansong Feng, Dongyan Zhao

Entailment Graphs (EGs) have been constructed based on extracted corpora as a strong and explainable form to indicate context-independent entailment relations in natural languages.

graph construction

How Many Answers Should I Give? An Empirical Study of Multi-Answer Reading Comprehension

1 code implementation 1 Jun 2023 Chen Zhang, Jiuheng Lin, Xiao Liu, Yuxuan Lai, Yansong Feng, Dongyan Zhao

We further analyze how well different paradigms of current multi-answer MRC models deal with different types of multi-answer instances.

Machine Reading Comprehension

More than Classification: A Unified Framework for Event Temporal Relation Extraction

no code implementations 28 May 2023 Quzhe Huang, Yutong Hu, Shengqi Zhu, Yansong Feng, Chang Liu, Dongyan Zhao

After examining the relation definitions in various ETRE tasks, we observe that all relations can be interpreted using the start and end time points of events.

Multi-Label Classification Relation +1
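
A small illustrative sketch (not the paper's model) of how interval relations can be read off from comparisons of event start and end time points:

def interval_relation(start1, end1, start2, end2):
    # Derive a coarse temporal relation from start/end points
    # (illustrative only; the label sets in ETRE tasks are finer-grained).
    if end1 <= start2:
        return "BEFORE"
    if end2 <= start1:
        return "AFTER"
    if start1 == start2 and end1 == end2:
        return "EQUAL"
    return "OVERLAP"

print(interval_relation(0, 2, 3, 5))  # BEFORE
print(interval_relation(1, 4, 2, 6))  # OVERLAP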

Lawyer LLaMA Technical Report

1 code implementation 24 May 2023 Quzhe Huang, Mingxu Tao, Chen Zhang, Zhenwei An, Cong Jiang, Zhibin Chen, Zirui Wu, Yansong Feng

Specifically, we inject domain knowledge during the continual training stage and teach the model to learn professional skills using properly designed supervised fine-tuning tasks.

Hallucination Retrieval

A Frustratingly Easy Improvement for Position Embeddings via Random Padding

no code implementations 8 May 2023 Mingxu Tao, Yansong Feng, Dongyan Zhao

Since the embeddings of rear positions are updated fewer times than the front position embeddings, the rear ones may not be properly trained.

Extractive Question-Answering Position +1
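
The under-training claim is easy to see with a toy simulation (illustrative only, not code from the paper): count how often each position index is covered by a real, non-padding token when variable-length inputs are padded at the rear to a fixed maximum length.

import random

max_len = 512
updates = [0] * max_len

# Simulate 10,000 training examples whose lengths vary widely.
for _ in range(10_000):
    length = random.randint(50, max_len)
    for pos in range(length):        # only positions covered by real tokens get gradient updates
        updates[pos] += 1

print("updates to position 10: ", updates[10])    # ~10,000 (every example)
print("updates to position 500:", updates[500])   # far fewer (only the longest examples)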

Can BERT Refrain from Forgetting on Sequential Tasks? A Probing Study

1 code implementation 2 Mar 2023 Mingxu Tao, Yansong Feng, Dongyan Zhao

Large pre-trained language models help to achieve state-of-the-art results on a variety of natural language processing (NLP) tasks; nevertheless, they still suffer from forgetting when incrementally learning a sequence of tasks.

Extractive Question-Answering Incremental Learning +3

Cross-Lingual Question Answering over Knowledge Base as Reading Comprehension

1 code implementation 26 Feb 2023 Chen Zhang, Yuxuan Lai, Yansong Feng, Xingyu Shen, Haowei Du, Dongyan Zhao

We convert KB subgraphs into passages to narrow the gap between KB schemas and questions, which enables our model to benefit from recent advances in multilingual pre-trained language models (MPLMs) and cross-lingual machine reading comprehension (xMRC).

Cross-Lingual Question Answering Machine Reading Comprehension
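
A minimal sketch of the general idea of serializing a KB subgraph into a passage that an MRC model can read; the naive template and the function name triples_to_passage are illustrative, not the paper's actual conversion.

def triples_to_passage(triples):
    # Turn (subject, relation, object) triples into a short passage.
    sentences = []
    for subj, rel, obj in triples:
        rel_text = rel.replace("_", " ")
        sentences.append(f"{subj} {rel_text} {obj}.")
    return " ".join(sentences)

subgraph = [
    ("Marie Curie", "place_of_birth", "Warsaw"),
    ("Warsaw", "country", "Poland"),
]
print(triples_to_passage(subgraph))
# Marie Curie place of birth Warsaw. Warsaw country Poland.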

Do Charge Prediction Models Learn Legal Theory?

1 code implementation 31 Oct 2022 Zhenwei An, Quzhe Huang, Cong Jiang, Yansong Feng, Dongyan Zhao

The charge prediction task aims to predict the charge for a case given its fact description.

Counterfactual Recipe Generation: Exploring Compositional Generalization in a Realistic Scenario

1 code implementation 20 Oct 2022 Xiao Liu, Yansong Feng, Jizhi Tang, Chengang Hu, Dongyan Zhao

Although pretrained language models can generate fluent recipe texts, they fail to truly learn and use the culinary knowledge in a compositional way.

counterfactual Recipe Generation

Improve Discourse Dependency Parsing with Contextualized Representations

no code implementations Findings (NAACL) 2022 Yifei Zhou, Yansong Feng

Recent works show that discourse analysis benefits from modeling intra- and inter-sentential levels separately, where proper representations for text units of different granularities are desired to capture both the meaning of text units and their relations to the context.

Dependency Parsing

Entailment Graph Learning with Textual Entailment and Soft Transitivity

1 code implementation ACL 2022 Zhibin Chen, Yansong Feng, Dongyan Zhao

Typed entailment graphs try to learn the entailment relations between predicates from text and model them as edges between predicate nodes.

Graph Learning Natural Language Inference

Things not Written in Text: Exploring Spatial Commonsense from Visual Signals

1 code implementation ACL 2022 Xiao Liu, Da Yin, Yansong Feng, Dongyan Zhao

We probe PLMs and models with visual signals, including vision-language pretrained models and image synthesis models, on this benchmark, and find that image synthesis models are more capable of learning accurate and consistent spatial knowledge than other models.

Image Generation Natural Language Understanding +1

Extract, Integrate, Compete: Towards Verification Style Reading Comprehension

1 code implementation Findings (EMNLP) 2021 Chen Zhang, Yuxuan Lai, Yansong Feng, Dongyan Zhao

In this paper, we present a new verification style reading comprehension dataset named VGaokao from Chinese Language tests of Gaokao.

Reading Comprehension

Exploring Distantly-Labeled Rationales in Neural Network Models

no code implementations ACL 2021 Quzhe Huang, Shengqi Zhu, Yansong Feng, Dongyan Zhao

Recent studies strive to incorporate various human rationales into neural networks to improve model performance, but few pay attention to the quality of the rationales.

Why Machine Reading Comprehension Models Learn Shortcuts?

1 code implementation Findings (ACL) 2021 Yuxuan Lai, Chen Zhang, Yansong Feng, Quzhe Huang, Dongyan Zhao

A thorough empirical analysis shows that MRC models tend to learn shortcut questions earlier than challenging questions, and the high proportions of shortcut questions in training sets hinder models from exploring the sophisticated reasoning skills in the later stage of training.

Machine Reading Comprehension

Learning to Organize a Bag of Words into Sentences with Neural Networks: An Empirical Study

no code implementations NAACL 2021 Chongyang Tao, Shen Gao, Juntao Li, Yansong Feng, Dongyan Zhao, Rui Yan

Sequential information, a.k.a. orders, is assumed to be essential for processing a sequence with recurrent neural network or convolutional neural network based encoders.

Sentence

Lattice-BERT: Leveraging Multi-Granularity Representations in Chinese Pre-trained Language Models

2 code implementations NAACL 2021 Yuxuan Lai, Yijia Liu, Yansong Feng, Songfang Huang, Dongyan Zhao

Further analysis shows that Lattice-BERT can harness the lattice structures, and the improvement comes from the exploration of redundant information and multi-granularity representations.

Natural Language Understanding Sentence

Exploring Question-Specific Rewards for Generating Deep Questions

1 code implementation COLING 2020 Yuxi Xie, Liangming Pan, Dongzhe Wang, Min-Yen Kan, Yansong Feng

Recent question generation (QG) approaches often utilize the sequence-to-sequence framework (Seq2Seq) to optimize the log-likelihood of ground-truth questions using teacher forcing.

Question Generation

Towards Context-Aware Code Comment Generation

no code implementations Findings (EMNLP) 2020 Xiaohan Yu, Quzhe Huang, Zheng Wang, Yansong Feng, Dongyan Zhao

Code comments are vital for software maintenance and comprehension, but many software projects suffer from the lack of meaningful and up-to-date comments in practice.

Code Comment Generation Comment Generation +1

Domain Adaptation for Semantic Parsing

no code implementations 23 Jun 2020 Zechang Li, Yuxuan Lai, Yansong Feng, Dongyan Zhao

In this paper, we propose a novel semantic parser for domain adaptation, where we have much fewer annotated data in the target domain compared to the source domain.

Domain Adaptation Semantic Parsing

Neighborhood Matching Network for Entity Alignment

1 code implementation ACL 2020 Yuting Wu, Xiao Liu, Yansong Feng, Zheng Wang, Dongyan Zhao

This paper presents Neighborhood Matching Network (NMN), a novel entity alignment framework for tackling the structural heterogeneity challenge.

Entity Alignment Graph Sampling +1

Semantic Graphs for Generating Deep Questions

1 code implementation ACL 2020 Liangming Pan, Yuxi Xie, Yansong Feng, Tat-Seng Chua, Min-Yen Kan

This paper proposes the problem of Deep Question Generation (DQG), which aims to generate complex questions that require reasoning over multiple pieces of information of the input passage.

Question Generation

Coordinated Reasoning for Cross-Lingual Knowledge Graph Alignment

no code implementations 23 Jan 2020 Kun Xu, Linfeng Song, Yansong Feng, Yan Song, Dong Yu

Existing entity alignment methods mainly vary on the choices of encoding the knowledge graph, but they typically use the same decoding method, which independently chooses the local optimal match for each source entity.

Entity Alignment
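
A toy contrast between independent argmax decoding, which can map two source entities to the same target, and a simple globally coordinated one-to-one assignment; this illustrates the problem described above rather than the paper's actual decoding algorithms.

import numpy as np

scores = np.array([        # similarity between 3 source and 3 target entities
    [0.90, 0.80, 0.10],
    [0.85, 0.70, 0.20],
    [0.10, 0.20, 0.60],
])

# Independent decoding: each source picks its best target, conflicts allowed.
independent = scores.argmax(axis=1)          # [0, 0, 2] -> two sources share target 0

# Coordinated decoding (greedy one-to-one): assign highest-scoring pairs first.
pairs = sorted(((s, i, j) for (i, j), s in np.ndenumerate(scores)), reverse=True)
used_src, used_tgt, coordinated = set(), set(), {}
for s, i, j in pairs:
    if i not in used_src and j not in used_tgt:
        coordinated[i] = j
        used_src.add(i)
        used_tgt.add(j)

print(independent.tolist(), coordinated)     # [0, 0, 2] vs {0: 0, 1: 1, 2: 2}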

Paraphrase Generation with Latent Bag of Words

2 code implementations NeurIPS 2019 Yao Fu, Yansong Feng, John P. Cunningham

Inspired by variational autoencoders with discrete latent structures, in this work, we propose a latent bag of words (BOW) model for paraphrase generation.

Paraphrase Generation Word Embeddings

Integrating Relation Constraints with Neural Relation Extractors

1 code implementation 26 Nov 2019 Yuan Ye, Yansong Feng, Bingfeng Luo, Yuxuan Lai, Dongyan Zhao

However, such models often make predictions for each entity pair individually, and thus often fail to resolve inconsistencies among different predictions, which can be characterized by discrete relation constraints.

Relation Relation Extraction
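
A toy example of what a discrete relation constraint looks like and how independent per-pair predictions can violate it; the relations and the INVERSE table are made up for illustration.

# A discrete constraint: if r holds from A to B, its inverse must hold from B to A.
INVERSE = {"capital_of": "has_capital", "has_capital": "capital_of"}

predictions = {
    ("Paris", "France"): "capital_of",
    ("France", "Paris"): "born_in",     # inconsistent with the prediction above
}

def violations(preds):
    out = []
    for (a, b), rel in preds.items():
        expected = INVERSE.get(rel)
        actual = preds.get((b, a))
        if expected and actual and actual != expected:
            out.append(((a, b), rel, (b, a), actual))
    return out

print(violations(predictions))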

Learning to Update Knowledge Graphs by Reading News

no code implementations IJCNLP 2019 Jizhi Tang, Yansong Feng, Dongyan Zhao

News streams contain rich up-to-date information which can be used to update knowledge graphs (KGs).

Knowledge Graphs

Jointly Learning Entity and Relation Representations for Entity Alignment

1 code implementation IJCNLP 2019 Yuting Wu, Xiao Liu, Yansong Feng, Zheng Wang, Dongyan Zhao

Entity alignment is a viable means for integrating heterogeneous knowledge among different knowledge graphs (KGs).

Ranked #18 on Entity Alignment on DBP15k zh-en (using extra training data)

Entity Alignment Entity Embeddings +2

A Sketch-Based System for Semantic Parsing

1 code implementation 2 Sep 2019 Zechang Li, Yuxuan Lai, Yuxi Xie, Yansong Feng, Dongyan Zhao

The sketch is a high-level structure of the logical form exclusive of low-level details such as entities and predicates.

Semantic Parsing Task 2

Relation-Aware Entity Alignment for Heterogeneous Knowledge Graphs

1 code implementation 22 Aug 2019 Yuting Wu, Xiao Liu, Yansong Feng, Zheng Wang, Rui Yan, Dongyan Zhao

Entity alignment is the task of linking entities with the same real-world identity from different knowledge graphs (KGs), which has been recently dominated by embedding-based methods.

Ranked #20 on Entity Alignment on DBP15k zh-en (using extra training data)

Entity Alignment Entity Embeddings +2

Enhancing Key-Value Memory Neural Networks for Knowledge Based Question Answering

no code implementations NAACL 2019 Kun Xu, Yuxuan Lai, Yansong Feng, Zhiguo Wang

However, extending KV-MemNNs to Knowledge Based Question Answering (KB-QA) is not trivial: the model should properly decompose a complex question into a sequence of queries against the memory, and update the query representations to support multi-hop reasoning over the memory.

Question Answering Reading Comprehension +1
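
As a reminder of the underlying mechanism, here is a numpy sketch of a standard key-value memory read with query updating across hops; dimensions and the update rule are simplified relative to the paper's model.

import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def multi_hop_read(query, keys, values, hops=2):
    # Attend over memory keys, read from values, and update the query each hop
    # (simplified: the query update here is a plain addition).
    q = query
    for _ in range(hops):
        attn = softmax(keys @ q)      # (num_slots,)
        read = attn @ values          # weighted sum of value vectors
        q = q + read                  # updated query supports the next hop
    return q

rng = np.random.default_rng(0)
keys, values = rng.normal(size=(8, 16)), rng.normal(size=(8, 16))
print(multi_hop_read(rng.normal(size=16), keys, values).shape)  # (16,)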

Cross-lingual Knowledge Graph Alignment via Graph Matching Neural Network

1 code implementation ACL 2019 Kun Xu, Li-Wei Wang, Mo Yu, Yansong Feng, Yan Song, Zhiguo Wang, Dong Yu

Previous cross-lingual knowledge graph (KG) alignment studies rely on entity embeddings derived only from monolingual KG structural information, which may fail at matching entities that have different facts in two KGs.

Entity Embeddings Graph Attention +1

Lattice CNNs for Matching Based Chinese Question Answering

1 code implementation 25 Feb 2019 Yuxuan Lai, Yansong Feng, Xiaohan Yu, Zheng Wang, Kun Xu, Dongyan Zhao

Short text matching often faces the challenge of great word mismatch and expression diversity between the two texts, which is further aggravated in languages like Chinese, where there is no natural space to segment words explicitly.

Question Answering Text Matching

Encoding Implicit Relation Requirements for Relation Extraction: A Joint Inference Approach

no code implementations 9 Nov 2018 Li-Wei Chen, Yansong Feng, Songfang Huang, Bingfeng Luo, Dongyan Zhao

Relation extraction is the task of identifying predefined relationship between entities, and plays an essential role in information extraction, knowledge base construction, question answering and so on.

Question Answering Relation +1

To Compress, or Not to Compress: Characterizing Deep Learning Model Compression for Embedded Inference

no code implementations 21 Oct 2018 Qing Qin, Jie Ren, Jialong Yu, Ling Gao, Hai Wang, Jie Zheng, Yansong Feng, Jianbin Fang, Zheng Wang

We experimentally show how two mainstream compression techniques, data quantization and pruning, perform on these network architectures, and what the implications of compression are for model storage size, inference time, energy consumption and performance metrics.

Image Classification Model Compression +1
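
Minimal numpy illustrations of the two techniques named above, symmetric 8-bit linear quantization and magnitude pruning; real toolchains apply these per layer with calibration and fine-tuning, so this is only a sketch.

import numpy as np

def quantize_int8(w):
    # Symmetric 8-bit linear quantization of a weight tensor.
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale          # dequantize with q * scale

def magnitude_prune(w, sparsity=0.5):
    # Zero out the smallest-magnitude weights.
    threshold = np.quantile(np.abs(w), sparsity)
    return np.where(np.abs(w) < threshold, 0.0, w)

w = np.random.default_rng(0).normal(size=(4, 4)).astype(np.float32)
q, s = quantize_int8(w)
print(np.abs(w - q.astype(np.float32) * s).max())   # small quantization error
print((magnitude_prune(w) == 0).mean())             # ~0.5 of weights removed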

Overview of CAIL2018: Legal Judgment Prediction Competition

2 code implementations 13 Oct 2018 Haoxi Zhong, Chaojun Xiao, Zhipeng Guo, Cunchao Tu, Zhiyuan Liu, Maosong Sun, Yansong Feng, Xianpei Han, Zhen Hu, Heng Wang, Jianfeng Xu

In this paper, we give an overview of the Legal Judgment Prediction (LJP) competition at Chinese AI and Law challenge (CAIL2018).

SQL-to-Text Generation with Graph-to-Sequence Model

1 code implementation EMNLP 2018 Kun Xu, Lingfei Wu, Zhiguo Wang, Yansong Feng, Vadim Sheinin

Previous work approaches the SQL-to-text generation task using vanilla Seq2Seq models, which may not fully capture the inherent graph-structured information in a SQL query.

Graph-to-Sequence SQL-to-Text +1

Improving Matching Models with Hierarchical Contextualized Representations for Multi-turn Response Selection

no code implementations 22 Aug 2018 Chongyang Tao, Wei Wu, Can Xu, Yansong Feng, Dongyan Zhao, Rui Yan

In this paper, we study context-response matching with pre-trained contextualized representations for multi-turn response selection in retrieval-based chatbots.

Dialogue Generation Retrieval +1

CAIL2018: A Large-Scale Legal Dataset for Judgment Prediction

3 code implementations 4 Jul 2018 Chaojun Xiao, Haoxi Zhong, Zhipeng Guo, Cunchao Tu, Zhiyuan Liu, Maosong Sun, Yansong Feng, Xianpei Han, Zhen Hu, Heng Wang, Jianfeng Xu

In this paper, we introduce the Chinese AI and Law challenge dataset (CAIL2018), the first large-scale Chinese legal dataset for judgment prediction.

Text Classification

Natural Answer Generation with Heterogeneous Memory

no code implementations NAACL 2018 Yao Fu, Yansong Feng

Memory augmented encoder-decoder framework has achieved promising progress for natural language generation tasks.

Answer Generation Question Answering +2

Marrying up Regular Expressions with Neural Networks: A Case Study for Spoken Language Understanding

no code implementations ACL 2018 Bingfeng Luo, Yansong Feng, Zheng Wang, Songfang Huang, Rui Yan, Dongyan Zhao

The success of many natural language processing (NLP) tasks is bound by the number and quality of annotated data, but there is often a shortage of such training data.

Intent Detection slot-filling +2

Graph2Seq: Graph to Sequence Learning with Attention-based Neural Networks

4 code implementations ICLR 2019 Kun Xu, Lingfei Wu, Zhiguo Wang, Yansong Feng, Michael Witbrock, Vadim Sheinin

Our method first generates the node and graph embeddings using an improved graph-based neural network with a novel aggregation strategy to incorporate edge direction information in the node embeddings.

Graph-to-Sequence SQL-to-Text +1
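
A small sketch of direction-aware neighbor aggregation, pooling neighbors reached via outgoing and incoming edges separately before combining them with the node's own embedding; this mirrors the idea described above but greatly simplifies the actual Graph2Seq encoder.

import numpy as np

def aggregate(node, fwd_neighbors, bwd_neighbors, embeddings):
    # Mean-pool neighbors reached via outgoing vs. incoming edges separately,
    # then concatenate with the node's own embedding.
    def pool(ids):
        return embeddings[ids].mean(axis=0) if ids else np.zeros(embeddings.shape[1])
    return np.concatenate([embeddings[node], pool(fwd_neighbors), pool(bwd_neighbors)])

emb = np.random.default_rng(0).normal(size=(5, 8))   # 5 nodes, embedding dim 8
# node 0 has outgoing edges to 1 and 2, and an incoming edge from 3
print(aggregate(0, [1, 2], [3], emb).shape)          # (24,)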

Scale Up Event Extraction Learning via Automatic Training Data Generation

no code implementations 11 Dec 2017 Ying Zeng, Yansong Feng, Rong Ma, Zheng Wang, Rui Yan, Chongde Shi, Dongyan Zhao

We show that this large volume of training data not only leads to a better event extractor, but also allows us to detect multiple typed events.

Event Extraction

Learning to Predict Charges for Criminal Cases with Legal Basis

no code implementations EMNLP 2017 Bingfeng Luo, Yansong Feng, Jianbo Xu, Xiang Zhang, Dongyan Zhao

The charge prediction task is to determine appropriate charges for a given case, which is helpful for legal assistant systems where the user input is fact description.

A Constrained Sequence-to-Sequence Neural Model for Sentence Simplification

no code implementations 7 Apr 2017 Yaoyuan Zhang, Zhenxu Ye, Yansong Feng, Dongyan Zhao, Rui Yan

In word-level studies, words are simplified, but potential grammatical errors arise due to different usages of words before and after simplification.

Sentence

Hybrid Question Answering over Knowledge Base and Free Text

no code implementations COLING 2016 Kun Xu, Yansong Feng, Songfang Huang, Dongyan Zhao

While these systems are able to provide more precise answers than information retrieval (IR) based QA systems, the natural incompleteness of KB inevitably limits the question scope that the system can answer.

Information Retrieval Question Answering +2
