Search Results for author: Jinan Xu

Found 62 papers, 31 papers with code

Syntactically Diverse Adversarial Network for Knowledge-Grounded Conversation Generation

no code implementations Findings (EMNLP) 2021 Fuwei Cui, Hui Di, Hongjie Ren, Kazushige Ouchi, Ze Liu, Jinan Xu

Generative conversation systems tend to produce meaningless and generic responses, which significantly degrade the user experience.


Learning Structural Information for Syntax-Controlled Paraphrase Generation

no code implementations Findings (NAACL) 2022 Erguang Yang, Chenglin Bai, Deyi Xiong, Yujie Zhang, Yao Meng, Jinan Xu, Yufeng Chen

To model the alignment relation between words and nodes, we propose an attention regularization objective, which makes the decoder accurately select corresponding syntax nodes to guide the generation of words. Experiments show that SI-SCP achieves state-of-the-art performances in terms of semantic and syntactic quality on two popular benchmark datasets. Additionally, we propose a Syntactic Template Retriever (STR) to retrieve compatible syntactic structures.

Paraphrase Generation Relation
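The attention-regularization idea above can be sketched as a per-word cross-entropy between the decoder's attention over syntax nodes and a gold word-to-node alignment. This is a toy illustration of the general technique, not SI-SCP's actual objective; all names are ours:

```python
import math

def attention_regularization(attn, gold_alignment):
    """Attention-regularization sketch: penalize the decoder when its
    attention over syntax nodes deviates from the gold word-to-node
    alignment, via a cross-entropy term per target word.
    attn[i][j] is the attention weight of word i on syntax node j;
    gold_alignment[i] is the index of word i's gold node."""
    loss = 0.0
    for i, j_gold in enumerate(gold_alignment):
        loss += -math.log(attn[i][j_gold])  # reward mass on the gold node
    return loss / len(gold_alignment)
```

In training, a term like this would be added to the generation loss so the decoder learns to attend to the correct syntax node before emitting each word.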

基于多任务标签一致性机制的中文命名实体识别(Chinese Named Entity Recognition based on Multi-task Label Consistency Mechanism)

no code implementations CCL 2021 Shuning Lv, Jian Liu, Jinan Xu, Yufeng Chen, Yujie Zhang

Entity boundary prediction is crucial for Chinese named entity recognition. Existing multi-task learning methods for improving boundary recognition only combine NER with word segmentation; lacking multi-task label training data, they cannot learn the label-consistency relations between tasks. This paper proposes a new Chinese NER method based on a multi-task label consistency mechanism: word segmentation and part-of-speech information are fused into the NER model so that the three tasks are trained jointly, and a multi-task learning scheme based on the label consistency mechanism is established to capture label-consistency relations and learn multi-task representations. Experiments in both full-sample and few-shot settings demonstrate the effectiveness of the method.

Chinese Named Entity Recognition named-entity-recognition +1

Iterative Constrained Back-Translation for Unsupervised Domain Adaptation of Machine Translation

1 code implementation COLING 2022 Hongxiao Zhang, Hui Huang, Jiale Gao, Yufeng Chen, Jinan Xu, Jian Liu

In this paper, we propose an Iterative Constrained Back-Translation (ICBT) method to incorporate in-domain lexical knowledge on the basis of BT for unsupervised domain adaptation of NMT.

Machine Translation NMT +4

Saliency as Evidence: Event Detection with Trigger Saliency Attribution

1 code implementation ACL 2022 Jian Liu, Yufeng Chen, Jinan Xu

Event detection (ED) is a critical subtask of event extraction that seeks to identify event triggers of certain types in texts. Despite significant advances in ED, existing methods typically follow a “one model fits all types” approach, which sees no differences between event types and often results in a quite skewed performance. Finding the causes of skewed performance is crucial for the robustness of an ED model, but to date there has been little exploration of this problem. This research examines the issue in depth and presents a new concept termed trigger salience attribution, which can explicitly quantify the underlying patterns of events.

Event Detection Event Extraction

Machine Reading Comprehension as Data Augmentation: A Case Study on Implicit Event Argument Extraction

no code implementations EMNLP 2021 Jian Liu, Yufeng Chen, Jinan Xu

Implicit event argument extraction (EAE) is a crucial document-level information extraction task that aims to identify event arguments beyond the sentence level.

Data Augmentation Event Argument Extraction +3

融合外部知识的开放域复述模板获取方法(An Open Domain Paraphrasing Template Acquisition Method Based on External Knowledge)

no code implementations CCL 2021 Bo Jin, Mingtong Liu, Yujie Zhang, Jinan Xu, Yufeng Chen

Mining the rich paraphrase templates in language resources is an important task in paraphrase research. Starting from manually given seed entity pairs, existing methods use entity relations and bootstrapping iterations to acquire paraphrase templates from the open domain, avoiding dependence on parallel or comparable corpora; however, they require manually specified entity pairs, the usable entity relations are restricted, and semantic drift during iteration degrades acquisition quality. To address these problems, we note that knowledge bases contain entity pairs describing specific semantic relations (i.e., relation triples), and propose an open-domain paraphrase-template acquisition method that incorporates external knowledge. First, relation triples are aligned with open-domain text to obtain relation-bearing sentences, and the semantically rich parts of the text are generalized into variable slots to obtain relation templates. Next, we design a template representation method that uses a pre-trained language model to fuse variable-slot semantics into the template representation. Finally, based on the obtained template representations, we design automatic clustering and filtering methods to acquire high-precision paraphrase templates. Under an evaluation combining automatic and human assessment, experimental results show that the proposed method achieves automatic generalization and acquisition of paraphrase templates on open-domain data and obtains high-quality, semantically consistent paraphrase templates.

A Joint Model for Graph-based Chinese Dependency Parsing

no code implementations CCL 2020 Xingchen Li, Mingtong Liu, Yujie Zhang, Jinan Xu, Yufeng Chen

The experimental results on the Penn Chinese treebank (CTB5) show that our proposed joint model improved dependency parsing by 0.38% over the model of Yan et al. (2019).

Chinese Dependency Parsing Chinese Word Segmentation +5

Comments as Natural Logic Pivots: Improve Code Generation via Comment Perspective

1 code implementation 11 Apr 2024 Yijie Chen, Yijin Liu, Fandong Meng, Yufeng Chen, Jinan Xu, Jie Zhou

In this paper, we suggest that code comments are the natural logic pivot between natural language and code language and propose using comments to boost the code generation ability of code LLMs.

Code Generation

Towards Comprehensive Multimodal Perception: Introducing the Touch-Language-Vision Dataset

no code implementations 14 Mar 2024 Ning Cheng, You Li, Jing Gao, Bin Fang, Jinan Xu, Wenjuan Han

Tactility provides crucial support and enhancement for the perception and interaction capabilities of both humans and robots.


TransportationGames: Benchmarking Transportation Knowledge of (Multimodal) Large Language Models

no code implementations 9 Jan 2024 Xue Zhang, Xiangyu Shi, Xinyue Lou, Rui Qi, Yufeng Chen, Jinan Xu, Wenjuan Han

Large language models (LLMs) and multimodal large language models (MLLMs) have shown excellent general capabilities, even exhibiting adaptability in many professional domains such as law, economics, transportation, and medicine.


Towards Faster k-Nearest-Neighbor Machine Translation

no code implementations 12 Dec 2023 Xiangyu Shi, Yunlong Liang, Jinan Xu, Yufeng Chen

Recent works have proven the effectiveness of k-nearest-neighbor machine translation (a.k.a. kNN-MT) approaches in producing remarkable improvements in cross-domain translation.

Machine Translation Retrieval +1
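The core of kNN-MT is to retrieve nearest neighbors from a (hidden state → target token) datastore and interpolate the resulting distribution with the model's own prediction. Below is a toy sketch with scalar "hidden states" and made-up hyperparameters, not any paper's implementation:

```python
import math

def knn_mt_interpolate(model_probs, datastore, query, k=2, temperature=1.0, lam=0.5):
    """Toy kNN-MT step: retrieve the k nearest datastore entries,
    turn their distances into a kNN distribution over target tokens,
    and interpolate it with the base model's distribution.
    datastore: list of (hidden_state, target_token) pairs (scalars here)."""
    # Retrieve the k nearest entries by distance to the query state.
    neighbors = sorted(datastore, key=lambda kv: abs(kv[0] - query))[:k]
    # Softmax over negative distances gives the kNN distribution.
    weights = [math.exp(-abs(h - query) / temperature) for h, _ in neighbors]
    total = sum(weights)
    knn_probs = {}
    for (h, tok), w in zip(neighbors, weights):
        knn_probs[tok] = knn_probs.get(tok, 0.0) + w / total
    # Interpolate: p = lam * p_knn + (1 - lam) * p_model.
    vocab = set(model_probs) | set(knn_probs)
    return {t: lam * knn_probs.get(t, 0.0) + (1 - lam) * model_probs.get(t, 0.0)
            for t in vocab}
```

Real systems use high-dimensional decoder states and an approximate-nearest-neighbor index; the retrieval step is exactly what "Towards Faster kNN-MT" targets for speedup.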

A Quality-based Syntactic Template Retriever for Syntactically-controlled Paraphrase Generation

1 code implementation 20 Oct 2023 Xue Zhang, Songming Zhang, Yunlong Liang, Yufeng Chen, Jian Liu, Wenjuan Han, Jinan Xu

Furthermore, for situations requiring multiple paraphrases for each source sentence, we design a Diverse Templates Search (DTS) algorithm, which can enhance the diversity between paraphrases without sacrificing quality.

Data Augmentation Paraphrase Generation +2

Improving Translation Faithfulness of Large Language Models via Augmenting Instructions

1 code implementation 24 Aug 2023 Yijie Chen, Yijin Liu, Fandong Meng, Yufeng Chen, Jinan Xu, Jie Zhou

The experimental results demonstrate significant improvements in translation performance with SWIE based on BLOOMZ-3b, particularly in zero-shot and long text translations due to reduced instruction forgetting risk.

Instruction Following Machine Translation +2

CollabKG: A Learnable Human-Machine-Cooperative Information Extraction Toolkit for (Event) Knowledge Graph Construction

1 code implementation 3 Jul 2023 Xiang Wei, Yufeng Chen, Ning Cheng, Xingyu Cui, Jinan Xu, Wenjuan Han

In order to construct or extend entity-centric and event-centric knowledge graphs (KG and EKG), the information extraction (IE) annotation toolkit is essential.

graph construction Knowledge Graphs +3

Towards Understanding and Improving Knowledge Distillation for Neural Machine Translation

1 code implementation 14 May 2023 Songming Zhang, Yunlong Liang, Shuaibo Wang, Wenjuan Han, Jian Liu, Jinan Xu, Yufeng Chen

In this work, we first unravel this mystery from an empirical perspective and show that the knowledge comes from the top-1 predictions of teachers, which also helps us build a potential connection between word- and sequence-level KD.

Knowledge Distillation Machine Translation +2
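Word-level KD, and the paper's observation that the useful knowledge lives in the teacher's top-1 prediction, can be illustrated with a toy loss. The distributions and the `top1_only` switch below are our illustration, not the authors' code:

```python
import math

def word_kd_loss(student_probs, teacher_probs, top1_only=False):
    """Word-level KD sketch: cross-entropy of the student against the
    teacher distribution for one target position. With top1_only=True,
    the teacher distribution is collapsed to a one-hot on its argmax,
    mimicking the finding that top-1 predictions carry the knowledge.
    Real KD operates on logits over a full vocabulary."""
    if top1_only:
        top_tok = max(teacher_probs, key=teacher_probs.get)
        teacher_probs = {top_tok: 1.0}  # one-hot on the teacher's top-1
    return -sum(p * math.log(student_probs[t]) for t, p in teacher_probs.items())
```

Collapsing to top-1 also makes word-level KD resemble sequence-level KD, which trains on the teacher's single best output, which is the connection the abstract alludes to.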

RC3: Regularized Contrastive Cross-lingual Cross-modal Pre-training

no code implementations 13 May 2023 Chulun Zhou, Yunlong Liang, Fandong Meng, Jinan Xu, Jinsong Su, Jie Zhou

In this paper, we propose Regularized Contrastive Cross-lingual Cross-modal (RC^3) pre-training, which further exploits more abundant weakly-aligned multilingual image-text pairs.

Contrastive Learning Machine Translation

Unified Model Learning for Various Neural Machine Translation

no code implementations 4 May 2023 Yunlong Liang, Fandong Meng, Jinan Xu, Jiaan Wang, Yufeng Chen, Jie Zhou

Specifically, we propose a "versatile" model, i.e., Unified Model Learning for NMT (UMLNMT), which works with data from different tasks and can translate well in multiple settings simultaneously; in theory, the number of settings can grow as large as needed.

Document Translation Machine Translation +3

Is ChatGPT a Good NLG Evaluator? A Preliminary Study

1 code implementation 7 Mar 2023 Jiaan Wang, Yunlong Liang, Fandong Meng, Zengkui Sun, Haoxiang Shi, Zhixu Li, Jinan Xu, Jianfeng Qu, Jie Zhou

In detail, we regard ChatGPT as a human evaluator and give task-specific (e.g., summarization) and aspect-specific (e.g., relevance) instructions to prompt ChatGPT to evaluate the generated results of NLG models.

nlg evaluation Story Generation

A Multi-task Multi-stage Transitional Training Framework for Neural Chat Translation

no code implementations 27 Jan 2023 Chulun Zhou, Yunlong Liang, Fandong Meng, Jie Zhou, Jinan Xu, Hongji Wang, Min Zhang, Jinsong Su

To address these issues, in this paper, we propose a multi-task multi-stage transitional (MMT) training framework, where an NCT model is trained using the bilingual chat translation dataset and additional monolingual dialogues.

NMT Sentence +1

Summary-Oriented Vision Modeling for Multimodal Abstractive Summarization

1 code implementation 15 Dec 2022 Yunlong Liang, Fandong Meng, Jinan Xu, Jiaan Wang, Yufeng Chen, Jie Zhou

However, less attention has been paid to the visual features from the perspective of the summary, which may limit the model performance, especially in the low- and zero-resource scenarios.

Abstractive Text Summarization

Cross-Align: Modeling Deep Cross-lingual Interactions for Word Alignment

1 code implementation 9 Oct 2022 Siyu Lai, Zhen Yang, Fandong Meng, Yufeng Chen, Jinan Xu, Jie Zhou

Word alignment which aims to extract lexicon translation equivalents between source and target sentences, serves as a fundamental tool for natural language processing.

Language Modelling Sentence +2

Generating Authentic Adversarial Examples beyond Meaning-preserving with Doubly Round-trip Translation

1 code implementation NAACL 2022 Siyu Lai, Zhen Yang, Fandong Meng, Xue Zhang, Yufeng Chen, Jinan Xu, Jie Zhou

Generating adversarial examples for Neural Machine Translation (NMT) with single Round-Trip Translation (RTT) has achieved promising results by releasing the meaning-preserving restriction.

Machine Translation NMT +1

A Variational Hierarchical Model for Neural Cross-Lingual Summarization

1 code implementation ACL 2022 Yunlong Liang, Fandong Meng, Chulun Zhou, Jinan Xu, Yufeng Chen, Jinsong Su, Jie Zhou

The goal of cross-lingual summarization (CLS) is to convert a document in one language (e.g., English) into a summary in another (e.g., Chinese).

Machine Translation Translation

Conditional Bilingual Mutual Information Based Adaptive Training for Neural Machine Translation

1 code implementation ACL 2022 Songming Zhang, Yijin Liu, Fandong Meng, Yufeng Chen, Jinan Xu, Jian Liu, Jie Zhou

Token-level adaptive training approaches can alleviate the token imbalance problem and thus improve neural machine translation, through re-weighting the losses of different target tokens based on specific statistical metrics (e.g., token frequency or mutual information).

Language Modelling Machine Translation +2
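The token-level re-weighting described above can be sketched as a weighted negative log-likelihood, where the per-token weights would come from a statistic such as frequency or a (conditional bilingual) mutual information estimate. The function below is illustrative only; it is not the paper's CBMI formulation:

```python
import math

def adaptive_nll(probs, targets, weights):
    """Token-level adaptive training sketch: scale each target token's
    negative log-likelihood by a per-token weight before averaging.
    probs:   list of per-position distributions {token: prob}
    targets: list of gold tokens, one per position
    weights: per-token training weights (e.g., frequency- or MI-based)"""
    losses = [-w * math.log(p[t]) for p, t, w in zip(probs, targets, weights)]
    return sum(losses) / len(losses)
```

With all weights equal to 1 this reduces to ordinary cross-entropy; raising the weight of rare or informative tokens shifts gradient mass toward them.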

MSCTD: A Multimodal Sentiment Chat Translation Dataset

1 code implementation ACL 2022 Yunlong Liang, Fandong Meng, Jinan Xu, Yufeng Chen, Jie Zhou

In this work, we introduce a new task named Multimodal Chat Translation (MCT), aiming to generate more accurate translations with the help of the associated dialogue history and visual context.

Multimodal Machine Translation Sentiment Analysis +1

Scheduled Sampling Based on Decoding Steps for Neural Machine Translation

1 code implementation EMNLP 2021 Yijin Liu, Fandong Meng, Yufeng Chen, Jinan Xu, Jie Zhou

Its core motivation is to simulate the inference scenario during training by replacing ground-truth tokens with predicted tokens, thus bridging the gap between training and inference.

Machine Translation Text Summarization +1
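Scheduled sampling replaces gold tokens with model predictions during training; keying the replacement probability to the decoding step, as the title suggests, might look like the sketch below. The linear schedule is our own choice for illustration, not the paper's exact schedule:

```python
import random

def sample_inputs(ground_truth, predictions, step_decay=0.1, rng=None):
    """Scheduled-sampling sketch keyed to decoding steps: the chance of
    feeding the model its own prediction grows with the step index t,
    since later steps suffer more exposure bias at inference time.
    step_decay sets an illustrative linear schedule p(t) = min(1, decay*t)."""
    rng = rng or random.Random(0)
    mixed = []
    for t, (gold, pred) in enumerate(zip(ground_truth, predictions)):
        p_pred = min(1.0, step_decay * t)  # later step -> more model tokens
        mixed.append(pred if rng.random() < p_pred else gold)
    return mixed
```

The mixed sequence then serves as the decoder input for the next training pass, so the model gradually learns to condition on its own (possibly erroneous) outputs.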

WeChat Neural Machine Translation Systems for WMT21

no code implementations WMT (EMNLP) 2021 Xianfeng Zeng, Yijin Liu, Ernan Li, Qiu Ran, Fandong Meng, Peng Li, Jinan Xu, Jie Zhou

This paper introduces WeChat AI's participation in WMT 2021 shared news translation task on English->Chinese, English->Japanese, Japanese->English and English->German.

Knowledge Distillation Machine Translation +3

Modeling Bilingual Conversational Characteristics for Neural Chat Translation

1 code implementation ACL 2021 Yunlong Liang, Fandong Meng, Yufeng Chen, Jinan Xu, Jie Zhou

Despite the impressive performance of sentence-level and context-aware Neural Machine Translation (NMT), there still remain challenges to translate bilingual conversational text due to its inherent characteristics such as role preference, dialogue coherence, and translation consistency.

Machine Translation NMT +2

Confidence-Aware Scheduled Sampling for Neural Machine Translation

1 code implementation Findings (ACL) 2021 Yijin Liu, Fandong Meng, Yufeng Chen, Jinan Xu, Jie Zhou

In this way, the model is exposed to predicted tokens at high-confidence positions and still to ground-truth tokens at low-confidence positions.

Machine Translation Translation
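The confidence-aware variant feeds the model its own predictions only at positions where it is confident, and gold tokens elsewhere. A minimal sketch with an assumed fixed threshold (the paper's criterion may differ):

```python
def confidence_aware_inputs(ground_truth, predictions, confidences, threshold=0.7):
    """Confidence-aware scheduled-sampling sketch: at each position,
    feed the model's own prediction only if its confidence (e.g., the
    probability assigned to the predicted token) clears the threshold;
    otherwise keep the ground-truth token."""
    return [pred if conf >= threshold else gold
            for gold, pred, conf in zip(ground_truth, predictions, confidences)]
```

Compared with a purely step- or epoch-based schedule, this targets exposure to model outputs at exactly the positions where those outputs are likely to be trustworthy.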

Target-Oriented Fine-tuning for Zero-Resource Named Entity Recognition

1 code implementation Findings (ACL) 2021 Ying Zhang, Fandong Meng, Yufeng Chen, Jinan Xu, Jie Zhou

In this paper, we tackle the problem by transferring knowledge from three aspects, i.e., domain, language, and task, and strengthening connections among them.

named-entity-recognition Named Entity Recognition +2

Bilingual Mutual Information Based Adaptive Training for Neural Machine Translation

1 code implementation ACL 2021 Yangyifan Xu, Yijin Liu, Fandong Meng, Jiajun Zhang, Jinan Xu, Jie Zhou

Recently, token-level adaptive training has achieved promising improvement in machine translation, where the cross-entropy loss function is adjusted by assigning different training weights to different tokens, in order to alleviate the token imbalance problem.

Machine Translation Translation

Emotional Conversation Generation with Heterogeneous Graph Neural Network

1 code implementation 9 Dec 2020 Yunlong Liang, Fandong Meng, Ying Zhang, Jinan Xu, Yufeng Chen, Jie Zhou

Firstly, we design a Heterogeneous Graph-Based Encoder to represent the conversation content (i.e., the dialogue history, its emotion flow, facial expressions, audio, and speakers' personalities) with a heterogeneous graph neural network, and then predict suitable emotions for feedback.

A Learning-Exploring Method to Generate Diverse Paraphrases with Multi-Objective Deep Reinforcement Learning

no code implementations COLING 2020 Mingtong Liu, Erguang Yang, Deyi Xiong, Yujie Zhang, Yao Meng, Changjian Hu, Jinan Xu, Yufeng Chen

We propose a learning-exploring method to generate sentences as learning objectives from the learned data distribution, and employ reinforcement learning to combine these new learning objectives for model training.

Paraphrase Generation Reinforcement Learning (RL)

Multi-view Classification Model for Knowledge Graph Completion

no code implementations Asian Chapter of the Association for Computational Linguistics 2020 Wenbin Jiang, Mengfei Guo, Yufeng Chen, Ying Li, Jinan Xu, Yajuan Lyu, Yong Zhu

This paper describes a novel multi-view classification model for knowledge graph completion, where multiple classification views are performed based on both content and context information for candidate triple evaluation.

Classification Knowledge Graph Completion

Modeling Inter-Aspect Dependencies with a Non-temporal Mechanism for Aspect-Based Sentiment Analysis

no code implementations 12 Aug 2020 Yunlong Liang, Fandong Meng, Jinchao Zhang, Yufeng Chen, Jinan Xu, Jie Zhou

For the multiple-aspect scenario of aspect-based sentiment analysis (ABSA), existing approaches typically ignore inter-aspect relations or rely on temporal dependencies to process aspect-aware representations of all aspects in a sentence.

Aspect-Based Sentiment Analysis Aspect-Based Sentiment Analysis (ABSA) +1

Faster Depth-Adaptive Transformers

no code implementations 27 Apr 2020 Yijin Liu, Fandong Meng, Jie Zhou, Yufeng Chen, Jinan Xu

Depth-adaptive neural networks can dynamically adjust depths according to the hardness of input words, and thus improve efficiency.

Sentence Embeddings text-classification +1
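Depth-adaptive computation lets easy tokens exit the layer stack early while hard tokens run through more layers. The halting loop below is a toy rendering of that idea; the hardness scores and halting rule are ours, not the paper's mechanism:

```python
def adaptive_depths(hardness, max_depth=6):
    """Depth-adaptive sketch: each token passes through layers until an
    accumulated halting score reaches 1.0 (or the depth budget runs out).
    hardness: per-token difficulty scores in [0, 1]; easy tokens (low
    hardness) accumulate halting mass quickly and exit early.
    Returns the number of layers each token would use."""
    depths = []
    for h in hardness:
        depth, halt = 0, 0.0
        while depth < max_depth and halt < 1.0:
            depth += 1
            halt += 1.0 - h  # easy token -> large increment -> early exit
        depths.append(depth)
    return depths
```

In a real model the "layers" would be Transformer blocks and the halting score a learned function of the hidden state, so compute scales with input difficulty.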

An Iterative Multi-Knowledge Transfer Network for Aspect-Based Sentiment Analysis

2 code implementations Findings (EMNLP) 2021 Yunlong Liang, Fandong Meng, Jinchao Zhang, Yufeng Chen, Jinan Xu, Jie Zhou

Aspect-based sentiment analysis (ABSA) mainly involves three subtasks: aspect term extraction, opinion term extraction, and aspect-level sentiment classification, which are typically handled in a separate or joint manner.

Aspect-Based Sentiment Analysis Aspect-Based Sentiment Analysis (ABSA) +3

A Dependency Syntactic Knowledge Augmented Interactive Architecture for End-to-End Aspect-based Sentiment Analysis

3 code implementations 4 Apr 2020 Yunlong Liang, Fandong Meng, Jinchao Zhang, Jinan Xu, Yufeng Chen, Jie Zhou

The aspect-based sentiment analysis (ABSA) task remains a long-standing challenge, which aims to extract the aspect term and then identify its sentiment orientation. In previous approaches, the explicit syntactic structure of a sentence, which reflects the syntax properties of natural language and hence is intuitively crucial for aspect term extraction and sentiment recognition, is typically neglected or insufficiently modeled.

Aspect-Based Sentiment Analysis Aspect-Based Sentiment Analysis (ABSA) +3

Depth-Adaptive Graph Recurrent Network for Text Classification

1 code implementation 29 Feb 2020 Yijin Liu, Fandong Meng, Yufeng Chen, Jinan Xu, Jie Zhou

The Sentence-State LSTM (S-LSTM) is a powerful and highly efficient graph recurrent network, which views words as nodes and performs layer-wise recurrent steps between them simultaneously.

General Classification Sentence +2

Original Semantics-Oriented Attention and Deep Fusion Network for Sentence Matching

no code implementations IJCNLP 2019 Mingtong Liu, Yu-Jie Zhang, Jinan Xu, Yufeng Chen

Unlike existing models, each attention layer of OSOA-DFN is oriented to the original semantic representation of another sentence, which captures the relevant information from a fixed matching target.

Natural Language Inference Paraphrase Identification +1

CM-Net: A Novel Collaborative Memory Network for Spoken Language Understanding

2 code implementations IJCNLP 2019 Yijin Liu, Fandong Meng, Jinchao Zhang, Jie Zhou, Yufeng Chen, Jinan Xu

Spoken Language Understanding (SLU) mainly involves two tasks, intent detection and slot filling, which are generally modeled jointly in existing works.

Intent Detection slot-filling +2

A Novel Aspect-Guided Deep Transition Model for Aspect Based Sentiment Analysis

1 code implementation IJCNLP 2019 Yunlong Liang, Fandong Meng, Jinchao Zhang, Jinan Xu, Yufeng Chen, Jie Zhou

Aspect based sentiment analysis (ABSA) aims to identify the sentiment polarity towards the given aspect in a sentence, while previous models typically exploit an aspect-independent (weakly associative) encoder for sentence representation generation.

Aspect-Based Sentiment Analysis Aspect-Based Sentiment Analysis (ABSA) +1

GCDT: A Global Context Enhanced Deep Transition Architecture for Sequence Labeling

1 code implementation ACL 2019 Yijin Liu, Fandong Meng, Jinchao Zhang, Jinan Xu, Yufeng Chen, Jie Zhou

Current state-of-the-art systems for sequence labeling are typically based on the family of Recurrent Neural Networks (RNNs).

Ranked #17 on Named Entity Recognition (NER) on CoNLL 2003 (English) (using extra training data)

Chunking NER +2

System Description of bjtu\_nlp Neural Machine Translation System

no code implementations WS 2016 Shaotong Li, Jinan Xu, Yufeng Chen, Yu-Jie Zhang

This paper presents our machine translation system developed for the WAT2016 evaluation tasks of ja-en, ja-zh, en-ja, zh-ja, JPCja-en, JPCja-zh, JPCen-ja, JPCzh-ja.

Machine Translation Translation +1
