Search Results for author: Junhui Li

Found 39 papers, 5 papers with code

HwTscSU’s Submissions on WAT 2022 Shared Task

no code implementations WAT 2022 Yilun Liu, Zhen Zhang, Shimin Tao, Junhui Li, Hao Yang

In this paper we describe our submission to the shared tasks of the 9th Workshop on Asian Translation (WAT 2022) on NICT–SAP under the team name "HwTscSU".

Domain Adaptation NMT +1

融合零指代识别的篇章级机器翻译(Context-aware Machine Translation Integrating Zero Pronoun Recognition)

no code implementations CCL 2021 Hao Wang, Junhui Li, ZhengXian Gong

In Chinese and other languages where pronouns are habitually omitted, pronouns that can be inferred from the context are usually dropped. Although neural machine translation models represented by the Transformer have achieved great success, this omission phenomenon still poses a great challenge to them. Building on the Transformer, this paper proposes a translation model that integrates zero-pronoun recognition and introduces document-level context to enrich anaphoric information. Specifically, the model adopts a joint-learning framework: on top of the translation model, a classification task is added to identify which constituent of the sentence the omitted pronoun represents, so that the model can exploit zero-anaphora information to assist translation. Experiments on a Chinese-English dialogue dataset verify the effectiveness of the proposed method: compared with the baseline model, translation performance improves by 1.48 BLEU.
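A minimal sketch of the joint-learning idea described above, assuming a PyTorch-style setup; the loss weight `alpha` and the tensor shapes are illustrative assumptions, not the paper's actual implementation:

```python
import torch
import torch.nn as nn

class JointZPTranslationLoss(nn.Module):
    """Combine a translation loss with an auxiliary zero-pronoun
    classification loss (hypothetical weighting, for illustration only)."""
    def __init__(self, alpha: float = 0.5):
        super().__init__()
        self.alpha = alpha                      # weight of the auxiliary task
        self.ce = nn.CrossEntropyLoss()

    def forward(self, mt_logits, mt_targets, zp_logits, zp_labels):
        # mt_logits: (batch * tgt_len, tgt_vocab), mt_targets: (batch * tgt_len,)
        # zp_logits: (batch, num_zp_classes),      zp_labels:  (batch,)
        translation_loss = self.ce(mt_logits, mt_targets)
        zp_loss = self.ce(zp_logits, zp_labels)  # which role the dropped pronoun fills
        return translation_loss + self.alpha * zp_loss
```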

Machine Translation

Joint Multi-modal Aspect-Sentiment Analysis with Auxiliary Cross-modal Relation Detection

1 code implementation EMNLP 2021 Xincheng Ju, Dong Zhang, Rong Xiao, Junhui Li, Shoushan Li, Min Zhang, Guodong Zhou

Therefore, in this paper, we are the first to jointly perform multi-modal ATE (MATE) and multi-modal ASC (MASC), and we propose a multi-modal joint learning approach with auxiliary cross-modal relation detection for multi-modal aspect-level sentiment analysis (MALSA).

Relation Sentiment Analysis +1

Encouraging Lexical Translation Consistency for Document-Level Neural Machine Translation

no code implementations EMNLP 2021 Xinglin Lyu, Junhui Li, ZhengXian Gong, Min Zhang

In this paper we apply “one translation per discourse” in NMT, and aim to encourage lexical translation consistency for document-level NMT.
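As a rough illustration of the "one translation per discourse" notion (not the paper's actual consistency mechanism), the sketch below measures how consistently each source word is translated within one document; the data format is an assumption:

```python
from collections import Counter, defaultdict

def lexical_consistency(doc_alignments):
    """For each source word, the share of its document-level occurrences that
    use the most frequent translation.  `doc_alignments` is a list of
    (source_word, target_word) pairs from every sentence of one document."""
    choices = defaultdict(Counter)
    for src, tgt in doc_alignments:
        choices[src][tgt] += 1
    scores = {}
    for src, counter in choices.items():
        total = sum(counter.values())
        scores[src] = counter.most_common(1)[0][1] / total
    return scores

# Example: "bank" translated two different ways within one document
print(lexical_consistency([("bank", "银行"), ("bank", "银行"), ("bank", "河岸")]))
```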

Machine Translation NMT +1

基于序列到序列的中文AMR解析(Chinese AMR Parsing based on Sequence-to-Sequence Modeling)

no code implementations CCL 2021 Ziyi Huang, Junhui Li, ZhengXian Gong

Abstract Meaning Representation (AMR) abstracts the semantics of a given text into a single-rooted directed acyclic graph, and AMR parsing derives the corresponding AMR graph from the input text. Compared with English AMR, research on Chinese AMR started later, so relatively little work addresses Chinese AMR parsing. This paper studies Chinese AMR parsing with a sequence-to-sequence approach on the public Chinese AMR corpus CAMR 1.0. Specifically, we first build a sequence-to-sequence AMR parsing system for Chinese based on the Transformer model, and then explore and compare different pre-trained models for Chinese AMR parsing. On this corpus, our best Chinese AMR parsing result reaches a Smatch F1 score of 70.29, and this paper is the first to report experimental results on this dataset.
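Sequence-to-sequence AMR parsing typically works on a linearized form of the graph; the sketch below shows a generic depth-first linearization, with a hypothetical dict-based graph format that is not taken from the paper:

```python
def linearize_amr(node, visited=None):
    """Depth-first linearization of an AMR graph into a bracketed token
    sequence, the usual preprocessing step for sequence-to-sequence AMR
    parsing.  `node` is a hypothetical dict: {"var", "concept", "edges"}."""
    if visited is None:
        visited = set()
    if node["var"] in visited:          # re-entrant node: emit the variable only
        return [node["var"]]
    visited.add(node["var"])
    tokens = ["(", node["var"], "/", node["concept"]]
    for role, child in node.get("edges", []):
        tokens += [role] + linearize_amr(child, visited)
    tokens.append(")")
    return tokens

want = {"var": "w", "concept": "想-01",
        "edges": [(":arg0", {"var": "b", "concept": "男孩"})]}
print(" ".join(linearize_amr(want)))   # ( w / 想-01 :arg0 ( b / 男孩 ) )
```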

AMR Parsing

层次化结构全局上下文增强的篇章级神经机器翻译(Hierarchical Global Context Augmented Document-level Neural Machine Translation)

no code implementations CCL 2020 Linqing Chen, Junhui Li, ZhengXian Gong

How to effectively exploit document-level context has long been a major challenge in document-level neural machine translation. This paper proposes using hierarchical global context derived from the entire document to improve document-level neural machine translation. To this end, the model captures the dependencies between each word in the current sentence and all sentences and words in the document, and combines dependencies at different levels to obtain a global context carrying hierarchical document information. As a result, each word in the current source sentence obtains its own context that integrates both word-level and sentence-level dependencies. To fully exploit parallel sentence pairs during training, we use a two-step training strategy: a model first trained on sentence-level data is further trained on data with document-level information so that it learns to capture the global context. Experiments on several benchmark datasets show that the proposed model achieves meaningful improvements in translation quality over several strong baselines. Further experiments show that context combining hierarchical document information outperforms context that uses only word-level information. In addition, we integrate the global context into the translation model in different ways, observe the effect on performance, and make a preliminary study of how the global context is distributed across the document in document-level translation.
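A toy sketch of the two-level idea (word-level then sentence-level dependencies fused into one global context vector); the dot-product scoring and tensor shapes are illustrative assumptions, not the paper's architecture:

```python
import torch
import torch.nn.functional as F

def hierarchical_global_context(query, doc_words, sent_ids):
    """Attend over every word of the document, pool the word context sentence
    by sentence, then attend over the sentence summaries.
      query:     (d,)           current source word
      doc_words: (n_words, d)   all words of the document
      sent_ids:  (n_words,)     sentence index of each word
    """
    word_attn = F.softmax(doc_words @ query, dim=0)          # word-level dependencies
    n_sents = int(sent_ids.max().item()) + 1
    sent_vecs = torch.stack([
        (word_attn[sent_ids == s].unsqueeze(-1) * doc_words[sent_ids == s]).sum(0)
        for s in range(n_sents)
    ])                                                        # sentence summaries
    sent_attn = F.softmax(sent_vecs @ query, dim=0)           # sentence-level dependencies
    return (sent_attn.unsqueeze(-1) * sent_vecs).sum(0)       # fused global context
```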

Machine Translation

融合目标端句法的AMR-to-Text生成(AMR-to-Text Generation with Target Syntax)

no code implementations CCL 2020 Jie Zhu, Junhui Li

The task of AMR-to-Text generation is, given an AMR graph, to generate text carrying the same semantics. It can be viewed as a machine translation task from a source AMR graph to a target sentence. Existing methods mostly explore how to better model the graph structure. However, they all suffer from an under-specification problem: many syntactic decisions made during generation are not constrained by the semantic graph, so the latent syntactic information within the sentence is ignored. To explicitly address this shortcoming, this paper proposes a simple yet effective method that explicitly incorporates syntactic information into AMR-to-Text generation, and experiments with it on the Transformer and on the previous best-performing model for this task. Experimental results show significant improvements on two standard English datasets, LDC2018E86 and LDC2017T10, reaching new state-of-the-art performance.

AMR-to-Text Generation Text Generation

DeMPT: Decoding-enhanced Multi-phase Prompt Tuning for Making LLMs Be Better Context-aware Translators

no code implementations23 Feb 2024 Xinglin Lyu, Junhui Li, Yanqing Zhao, Daimeng Wei, Shimin Tao, Hao Yang, Min Zhang

In this paper, we propose an alternative adaptation approach, named Decoding-enhanced Multi-phase Prompt Tuning (DeMPT), to make LLMs discriminately model and utilize the inter- and intra-sentence context and more effectively adapt LLMs to context-aware NMT.

Machine Translation NMT +1

Mean-Square Stability and Stabilizability for LTI and Stochastic Systems Connected in Feedback

no code implementations6 Feb 2024 Junhui Li, Jieying Lu, Weizhou Su

By proposing a key parameter called the coefficient of frequency variation to characterize the correlation of the stochastic uncertainties, we present a necessary and sufficient condition for the mean-square stability of this MIMO stochastic feedback system.

Enhancing Document-level Translation of Large Language Model via Translation Mixed-instructions

no code implementations16 Jan 2024 Yachao Li, Junhui Li, Jing Jiang, Min Zhang

Our proposed translation mixed-instructions enable LLMs (Llama-2 7B and 13B) to maintain consistent translation performance from the sentence level to documents containing as many as 2048 tokens.
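A hedged sketch of how instruction data mixing sentence-level and document-level translation samples might be assembled; the templates, field names, and function below are placeholders and do not reproduce the paper's actual instruction format:

```python
def build_mixed_instructions(doc_sentences, doc_translation):
    """Build one document-level and several sentence-level instruction samples
    from the same parallel document, so the model sees both granularities
    during fine-tuning.  `doc_sentences` is a list of (src, tgt) pairs."""
    samples = [{
        "instruction": "Translate the following document into English.",
        "input": " ".join(src for src, _ in doc_sentences),
        "output": doc_translation,
    }]
    for src, tgt in doc_sentences:
        samples.append({
            "instruction": "Translate the following sentence into English.",
            "input": src,
            "output": tgt,
        })
    return samples
```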

Language Modelling Large Language Model +3

DPATD: Dual-Phase Audio Transformer for Denoising

no code implementations30 Oct 2023 Junhui Li, Pu Wang, Jialu Li, Xinzhe Wang, Youshan Zhang

Recent high-performance transformer-based speech enhancement models demonstrate that time-domain methods can achieve performance similar to that of time-frequency-domain methods.

Denoising Speech Enhancement

Sparsity and Coefficient Permutation Based Two-Domain AMP for Image Block Compressed Sensing

no code implementations22 May 2023 Junhui Li, Xingsong Hou, Huake Wang, Shuhao Bi

In this paper, to overcome the issues and develop a high-performance LDAMP method for image block compressed sensing (BCS), we propose a novel sparsity and coefficient permutation-based AMP (SCP-AMP) method consisting of the block-based sampling and the two-domain reconstruction modules.
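A minimal NumPy sketch of the block-based sampling step in block compressed sensing; the block size, sampling ratio, and Gaussian measurement matrix are illustrative choices, not the paper's settings:

```python
import numpy as np

def bcs_sample(image, block=32, ratio=0.25, seed=0):
    """Block compressed sensing measurement: split the image into
    non-overlapping blocks and sample each block with one shared Gaussian
    measurement matrix."""
    rng = np.random.default_rng(seed)
    n = block * block
    m = int(ratio * n)                                   # measurements per block
    phi = rng.standard_normal((m, n)) / np.sqrt(m)       # shared sampling matrix
    h, w = image.shape
    measurements = []
    for i in range(0, h - h % block, block):
        for j in range(0, w - w % block, block):
            x = image[i:i + block, j:j + block].reshape(-1)
            measurements.append(phi @ x)                 # y = Phi x for this block
    return phi, np.stack(measurements)

phi, y = bcs_sample(np.random.rand(64, 64))
print(y.shape)   # (4, 256): 4 blocks, 256 measurements each
```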

Deep Attention Denoising +1

P-Transformer: Towards Better Document-to-Document Neural Machine Translation

no code implementations12 Dec 2022 Yachao Li, Junhui Li, Jing Jiang, Shimin Tao, Hao Yang, Min Zhang

To alleviate this problem, we propose a position-aware Transformer (P-Transformer) to enhance both the absolute and relative position information in both self-attention and cross-attention.
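One generic way to strengthen positional information in attention over long (document-level) inputs is a learned relative-position bias added to the attention scores; the sketch below illustrates that idea only and is not the P-Transformer's exact formulation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RelPositionAttention(nn.Module):
    """Scaled dot-product self-attention with a learned relative-position
    bias.  The clipping distance and scalar bias table are illustrative."""
    def __init__(self, d_model: int, max_dist: int = 128):
        super().__init__()
        self.max_dist = max_dist
        self.bias = nn.Embedding(2 * max_dist + 1, 1)   # one scalar bias per offset

    def forward(self, q, k, v):
        # q, k, v: (batch, len, d_model)
        scores = q @ k.transpose(-2, -1) / (q.size(-1) ** 0.5)
        pos = torch.arange(q.size(1), device=q.device)
        rel = (pos[None, :] - pos[:, None]).clamp(-self.max_dist, self.max_dist)
        scores = scores + self.bias(rel + self.max_dist).squeeze(-1)  # add relative bias
        return F.softmax(scores, dim=-1) @ v
```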

Machine Translation NMT +3

Mean-square stability of linear systems over channels with random transmission delays

no code implementations27 Apr 2022 Jieying Lu, Junhui Li, Weizhou Su

A necessary and sufficient condition of mean-square (input-output) stability is studied for the networked feedback systems in terms of the input-output model and state-space model.

Enhancing Navigational Safety in Crowded Environments using Semantic-Deep-Reinforcement-Learning-based Navigation

1 code implementation23 Sep 2021 Linh Kästner, Junhui Li, Zhengcheng Shen, Jens Lambrecht

In this paper, we propose a semantic Deep-reinforcement-learning-based navigation approach that teaches object-specific safety rules by considering high-level obstacle information.
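Object-specific safety rules are often realized as reward shaping with per-class safety distances; the sketch below is a generic illustration of that idea with made-up classes, margins, and weights, not the reward used in the paper:

```python
# Hypothetical per-class safety margins (metres) for reward shaping.
SAFETY_MARGIN = {"adult": 0.5, "child": 1.0, "robot": 0.3}

def safety_penalty(observed_obstacles, weight=2.0):
    """Penalize the agent for violating object-specific safety distances.
    `observed_obstacles` is a list of (semantic_class, distance_m) pairs
    coming from a semantic perception module (illustrative interface)."""
    penalty = 0.0
    for cls, dist in observed_obstacles:
        margin = SAFETY_MARGIN.get(cls, 0.5)
        if dist < margin:
            penalty += weight * (margin - dist)   # graded penalty, larger for children
    return -penalty

print(safety_penalty([("child", 0.6), ("adult", 0.8)]))  # -0.8: only the child rule fires
```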

Navigate Object +2

Optimal output feedback control of a class of linear systems with quasi-colored control-dependent multiplicative noise

no code implementations3 Sep 2021 Junhui Li, Jieying Lu, Weizhou Su

This paper addresses the mean-square optimal control problem for a class of discrete-time linear systems with a quasi-colored control-dependent multiplicative noise via output feedback.

Mean-Square Input-Output Stability and Stabilizability of a Networked Control System with Random Channel Induced Delays

no code implementations29 Aug 2021 Weizhou Su, Junhui Li, Jieying Lu

Over this unreliable channel, the data transmission times, referred to as channel-induced delays, are random, and the transmitted data may also be dropped with a certain probability.

Improving AMR Parsing with Sequence-to-Sequence Pre-training

1 code implementation EMNLP 2020 Dongqin Xu, Junhui Li, Muhua Zhu, Min Zhang, Guodong Zhou

In the literature, research on abstract meaning representation (AMR) parsing is much restricted by the size of human-curated datasets, which are critical to building an AMR parser with good performance.

Ranked #15 on AMR Parsing on LDC2017T10 (using extra training data)

AMR Parsing Machine Translation +1

A Discrete CVAE for Response Generation on Short-Text Conversation

no code implementations IJCNLP 2019 Jun Gao, Wei Bi, Xiaojiang Liu, Junhui Li, Guodong Zhou, Shuming Shi

In this paper, we introduce a discrete latent variable with an explicit semantic meaning to improve the CVAE on short-text conversation.
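One standard way to keep a discrete latent variable trainable end to end is the Gumbel-softmax relaxation; the sketch below, with a made-up latent size and temperature, is only a generic illustration and not necessarily how this paper parameterizes its discrete latent:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DiscreteLatentEncoder(nn.Module):
    """Encode a context vector into a discrete latent code via Gumbel-softmax,
    a common relaxation for training discrete latents end to end."""
    def __init__(self, d_model: int, num_codes: int = 100, tau: float = 1.0):
        super().__init__()
        self.logits = nn.Linear(d_model, num_codes)
        self.codebook = nn.Embedding(num_codes, d_model)
        self.tau = tau

    def forward(self, context):
        # context: (batch, d_model) -> hard one-hot over latent codes
        probs = F.gumbel_softmax(self.logits(context), tau=self.tau, hard=True)
        return probs @ self.codebook.weight    # latent vector fed to the decoder
```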

Response Generation Short-Text Conversation +1

Modeling Graph Structure in Transformer for Better AMR-to-Text Generation

1 code implementation IJCNLP 2019 Jie Zhu, Junhui Li, Muhua Zhu, Longhua Qian, Min Zhang, Guodong Zhou

Recent studies on AMR-to-text generation often formalize the task as a sequence-to-sequence (seq2seq) learning problem by converting an Abstract Meaning Representation (AMR) graph into a word sequence.

AMR-to-Text Generation Text Generation

Generating Multiple Diverse Responses for Short-Text Conversation

no code implementations14 Nov 2018 Jun Gao, Wei Bi, Xiaojiang Liu, Junhui Li, Shuming Shi

In this paper, we propose a novel response generation model, which considers a set of responses jointly and generates multiple diverse responses simultaneously.

Informativeness Response Generation +1

Electricity consumption forecasting method based on MPSO-BP neural network model

no code implementations21 Oct 2018 Youshan Zhang, Liangdong Guo, Qi Li, Junhui Li

This paper addresses the problem of forecasting electricity consumption with an MPSO-BP neural network model.
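A compact sketch of the underlying idea of optimizing a small back-propagation (BP) network's weights with particle swarm optimization; the network size, inertia, and acceleration constants are textbook values, and the paper's modified PSO variant is not reproduced here:

```python
import numpy as np

def mse(w, X, y, hidden=8):
    """Forward pass of a tiny one-hidden-layer network with weights packed in w."""
    n_in = X.shape[1]
    W1 = w[:n_in * hidden].reshape(n_in, hidden)
    b1 = w[n_in * hidden:n_in * hidden + hidden]
    W2 = w[n_in * hidden + hidden:-1]
    b2 = w[-1]
    pred = np.tanh(X @ W1 + b1) @ W2 + b2
    return np.mean((pred - y) ** 2)

def pso_train(X, y, hidden=8, particles=30, iters=200, seed=0):
    """Plain PSO over the packed weight vector of the network above."""
    rng = np.random.default_rng(seed)
    dim = X.shape[1] * hidden + hidden + hidden + 1
    pos = rng.standard_normal((particles, dim))
    vel = np.zeros_like(pos)
    pbest, pbest_f = pos.copy(), np.array([mse(p, X, y, hidden) for p in pos])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, particles, 1))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = pos + vel
        f = np.array([mse(p, X, y, hidden) for p in pos])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = pos[improved], f[improved]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest
```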

Adaptive Weighting for Neural Machine Translation

1 code implementation COLING 2018 Yachao Li, Junhui Li, Min Zhang

In the popular sequence-to-sequence (seq2seq) neural machine translation (NMT), there exist many weighted sum models (WSMs), each of which takes a set of inputs and generates one output.
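A minimal sketch of a weighted sum whose combination weights are predicted from the inputs themselves rather than fixed, which is one generic reading of "adaptive weighting"; the scoring layer and shapes are illustrative assumptions:

```python
import torch
import torch.nn as nn

class AdaptiveWeightedSum(nn.Module):
    """Weighted sum model whose weights adapt to the inputs."""
    def __init__(self, d_model: int):
        super().__init__()
        self.scorer = nn.Linear(d_model, 1)

    def forward(self, inputs):
        # inputs: (batch, num_inputs, d_model) -> one output of size (batch, d_model)
        weights = torch.softmax(self.scorer(inputs).squeeze(-1), dim=-1)
        return (weights.unsqueeze(-1) * inputs).sum(dim=1)
```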

Machine Translation NMT +1

Attention Focusing for Neural Machine Translation by Bridging Source and Target Embeddings

no code implementations ACL 2018 Shaohui Kuang, Junhui Li, António Branco, Weihua Luo, Deyi Xiong

In neural machine translation, a source sequence of words is encoded into a vector from which a target sequence is generated in the decoding phase.

Machine Translation Sentence +2

Learning When to Attend for Neural Machine Translation

no code implementations31 May 2017 Junhui Li, Muhua Zhu

In the past few years, attention mechanisms have become an indispensable component of end-to-end neural machine translation models.

Machine Translation Translation

Modeling Source Syntax for Neural Machine Translation

no code implementations ACL 2017 Junhui Li, Deyi Xiong, Zhaopeng Tu, Muhua Zhu, Min Zhang, Guodong Zhou

Even though a linguistics-free sequence-to-sequence model in neural machine translation (NMT) has a certain capability of implicitly learning syntactic information about source sentences, this paper shows that source syntax can be explicitly and effectively incorporated into NMT to provide further improvements.
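One simple way to expose source syntax to a seq2seq model is to linearize the constituency parse and mix bracket and label tokens into the source sequence; the paper explores several ways of incorporating syntax, and the sketch below, with a hypothetical nested tree format, shows only this one option:

```python
def linearize_parse(tree):
    """Turn a constituency parse into a mixed sequence of syntax labels and
    words so a standard seq2seq encoder can read the source syntax.
    `tree` is a hypothetical nested structure: (label, children) or a word."""
    if isinstance(tree, str):            # leaf: the word itself
        return [tree]
    label, children = tree
    tokens = ["(" + label]
    for child in children:
        tokens += linearize_parse(child)
    tokens.append(")")
    return tokens

parse = ("S", [("NP", ["the", "boy"]), ("VP", ["sleeps"])])
print(" ".join(linearize_parse(parse)))
# (S (NP the boy ) (VP sleeps ) )
```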

Machine Translation NMT +1
