no code implementations • EMNLP 2020 • Dong Zhang, Xincheng Ju, Junhui Li, Shoushan Li, Qiaoming Zhu, Guodong Zhou
In this paper, we focus on multi-label emotion detection in a multi-modal scenario.
no code implementations • WAT 2022 • Yilun Liu, Zhen Zhang, Shimin Tao, Junhui Li, Hao Yang
In this paper, we describe our submission to the NICT–SAP shared tasks of the 9th Workshop on Asian Translation (WAT 2022) under the team name "HwTscSU".
1 code implementation • EMNLP 2021 • Xincheng Ju, Dong Zhang, Rong Xiao, Junhui Li, Shoushan Li, Min Zhang, Guodong Zhou
Therefore, in this paper, we are the first to jointly perform multi-modal ATE (MATE) and multi-modal ASC (MASC), and we propose a multi-modal joint learning approach with auxiliary cross-modal relation detection for multi-modal aspect-level sentiment analysis (MALSA).
no code implementations • EMNLP 2021 • Xinglin Lyu, Junhui Li, ZhengXian Gong, Min Zhang
In this paper, we apply “one translation per discourse” in NMT, aiming to encourage lexical translation consistency for document-level NMT.
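To make the notion of lexical translation consistency concrete, the sketch below computes a simple document-level consistency score: for each repeated source word, it measures how often its most frequent target translation is reused across the document. The alignment-pair input format and the `consistency_score` helper are hypothetical illustrations, not the paper's actual metric.

```python
from collections import Counter, defaultdict

def consistency_score(aligned_pairs):
    """aligned_pairs: list of (source_word, target_word) tuples collected
    from word alignments over all sentences of one document (hypothetical input)."""
    translations = defaultdict(Counter)
    for src, tgt in aligned_pairs:
        translations[src][tgt] += 1
    # A source word is translated "consistently" if its most frequent
    # target word accounts for most of its occurrences in the document.
    scores = []
    for src, counts in translations.items():
        total = sum(counts.values())
        if total > 1:  # only repeated source words matter for consistency
            scores.append(counts.most_common(1)[0][1] / total)
    return sum(scores) / len(scores) if scores else 1.0

# Example: "bank" is translated two different ways -> lower consistency.
pairs = [("bank", "银行"), ("bank", "银行"), ("bank", "河岸"), ("open", "开")]
print(consistency_score(pairs))  # 2/3 for "bank"; "open" occurs once and is ignored
```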
no code implementations • CCL 2021 • Hao Wang, Junhui Li, ZhengXian Gong
In Chinese and other pro-drop languages, pronouns that can be inferred from context are usually omitted. Although neural machine translation models represented by the Transformer have achieved great success, this omission phenomenon still poses a considerable challenge to them. This paper proposes a Transformer-based translation model that incorporates zero-anaphora recognition and introduces document-level context to enrich the anaphoric information. Specifically, the model adopts a joint learning framework: on top of the translation model, it adds a classification task that identifies the syntactic role of the omitted pronoun in the sentence, so that the model can exploit zero-anaphora information to assist translation. Experiments on a Chinese-English dialogue dataset verify the effectiveness of the proposed method, which improves translation performance by 1.48 BLEU over the baseline model.
no code implementations • CCL 2021 • Ziyi Huang, Junhui Li, ZhengXian Gong
Abstract Meaning Representation (AMR) abstracts the semantics of a given text into a single-rooted directed acyclic graph, and AMR parsing derives the corresponding AMR graph from input text. Compared with English AMR, research on Chinese AMR started later, so studies on Chinese AMR parsing remain scarce. This paper studies Chinese AMR parsing with sequence-to-sequence methods on the public Chinese AMR corpus CAMR 1.0. Specifically, we first implement a Transformer-based sequence-to-sequence AMR parsing system for Chinese, and then explore and compare different pre-trained models for Chinese AMR parsing. On this corpus, our best system achieves a Smatch F1 score of 70.29, and this paper is the first to report experimental results on this dataset.
no code implementations • CCL 2020 • Linqing Chen, Junhui Li, ZhengXian Gong
How to effectively exploit document-level context has long been a major challenge in document-level neural machine translation. This paper proposes to improve document-level NMT with a hierarchical global context derived from the entire document. To this end, the model captures the dependencies between each word in the current sentence and all sentences and words in the document, and combines dependencies at different levels to obtain a global context carrying hierarchical document information, so that every word in the current source sentence receives its own context integrating word- and sentence-level dependencies. To fully exploit parallel sentence pairs during training, we adopt a two-step training strategy: a model first trained on sentence-level data is further trained on document-level data to acquire the ability to capture global context. Experiments on several benchmark datasets show that the proposed model achieves meaningful improvements in translation quality over several strong baselines, and that context enriched with hierarchical document information outperforms word-level context alone. In addition, we combine the global context with the translation model in different ways to observe the effect on performance, and present a preliminary analysis of how the global context is distributed across the document.
no code implementations • CCL 2020 • Jie Zhu, Junhui Li
AMR-to-Text generation is the task of producing text that expresses the same meaning as a given AMR graph; it can be viewed as a machine translation task from a source-side AMR graph to a target-side sentence. Existing methods mainly explore how to better model the graph structure. However, they all leave the task under-constrained, because many syntactic decisions in the generation stage are not governed by the semantic graph, ignoring the syntactic information latent in the sentence. To address this shortcoming explicitly, this paper proposes a simple yet effective method that explicitly incorporates syntactic information into AMR-to-Text generation, with experiments on the Transformer and on the previous state-of-the-art model for this task. Results show significant improvements on two standard English datasets, LDC2018E86 and LDC2017T10, achieving new state-of-the-art performance.
no code implementations • 7 Sep 2024 • YiHeng Wu, Junhui Li, Muhua Zhu
Previous approaches to the task of implicit discourse relation recognition (IDRR) generally view it as a classification task.
no code implementations • 17 Jul 2024 • Junhui Li, Xingsong Hou
In this paper, we propose a codebook-based RS image compression (Code-RSIC) method with a generated discrete codebook, which is deployed at the decoding end of a compression algorithm to provide inter-image similarity prior.
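The abstract does not give the codebook details; as a rough sketch of the general idea behind a learned discrete codebook queried at the decoding end, the snippet below shows a plain nearest-neighbour codebook lookup (standard vector quantization). The codebook size and feature dimension are arbitrary placeholders, not the Code-RSIC configuration.

```python
import numpy as np

rng = np.random.default_rng(0)
codebook = rng.standard_normal((512, 64))   # 512 codewords, 64-dim features (arbitrary sizes)

def quantize(features):
    """Map each feature vector to its nearest codeword (generic VQ lookup,
    not the Code-RSIC architecture itself)."""
    # squared Euclidean distance between every feature and every codeword
    d = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    indices = d.argmin(axis=1)          # discrete codes to store or transmit
    return indices, codebook[indices]   # decoder rebuilds features from indices

feats = rng.standard_normal((10, 64))
codes, recon = quantize(feats)
print(codes.shape, recon.shape)  # (10,) (10, 64)
```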
no code implementations • 13 Jun 2024 • Pu Wang, Junhui Li, Jialu Li, Liangdong Guo, Youshan Zhang
To overcome these challenges, we propose a DiffGMM model, a denoising model based on the diffusion and Gaussian mixture models.
1 code implementation • 6 Jun 2024 • Junhui Li, Jutao Li, Xingsong Hou, Huake Wang
However, few algorithms leverage the compression distortion prior from existing compression algorithms to improve rate-distortion (RD) performance.
1 code implementation • 17 May 2024 • Jie Zhu, Junhui Li, Yalong Wen, Lifan Guo
The datasets and scripts associated with CFLUE are openly accessible at https://github.com/aliyun/cflue.
no code implementations • 17 May 2024 • Junhui Li, Xingsong Hou
Decoding remote sensing images to achieve high perceptual quality, particularly at low bitrates, remains a significant challenge.
1 code implementation • 24 Apr 2024 • Qinxin Wang, Jiayuan Huang, Junhui Li, Jiaming Liu
In this paper, we present a novel method to predict survival time by better clustering the survival data and combining primitive distributions.
1 code implementation • 23 Feb 2024 • Xinglin Lyu, Junhui Li, Yanqing Zhao, Daimeng Wei, Shimin Tao, Hao Yang, Min Zhang
In this paper, we propose an alternative adaptation approach, named Decoding-enhanced Multi-phase Prompt Tuning (DeMPT), to make LLMs discriminately model and utilize the inter- and intra-sentence context and more effectively adapt LLMs to context-aware NMT.
no code implementations • 6 Feb 2024 • Junhui Li, Jieying Lu, Weizhou Su
By proposing a key parameter, the coefficient of frequency variation, to characterize the correlation of the stochastic uncertainties, we present a necessary and sufficient condition for the mean-square stability of this MIMO stochastic feedback system.
no code implementations • 16 Jan 2024 • Yachao Li, Junhui Li, Jing Jiang, Min Zhang
Our proposed translation mixed-instructions enable LLMs (Llama-2 7B and 13B) to maintain consistent translation performance from the sentence level to documents containing as many as 2048 tokens.
no code implementations • 30 Oct 2023 • Junhui Li, Pu Wang, Jialu Li, Xinzhe Wang, Youshan Zhang
Recent high-performance transformer-based speech enhancement models demonstrate that time domain methods could achieve similar performance as time-frequency domain methods.
no code implementations • 18 Jul 2023 • Huake Wang, Xiaoyang Yan, Xingsong Hou, Junhui Li, Yujie Dun, Kaibing Zhang
Low-light image enhancement strives to improve the contrast, adjust the visibility, and restore the distortion in color and texture.
no code implementations • 15 Jul 2023 • Lei Pan, Wuyang Luan, Yuan Zheng, Qiang Fu, Junhui Li
The model achieves a more comprehensive feature representation through features that connect global and local information.
no code implementations • 22 May 2023 • Junhui Li, Xingsong Hou, Huake Wang, Shuhao Bi
In this paper, to overcome these issues and develop a high-performance LDAMP method for image block compressed sensing (BCS), we propose a novel sparsity and coefficient permutation-based AMP (SCP-AMP) method consisting of a block-based sampling module and a two-domain reconstruction module.
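The block-based sampling step mentioned above can be illustrated with a generic BCS measurement: the image is split into non-overlapping blocks and each block is measured with the same random sensing matrix. The block size and sampling ratio below are arbitrary, and the permutation and AMP reconstruction stages of SCP-AMP are not shown.

```python
import numpy as np

def block_cs_sample(image, block=32, ratio=0.25, seed=0):
    """Generic block-based compressed-sensing measurement (illustrative only)."""
    rng = np.random.default_rng(seed)
    m = int(ratio * block * block)
    phi = rng.standard_normal((m, block * block)) / np.sqrt(m)  # shared sensing matrix
    measurements = []
    h, w = image.shape
    for i in range(0, h, block):
        for j in range(0, w, block):
            x = image[i:i + block, j:j + block].reshape(-1)
            measurements.append(phi @ x)  # m measurements per block
    return phi, np.stack(measurements)

img = np.random.rand(128, 128)
phi, y = block_cs_sample(img)
print(y.shape)  # (16, 256): 16 blocks, 256 measurements each
```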
1 code implementation • 12 Dec 2022 • Yachao Li, Junhui Li, Jing Jiang, Shimin Tao, Hao Yang, Min Zhang
To alleviate this problem, we propose a position-aware Transformer (P-Transformer) to enhance both the absolute and relative position information in both self-attention and cross-attention.
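As a rough illustration of the relative-position half of this idea (P-Transformer enhances both absolute and relative position information; only a relative-position bias is shown here), the sketch below adds a learned relative-position bias to scaled dot-product attention scores. Dimensions and the maximum relative distance are placeholders, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

class RelPosAttention(nn.Module):
    """Scaled dot-product attention with a learned relative-position bias
    (generic sketch; not the exact P-Transformer formulation)."""
    def __init__(self, d_model=64, max_dist=16):
        super().__init__()
        self.q = nn.Linear(d_model, d_model)
        self.k = nn.Linear(d_model, d_model)
        self.v = nn.Linear(d_model, d_model)
        self.rel_bias = nn.Embedding(2 * max_dist + 1, 1)  # one bias per clipped distance
        self.max_dist = max_dist

    def forward(self, x):                       # x: (batch, length, d_model)
        q, k, v = self.q(x), self.k(x), self.v(x)
        scores = q @ k.transpose(-2, -1) / x.size(-1) ** 0.5
        pos = torch.arange(x.size(1))
        rel = (pos[None, :] - pos[:, None]).clamp(-self.max_dist, self.max_dist) + self.max_dist
        scores = scores + self.rel_bias(rel).squeeze(-1)   # add relative-position bias
        return scores.softmax(dim=-1) @ v

out = RelPosAttention()(torch.randn(2, 10, 64))
print(out.shape)  # torch.Size([2, 10, 64])
```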
no code implementations • 27 Apr 2022 • Jieying Lu, Junhui Li, Weizhou Su
A necessary and sufficient condition for mean-square (input-output) stability of the networked feedback system is studied in terms of both the input-output model and the state-space model.
1 code implementation • 23 Sep 2021 • Linh Kästner, Junhui Li, Zhengcheng Shen, Jens Lambrecht
In this paper, we propose a semantic Deep-reinforcement-learning-based navigation approach that teaches object-specific safety rules by considering high-level obstacle information.
no code implementations • 3 Sep 2021 • Junhui Li, Jieying Lu, Weizhou Su
This paper addresses the mean-square optimal control problem for a class of discrete-time linear systems with quasi-colored, control-dependent multiplicative noise via output feedback.
no code implementations • 29 Aug 2021 • Weizhou Su, Junhui Li, Jieying Lu
In this unreliable channel, the data transmission times, referred to as channel-induced delays, are random, and the transmitted data may also be dropped with a certain probability.
no code implementations • ACL 2021 • Linqing Chen, Junhui Li, ZhengXian Gong, Boxing Chen, Weihua Luo, Min Zhang, Guodong Zhou
To this end, we propose two pre-training tasks.
no code implementations • ACL 2021 • Dongqin Xu, Junhui Li, Muhua Zhu, Min Zhang, Guodong Zhou
We hope that knowledge gained while learning for English AMR parsing and text generation can be transferred to the counterparts of other languages.
1 code implementation • EMNLP 2020 • Dongqin Xu, Junhui Li, Muhua Zhu, Min Zhang, Guodong Zhou
In the literature, research on abstract meaning representation (AMR) parsing is largely restricted by the size of human-curated datasets, which are critical to building an AMR parser with good performance.
Ranked #15 on AMR Parsing on LDC2017T10 (using extra training data)
no code implementations • IJCNLP 2019 • Jun Gao, Wei Bi, Xiaojiang Liu, Junhui Li, Guodong Zhou, Shuming Shi
In this paper, we introduce a discrete latent variable with an explicit semantic meaning to improve the CVAE on short-text conversation.
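The paper assigns its discrete latent variable an explicit semantic meaning; the snippet below only illustrates the generic mechanics of sampling a discrete latent differentiably, using the Gumbel-softmax trick as a stand-in technique (an assumption for illustration, not necessarily the paper's exact method).

```python
import torch
import torch.nn.functional as F

def sample_discrete_latent(logits, tau=1.0, hard=True):
    """Differentiable sample from a categorical latent via Gumbel-softmax.
    `logits` would come from the recognition or prior network of a CVAE."""
    return F.gumbel_softmax(logits, tau=tau, hard=hard)  # one-hot vectors in the forward pass

logits = torch.randn(4, 10)          # batch of 4, 10 latent categories (placeholder sizes)
z = sample_discrete_latent(logits)
print(z.shape, z.sum(dim=-1))        # each row is a one-hot vector summing to 1
```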
1 code implementation • IJCNLP 2019 • Jie Zhu, Junhui Li, Muhua Zhu, Longhua Qian, Min Zhang, Guodong Zhou
Recent studies on AMR-to-text generation often formalize the task as a sequence-to-sequence (seq2seq) learning problem by converting an Abstract Meaning Representation (AMR) graph into a word sequence.
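A seq2seq formulation of AMR-to-text generation first requires linearizing the graph into a token sequence. The sketch below shows one simple depth-first linearization of a small PENMAN-style AMR, a common choice in the literature; the exact preprocessing used in the paper may differ.

```python
def linearize(node, visited=None):
    """Depth-first linearization of an AMR graph into a token list.
    A node is (variable, concept, [(relation, child_node), ...])."""
    if visited is None:
        visited = set()
    var, concept, edges = node
    if var in visited:              # re-entrant node: emit only the variable
        return [var]
    visited.add(var)
    tokens = ["(", concept]
    for rel, child in edges:
        tokens += [rel] + linearize(child, visited)
    tokens.append(")")
    return tokens

# "The boy wants to go": want-01 with :ARG0 boy and :ARG1 go-02 (re-entrant boy)
boy = ("b", "boy", [])
amr = ("w", "want-01", [(":ARG0", boy), (":ARG1", ("g", "go-02", [(":ARG0", boy)]))])
print(" ".join(linearize(amr)))
# ( want-01 :ARG0 ( boy ) :ARG1 ( go-02 :ARG0 b ) )
```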
no code implementations • 14 Nov 2018 • Jun Gao, Wei Bi, Xiaojiang Liu, Junhui Li, Shuming Shi
In this paper, we propose a novel response generation model, which considers a set of responses jointly and generates multiple diverse responses simultaneously.
no code implementations • 21 Oct 2018 • Youshan Zhang, Liangdong Guo, Qi Li, Junhui Li
This paper deals with the problem of electricity consumption forecasting.
1 code implementation • COLING 2018 • Yachao Li, Junhui Li, Min Zhang
In popular sequence-to-sequence (seq2seq) neural machine translation (NMT), there exist many weighted sum models (WSMs), each of which takes a set of inputs and generates one output.
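A weighted sum model in this sense can be written in a few lines: given a set of encoder states and a decoder query, attention computes normalized weights and returns the weighted sum of the inputs. The sketch below is a generic dot-product version with placeholder dimensions, not the specific multi-WSM architecture of the paper.

```python
import numpy as np

def weighted_sum(query, inputs):
    """Generic attention-style weighted sum: one output from a set of inputs.
    query: (d,), inputs: (n, d)."""
    scores = inputs @ query                      # one score per input
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                     # softmax-normalized weights
    return weights @ inputs                      # single output vector

rng = np.random.default_rng(0)
enc_states = rng.standard_normal((5, 8))         # 5 encoder states, 8-dim (placeholder sizes)
dec_query = rng.standard_normal(8)
print(weighted_sum(dec_query, enc_states).shape)  # (8,)
```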
no code implementations • ACL 2018 • Shaohui Kuang, Junhui Li, António Branco, Weihua Luo, Deyi Xiong
In neural machine translation, a source sequence of words is encoded into a vector from which a target sequence is generated in the decoding phase.
no code implementations • 31 May 2017 • Junhui Li, Muhua Zhu
In the past few years, attention mechanisms have become an indispensable component of end-to-end neural machine translation models.
no code implementations • ACL 2017 • Junhui Li, Deyi Xiong, Zhaopeng Tu, Muhua Zhu, Min Zhang, Guodong Zhou
Even though a linguistics-free sequence to sequence model in neural machine translation (NMT) has certain capability of implicitly learning syntactic information of source sentences, this paper shows that source syntax can be explicitly incorporated into NMT effectively to provide further improvements.