1 code implementation • EMNLP 2021 • Jiali Zeng, Shuangzhi Wu, Yongjing Yin, Yufan Jiang, Mu Li
Across an extensive set of experiments on 10 machine translation tasks, we find that RAN models are competitive and outperform their Transformer counterparts in certain scenarios, with fewer parameters and faster inference.
1 code implementation • Findings (ACL) 2022 • Yafu Li, Yongjing Yin, Jing Li, Yue Zhang
Neural machine translation (NMT) has achieved significant performance improvements in recent years.
no code implementations • 17 Sep 2024 • Yongjing Yin, Junran Ding, Kai Song, Yue Zhang
In this paper, we introduce Semformer, a novel method of training a Transformer language model that explicitly models the semantic planning of the response.
1 code implementation • 3 Jun 2024 • Yongjing Yin, Jiali Zeng, Yafu Li, Fandong Meng, Yue Zhang
The fine-tuning of open-source large language models (LLMs) for machine translation has recently received considerable attention, marking a shift from traditional neural machine translation towards data-centric research.
1 code implementation • 21 May 2024 • Yafu Li, Huajian Zhang, Jianhao Yan, Yongjing Yin, Yue Zhang
Recent advances have made non-autoregressive (NAT) translation comparable to autoregressive (AT) methods.
3 code implementations • 27 Dec 2023 • Zijie Yang, Yongjing Yin, Chaojun Kong, Tiange Chi, Wufan Tao, Yue Zhang, Tian Xu
Natural Medicinal Materials (NMMs) have a long history of global clinical applications and a wealth of records and knowledge.
1 code implementation • 6 Nov 2023 • Jiali Zeng, Fandong Meng, Yongjing Yin, Jie Zhou
Contemporary translation engines based on the encoder-decoder framework have made significant progress.
1 code implementation • 10 Jul 2023 • Jiali Zeng, Fandong Meng, Yongjing Yin, Jie Zhou
Open-sourced large language models (LLMs) have demonstrated remarkable efficacy in various tasks with instruction tuning.
1 code implementation • 20 Jun 2023 • Yafu Li, Leyang Cui, Jianhao Yan, Yongjing Yin, Wei Bi, Shuming Shi, Yue Zhang
Most existing text generation models follow the sequence-to-sequence paradigm.
no code implementations • 13 Jun 2023 • Jiali Zeng, Yufan Jiang, Yongjing Yin, Yi Jing, Fandong Meng, Binghuai Lin, Yunbo Cao, Jie Zhou
Multilingual pre-trained language models have demonstrated impressive (zero-shot) cross-lingual transfer abilities; however, their performance is hindered when the target language is typologically distant from the source languages or when pre-training data is limited in size.
no code implementations • 15 Nov 2022 • Jiali Zeng, Yufan Jiang, Yongjing Yin, Xu Wang, Binghuai Lin, Yunbo Cao
We present DualNER, a simple and effective framework to make full use of both annotated source language corpus and unlabeled target language text for zero-shot cross-lingual named entity recognition (NER).
no code implementations • 7 Nov 2022 • Jiali Zeng, Yongjing Yin, Yufan Jiang, Shuangzhi Wu, Yunbo Cao
Specifically, with the help of prompts, we construct a virtual semantic prototype for each instance and derive negative prototypes by using the negative form of the prompts.
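A minimal sketch of this prompt-derived prototype idea is shown below. The prompt templates, the `encode` callable, and the temperature are illustrative assumptions, not the paper's actual implementation; the sketch only shows pulling each instance toward its affirmative-prompt prototype while pushing it away from the negated-prompt one.

```python
# Hypothetical sketch of contrastive learning with prompt-derived prototypes.
# `encode` is assumed to map a list of strings to (batch, dim) embeddings.
import torch
import torch.nn.functional as F

def prototype_contrastive_loss(encode, sentences, temperature=0.05):
    # Virtual semantic prototype: the instance wrapped in an affirmative prompt.
    pos_prompts = [f'The sentence "{s}" means [MASK].' for s in sentences]
    # Negative prototype: the same instance wrapped in a negated prompt.
    neg_prompts = [f'The sentence "{s}" does not mean [MASK].' for s in sentences]

    anchors = encode(sentences)    # (batch, dim) instance embeddings
    pos = encode(pos_prompts)      # (batch, dim) semantic prototypes
    neg = encode(neg_prompts)      # (batch, dim) negative prototypes

    anchors, pos, neg = (F.normalize(t, dim=-1) for t in (anchors, pos, neg))
    pos_sim = (anchors * pos).sum(-1) / temperature   # (batch,)
    neg_sim = (anchors * neg).sum(-1) / temperature   # (batch,)

    # Class 0 = the positive prototype: maximize its similarity relative
    # to the negative prototype via a 2-way cross-entropy.
    logits = torch.stack([pos_sim, neg_sim], dim=-1)  # (batch, 2)
    labels = torch.zeros(logits.size(0), dtype=torch.long, device=logits.device)
    return F.cross_entropy(logits, labels)
```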
1 code implementation • 20 Oct 2022 • Yafu Li, Leyang Cui, Yongjing Yin, Yue Zhang
Despite low latency, non-autoregressive machine translation (NAT) suffers severe performance deterioration due to the naive independence assumption.
no code implementations • COLING 2022 • Yongjing Yin, Yafu Li, Fandong Meng, Jie Zhou, Yue Zhang
Modern neural machine translation (NMT) models have achieved competitive performance in standard benchmarks.
1 code implementation • Findings (ACL) 2022 • Jiali Zeng, Yufan Jiang, Shuangzhi Wu, Yongjing Yin, Mu Li
Pretrained language models (PLMs) trained on large-scale unlabeled corpora are typically fine-tuned on task-specific downstream datasets, which has produced state-of-the-art results on various NLP tasks.
1 code implementation • ACL 2021 • Yafu Li, Yongjing Yin, Yulong Chen, Yue Zhang
Modern neural machine translation (NMT) models have achieved competitive performance in standard benchmarks such as WMT.
1 code implementation • 4 Sep 2020 • Huan Lin, Fandong Meng, Jinsong Su, Yongjing Yin, Zhengyuan Yang, Yubin Ge, Jie Zhou, Jiebo Luo
Particularly, we represent the input image with global and regional visual features, and introduce two parallel DCCNs to model multimodal context vectors with visual features at different granularities (see the sketch below).
Ranked #3 on Multimodal Machine Translation on Multi30K
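The paper's DCCN is a dynamic context-guided capsule network; the simplified stand-in below only illustrates the two-granularity design: two parallel context extractors attend over global and regional visual features, and their outputs are fused. The attention modules, gated fusion, and dimensions are assumptions for illustration.

```python
# Hypothetical simplification of two parallel context modules over global and
# regional image features; not the paper's actual capsule-based DCCN.
import torch
import torch.nn as nn

class TwoGranularityContext(nn.Module):
    def __init__(self, d_model=512, d_img=2048, n_heads=8):
        super().__init__()
        self.proj = nn.Linear(d_img, d_model)
        self.attn_global = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.attn_region = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.gate = nn.Linear(2 * d_model, d_model)

    def forward(self, text_states, global_feat, region_feats):
        # text_states: (B, T, d_model); global_feat: (B, 1, d_img);
        # region_feats: (B, R, d_img)
        g = self.proj(global_feat)
        r = self.proj(region_feats)
        ctx_g, _ = self.attn_global(text_states, g, g)  # coarse-grained context
        ctx_r, _ = self.attn_region(text_states, r, r)  # fine-grained context
        # Gated fusion of the two multimodal context vectors.
        gate = torch.sigmoid(self.gate(torch.cat([ctx_g, ctx_r], dim=-1)))
        return gate * ctx_g + (1 - gate) * ctx_r
```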
1 code implementation • ACL 2020 • Yongjing Yin, Fandong Meng, Jinsong Su, Chulun Zhou, Zhengyuan Yang, Jie zhou, Jiebo Luo
Multi-modal neural machine translation (NMT) aims to translate source sentences into a target language paired with images.
1 code implementation • 16 Dec 2019 • Yongjing Yin, Linfeng Song, Jinsong Su, Jiali Zeng, Chulun Zhou, Jiebo Luo
Sentence ordering is the task of restoring the original paragraph from a set of sentences.
no code implementations • IJCNLP 2019 • Jiali Zeng, Yang Liu, Jinsong Su, Yubin Ge, Yaojie Lu, Yongjing Yin, Jiebo Luo
Previous studies on domain adaptation for neural machine translation (NMT) mainly focus on one-pass transfer of out-of-domain translation knowledge to the in-domain NMT model.
1 code implementation • EMNLP 2018 • Jiali Zeng, Jinsong Su, Huating Wen, Yang Liu, Jun Xie, Yongjing Yin, Jianqiang Zhao
Based on this intuition, in this paper we focus on distinguishing and exploiting word-level domain contexts for multi-domain NMT.