Search Results for author: Yongjing Yin

Found 18 papers, 12 papers with code

Recurrent Attention for Neural Machine Translation

1 code implementation EMNLP 2021 Jiali Zeng, Shuangzhi Wu, Yongjing Yin, Yufan Jiang, Mu Li

Across an extensive set of experiments on 10 machine translation tasks, we find that RAN models are competitive and outperform their Transformer counterparts in certain scenarios, with fewer parameters and less inference time.

Machine Translation NMT +1

Prompt-Driven Neural Machine Translation

1 code implementation Findings (ACL) 2022 Yafu Li, Yongjing Yin, Jing Li, Yue Zhang

Neural machine translation (NMT) has achieved significant performance improvements in recent years.

Machine Translation NMT +1

AI-driven platform for systematic nomenclature and intelligent knowledge acquisition of natural medicinal materials

3 code implementations 27 Dec 2023 Zijie Yang, Yongjing Yin, Chaojun Kong, Tiange Chi, Wufan Tao, Yue Zhang, Tian Xu

Natural Medicinal Materials (NMMs) have a long history of global clinical applications, accompanied by extensive informational records.

Machine Translation Management

Improving Machine Translation with Large Language Models: A Preliminary Study with Cooperative Decoding

no code implementations 6 Nov 2023 Jiali Zeng, Fandong Meng, Yongjing Yin, Jie Zhou

Contemporary translation engines built upon the encoder-decoder framework have reached a mature level of development, but the emergence of Large Language Models (LLMs) has challenged their dominance by offering the potential for superior translation quality.

Machine Translation NMT +1

TIM: Teaching Large Language Models to Translate with Comparison

1 code implementation 10 Jul 2023 Jiali Zeng, Fandong Meng, Yongjing Yin, Jie Zhou

Open-sourced large language models (LLMs) have demonstrated remarkable efficacy in various tasks with instruction tuning.

Translation

Soft Language Clustering for Multilingual Model Pre-training

no code implementations 13 Jun 2023 Jiali Zeng, Yufan Jiang, Yongjing Yin, Yi Jing, Fandong Meng, Binghuai Lin, Yunbo Cao, Jie Zhou

Multilingual pre-trained language models have demonstrated impressive (zero-shot) cross-lingual transfer abilities; however, their performance is hindered when the target language is typologically distant from the source languages or when the pre-training data is limited in size.

Clustering Question Answering +5

DualNER: A Dual-Teaching framework for Zero-shot Cross-lingual Named Entity Recognition

no code implementations 15 Nov 2022 Jiali Zeng, Yufan Jiang, Yongjing Yin, Xu Wang, Binghuai Lin, Yunbo Cao

We present DualNER, a simple and effective framework to make full use of both annotated source language corpus and unlabeled target language text for zero-shot cross-lingual named entity recognition (NER).

named-entity-recognition Named Entity Recognition +1

Contrastive Learning with Prompt-derived Virtual Semantic Prototypes for Unsupervised Sentence Embedding

no code implementations 7 Nov 2022 Jiali Zeng, Yongjing Yin, Yufan Jiang, Shuangzhi Wu, Yunbo Cao

Specifically, with the help of prompts, we construct virtual semantic prototypes for each instance, and derive negative prototypes by using the negative form of the prompts.

Clustering Contrastive Learning +5

Multi-Granularity Optimization for Non-Autoregressive Translation

1 code implementation 20 Oct 2022 Yafu Li, Leyang Cui, Yongjing Yin, Yue Zhang

Despite low latency, non-autoregressive machine translation (NAT) suffers severe performance deterioration due to the naive independence assumption.

Machine Translation Translation

Task-guided Disentangled Tuning for Pretrained Language Models

1 code implementation Findings (ACL) 2022 Jiali Zeng, Yufan Jiang, Shuangzhi Wu, Yongjing Yin, Mu Li

Pretrained language models (PLMs) trained on large-scale unlabeled corpora are typically fine-tuned on task-specific downstream datasets, which has produced state-of-the-art results on various NLP tasks.

On Compositional Generalization of Neural Machine Translation

1 code implementation ACL 2021 Yafu Li, Yongjing Yin, Yulong Chen, Yue Zhang

Modern neural machine translation (NMT) models have achieved competitive performance in standard benchmarks such as WMT.

Domain Generalization Machine Translation +3

Dynamic Context-guided Capsule Network for Multimodal Machine Translation

1 code implementation 4 Sep 2020 Huan Lin, Fandong Meng, Jinsong Su, Yongjing Yin, Zhengyuan Yang, Yubin Ge, Jie Zhou, Jiebo Luo

In particular, we represent the input image with global and regional visual features, and introduce two parallel DCCNs to model multimodal context vectors with visual features at different granularities.

Multimodal Machine Translation Representation Learning +1

Iterative Dual Domain Adaptation for Neural Machine Translation

no code implementations IJCNLP 2019 Jiali Zeng, Yang Liu, Jinsong Su, Yubin Ge, Yaojie Lu, Yongjing Yin, Jiebo Luo

Previous studies on domain adaptation for neural machine translation (NMT) mainly focus on one-pass transfer of out-of-domain translation knowledge to the in-domain NMT model.

Domain Adaptation Knowledge Distillation +4

Graph-based Neural Sentence Ordering

1 code implementation 16 Dec 2019 Yongjing Yin, Linfeng Song, Jinsong Su, Jiali Zeng, Chulun Zhou, Jiebo Luo

Sentence ordering aims to restore the original paragraph from a set of shuffled sentences.

Sentence Sentence Ordering
