Search Results for author: Bei Li

Found 22 papers, 9 papers with code

Learning Multiscale Transformer Models for Sequence Generation

1 code implementation • 19 Jun 2022 • Bei Li, Tong Zheng, Yi Jing, Chengbo Jiao, Tong Xiao, Jingbo Zhu

In this work, we define those scales in different linguistic units, including sub-words, words and phrases.

On Vision Features in Multimodal Machine Translation

1 code implementation • ACL 2022 • Bei Li, Chuanhao Lv, Zefan Zhou, Tao Zhou, Tong Xiao, Anxiang Ma, Jingbo Zhu

Previous work on multimodal machine translation (MMT) has focused on how to incorporate vision features into translation, but little attention has been paid to the quality of the vision models themselves.

Image Captioning · Multimodal Machine Translation · +3

The NiuTrans System for the WMT21 Efficiency Task

1 code implementation • 16 Sep 2021 • Chenglong Wang, Chi Hu, Yongyu Mu, Zhongxiang Yan, Siming Wu, Minyi Hu, Hang Cao, Bei Li, Ye Lin, Tong Xiao, Jingbo Zhu

This paper describes the NiuTrans system for the WMT21 translation efficiency task (http://statmt.org/wmt21/efficiency-task.html).

Knowledge Distillation · Translation

High-resolution chirplet transform: from parameters analysis to parameters combination

no code implementations • 2 Aug 2021 • Xiangxiang Zhu, Bei Li, Kunde Yang, Zhuosheng Zhang, Wenting Li

The standard chirplet transform (CT) with a chirp-modulated Gaussian window provides a valuable tool for analyzing linear chirp signals.

Learning Light-Weight Translation Models from Deep Transformer

1 code implementation • 27 Dec 2020 • Bei Li, Ziyang Wang, Hui Liu, Quan Du, Tong Xiao, Chunliang Zhang, Jingbo Zhu

We propose a novel group-permutation-based knowledge distillation approach to compress the deep Transformer model into a shallow model.

Knowledge Distillation · Machine Translation · +1

Shallow-to-Deep Training for Neural Machine Translation

1 code implementation • EMNLP 2020 • Bei Li, Ziyang Wang, Hui Liu, Yufan Jiang, Quan Du, Tong Xiao, Huizhen Wang, Jingbo Zhu

We find that stacking layers helps improve the representation ability of NMT models and that adjacent layers perform similarly.

Machine Translation · Translation

A multilayer interstitial fluid flow along vascular adventitia

no code implementations • 23 Sep 2020 • Hongyi Li, You Lv, Xiaoliang Chen, Bei Li, Qi Hua, Fusui Ji, Yajun Yin, Hua Li

In real-time observations, the calculated velocity of a continuous ISF flow along fibers of a PACT pathway was 3.6-15.6 mm/sec.

Does Multi-Encoder Help? A Case Study on Context-Aware Neural Machine Translation

1 code implementation • ACL 2020 • Bei Li, Hui Liu, Ziyang Wang, Yufan Jiang, Tong Xiao, Jingbo Zhu, Tongran Liu, Changliang Li

In encoder-decoder neural models, multiple encoders are generally used to represent the contextual information in addition to the individual sentence.

Machine Translation · Translation

The NiuTrans Machine Translation System for WMT18

no code implementations • WS 2018 • Qiang Wang, Bei Li, Jiqiang Liu, Bojian Jiang, Zheyang Zhang, Yinqiao Li, Ye Lin, Tong Xiao, Jingbo Zhu

This paper describes the submission of the NiuTrans neural machine translation system for the WMT 2018 Chinese ↔ English news translation tasks.

Machine Translation · Translation
