no code implementations • WMT (EMNLP) 2021 • Chenglong Wang, Chi Hu, Yongyu Mu, Zhongxiang Yan, Siming Wu, Yimin Hu, Hang Cao, Bei Li, Ye Lin, Tong Xiao, Jingbo Zhu
This paper describes the NiuTrans system for the WMT21 translation efficiency task.
no code implementations • 6 Jan 2025 • Ye Lin, Jiwei Jia, Young Ju Lee, Ran Zhang
Greedy algorithms, particularly the orthogonal greedy algorithm (OGA), have proven effective in training shallow neural networks for fitting functions and solving partial differential equations (PDEs).
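The greedy idea mentioned above can be illustrated with a minimal numpy sketch of the orthogonal greedy algorithm: at each step, pick the dictionary element (here, a ReLU neuron) most correlated with the current residual, then refit all selected elements by least squares. The dictionary, grid, and target function below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 200)            # sample grid (illustrative)
target = np.sin(np.pi * x)             # function to fit (illustrative)

# Dictionary of ReLU neurons relu(w*x + b) with random weights/biases.
W = rng.uniform(-2, 2, size=100)
B = rng.uniform(-2, 2, size=100)
D = np.maximum(W[None, :] * x[:, None] + B[None, :], 0.0)  # shape (200, 100)
D /= np.linalg.norm(D, axis=0) + 1e-12                     # normalize columns

selected = []
residual = target.copy()
for _ in range(10):                     # 10 greedy steps
    # Greedy selection: element most correlated with the residual.
    k = int(np.argmax(np.abs(D.T @ residual)))
    if k not in selected:
        selected.append(k)
    # Orthogonal step: refit ALL selected neurons jointly by least squares,
    # i.e., project the target onto the span of the selected elements.
    A = D[:, selected]
    coef, *_ = np.linalg.lstsq(A, target, rcond=None)
    residual = target - A @ coef

print(np.linalg.norm(target), np.linalg.norm(residual))
```

The orthogonal projection step is what distinguishes OGA from plain matching pursuit: the residual stays orthogonal to everything already selected, so the fitting error decreases monotonically.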
no code implementations • 4 Jul 2024 • Ye Lin, Young Ju Lee, Jiwei Jia
Second, the asymptotic smoothness property of Green's function is used to leverage the Multi-Level Multi-Integration (MLMI) algorithm for both the training and inference stages.
no code implementations • 15 Jun 2023 • Ye Lin, Mingxuan Wang, Zhexi Zhang, Xiaohui Wang, Tong Xiao, Jingbo Zhu
Inspired by this, we tune the training hyperparameters related to model convergence in a targeted manner.
1 code implementation • 7 Jun 2023 • Ye Lin, Xiaohui Wang, Zhexi Zhang, Mingxuan Wang, Tong Xiao, Jingbo Zhu
With the co-design of model and engine, compared with the existing system, we achieve a 47.0x speedup and save 99.5% of memory, with only an 11.6% loss in BLEU.
no code implementations • 10 May 2023 • Ye Lin, Shuhan Zhou, Yanyang Li, Anxiang Ma, Tong Xiao, Jingbo Zhu
For years, model performance in machine learning has obeyed a power-law relationship with model size.
2 code implementations • WS 2020 • Chi Hu, Bei Li, Ye Lin, Yinqiao Li, Yanyang Li, Chenglong Wang, Tong Xiao, Jingbo Zhu
This paper describes the submissions of the NiuTrans Team to the WNGT 2020 Efficiency Shared Task.
1 code implementation • 16 Sep 2021 • Chenglong Wang, Chi Hu, Yongyu Mu, Zhongxiang Yan, Siming Wu, Minyi Hu, Hang Cao, Bei Li, Ye Lin, Tong Xiao, Jingbo Zhu
This paper describes the NiuTrans system for the WMT21 translation efficiency task (http://statmt.org/wmt21/efficiency-task.html).
1 code implementation • Findings (EMNLP) 2021 • Ye Lin, Yanyang Li, Tong Xiao, Jingbo Zhu
Improving Transformer efficiency has become increasingly attractive recently.
no code implementations • 3 Jan 2021 • Yanyang Li, Ye Lin, Tong Xiao, Jingbo Zhu
The large attention-based encoder-decoder network (Transformer) has recently become prevalent due to its effectiveness.
no code implementations • COLING 2020 • Yanyang Li, Yingfeng Luo, Ye Lin, Quan Du, Huizhen Wang, ShuJian Huang, Tong Xiao, Jingbo Zhu
Our experiments show that this simple method does not hamper the performance of similar language pairs and achieves an accuracy of 13.64~55.53% between English and four distant languages, i.e., Chinese, Japanese, Vietnamese and Thai.
no code implementations • ACL 2021 • Ye Lin, Yanyang Li, Ziyang Wang, Bei Li, Quan Du, Tong Xiao, Jingbo Zhu
Inspired by this, we investigate methods of model acceleration and compression in another line of research.
no code implementations • 17 Sep 2020 • Ye Lin, Yanyang Li, Tengbo Liu, Tong Xiao, Tongran Liu, Jingbo Zhu
8-bit integer inference, as a promising direction in reducing both the latency and storage of deep neural networks, has made great progress recently.
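The basic ingredient of 8-bit integer inference can be sketched as symmetric per-tensor quantization: weights are mapped to int8 with one scale factor and dequantized on the fly. This is a generic sketch of the idea, not the paper's method; real systems add per-channel scales and calibration.

```python
import numpy as np

def quantize(w: np.ndarray):
    """Map float weights to int8 with a single symmetric scale factor."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights from int8 values."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal((4, 4)).astype(np.float32)
q, s = quantize(w)
w_hat = dequantize(q, s)

# Round-trip error is bounded by half a quantization step.
print(np.max(np.abs(w - w_hat)) <= s / 2 + 1e-6)
```

Storing int8 instead of float32 cuts weight storage 4x, and integer matrix multiplies are what deliver the latency gains on supporting hardware.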
1 code implementation • 27 May 2020 • Junqi Zhang, Bing Bai, Ye Lin, Jian Liang, Kun Bai, Fei Wang
In this paper, we report our recent practice at Tencent for user modeling based on mobile app usage.
no code implementations • 7 Apr 2020 • Bing Bai, Guanhua Zhang, Ye Lin, Hao Li, Kun Bai, Bo Luo
Recurrent Neural Network (RNN)-based sequential recommendation is a popular approach that utilizes users' recent browsing history to predict future items.
no code implementations • 28 Nov 2019 • Ye Lin, Keren Fu, Shenggui Ling, Cheng Peng
To improve the image quality, we propose an effective many-to-many mapping framework for unsupervised multi-domain image-to-image translation.
no code implementations • WS 2019 • Bei Li, Yinqiao Li, Chen Xu, Ye Lin, Jiqiang Liu, Hui Liu, Ziyang Wang, Yuhao Zhang, Nuo Xu, Zeyang Wang, Kai Feng, Hexuan Chen, Tengbo Liu, Yanyang Li, Qiang Wang, Tong Xiao, Jingbo Zhu
We participated in 13 translation directions, including 11 supervised tasks, namely EN↔{ZH, DE, RU, KK, LT}, GU→EN and the unsupervised DE↔CS sub-track.
no code implementations • WS 2018 • Qiang Wang, Bei Li, Jiqiang Liu, Bojian Jiang, Zheyang Zhang, Yinqiao Li, Ye Lin, Tong Xiao, Jingbo Zhu
This paper describes the submission of the NiuTrans neural machine translation system for the WMT 2018 Chinese ↔ English news translation tasks.