Search Results for author: Ye Lin

Found 16 papers, 5 papers with code

The NiuTrans Machine Translation System for WMT18

no code implementations WS 2018 Qiang Wang, Bei Li, Jiqiang Liu, Bojian Jiang, Zheyang Zhang, Yinqiao Li, Ye Lin, Tong Xiao, Jingbo Zhu

This paper describes the submission of the NiuTrans neural machine translation system for the WMT 2018 Chinese ↔ English news translation tasks.

Machine Translation, Translation

Unsupervised Many-to-Many Image-to-Image Translation Across Multiple Domains

no code implementations 28 Nov 2019 Ye Lin, Keren Fu, Shenggui Ling, Cheng Peng

To improve the image quality, we propose an effective many-to-many mapping framework for unsupervised multi-domain image-to-image translation.

Translation, Unsupervised Image-To-Image Translation

CSRN: Collaborative Sequential Recommendation Networks for News Retrieval

no code implementations 7 Apr 2020 Bing Bai, Guanhua Zhang, Ye Lin, Hao Li, Kun Bai, Bo Luo

Recurrent Neural Network (RNN)-based sequential recommendation is a popular approach that utilizes users' recent browsing history to predict future items.

Collaborative Filtering, News Retrieval +2
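The RNN-based sequential recommendation idea above can be sketched in a stripped-down form. This is an illustrative toy, not CSRN itself: the `item_emb` table is random, and the mean of the history embeddings stands in for an RNN hidden state.

```python
import numpy as np

rng = np.random.default_rng(0)
n_items, dim = 100, 16
# Hypothetical item embedding table (in a real system this is learned).
item_emb = rng.standard_normal((n_items, dim))

def score_next(history):
    """Score every candidate item by similarity to a summary of the
    user's recent browsing history (mean embedding as a stand-in for
    an RNN state)."""
    state = item_emb[history].mean(axis=0)
    return item_emb @ state

history = [3, 17, 42]              # recently browsed news items
scores = score_next(history)       # one score per candidate item
top5 = np.argsort(scores)[::-1][:5]
print(top5)
```

A trained model would replace both the random embeddings and the mean-pooling with learned components, but the interface (history in, ranked items out) is the same.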

General-Purpose User Embeddings based on Mobile App Usage

1 code implementation 27 May 2020 Junqi Zhang, Bing Bai, Ye Lin, Jian Liang, Kun Bai, Fei Wang

In this paper, we report our recent practice at Tencent for user modeling based on mobile app usage.

Feature Engineering

Towards Fully 8-bit Integer Inference for the Transformer Model

no code implementations 17 Sep 2020 Ye Lin, Yanyang Li, Tengbo Liu, Tong Xiao, Tongran Liu, Jingbo Zhu

8-bit integer inference, as a promising direction in reducing both the latency and storage of deep neural networks, has made great progress recently.

Language Modelling, Quantization +1
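The 8-bit integer inference direction described above can be sketched with symmetric per-tensor int8 quantization. This is a minimal illustration of the general technique, not the paper's exact scheme; `quantize_int8` and `int8_matmul` are hypothetical helpers.

```python
import numpy as np

def quantize_int8(x):
    """Symmetric per-tensor quantization: map floats to [-127, 127]."""
    scale = np.abs(x).max() / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def int8_matmul(a, b):
    """Matrix multiply in int8 with int32 accumulation, then dequantize."""
    qa, sa = quantize_int8(a)
    qb, sb = quantize_int8(b)
    acc = qa.astype(np.int32) @ qb.astype(np.int32)  # int32 accumulator
    return acc.astype(np.float32) * (sa * sb)        # back to float

rng = np.random.default_rng(0)
a = rng.standard_normal((4, 8)).astype(np.float32)
b = rng.standard_normal((8, 4)).astype(np.float32)
approx = int8_matmul(a, b)
exact = a @ b
print(np.max(np.abs(approx - exact)))  # small quantization error
```

The savings come from storing weights in 1 byte instead of 4 and from hardware int8 multiply-accumulate; a fully integer pipeline (as the paper's title suggests) must also handle the non-matmul parts of the Transformer in integers.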

A Simple and Effective Approach to Robust Unsupervised Bilingual Dictionary Induction

no code implementations COLING 2020 Yanyang Li, Yingfeng Luo, Ye Lin, Quan Du, Huizhen Wang, ShuJian Huang, Tong Xiao, Jingbo Zhu

Our experiments show that this simple method does not hamper the performance of similar language pairs and achieves an accuracy of 13.64~55.53% between English and four distant languages, i.e., Chinese, Japanese, Vietnamese and Thai.

Dimensionality Reduction, Self-Learning

An Efficient Transformer Decoder with Compressed Sub-layers

no code implementations 3 Jan 2021 Yanyang Li, Ye Lin, Tong Xiao, Jingbo Zhu

The large attention-based encoder-decoder network (Transformer) has recently become prevalent due to its effectiveness.

Machine Translation, Translation

The NiuTrans System for the WMT21 Efficiency Task

1 code implementation 16 Sep 2021 Chenglong Wang, Chi Hu, Yongyu Mu, Zhongxiang Yan, Siming Wu, Minyi Hu, Hang Cao, Bei Li, Ye Lin, Tong Xiao, Jingbo Zhu

This paper describes the NiuTrans system for the WMT21 translation efficiency task (http://statmt.org/wmt21/efficiency-task.html).

Knowledge Distillation, Translation

Multi-Path Transformer is Better: A Case Study on Neural Machine Translation

no code implementations 10 May 2023 Ye Lin, Shuhan Zhou, Yanyang Li, Anxiang Ma, Tong Xiao, Jingbo Zhu

For years, model performance in machine learning has obeyed a power-law relationship with model size.

Machine Translation
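The power-law relationship between performance and model size mentioned above can be made concrete with a toy scaling curve. The constants `c` and `alpha` are invented for illustration; the point is that on log-log axes such a curve is a straight line with slope `-alpha`.

```python
import numpy as np

# Hypothetical scaling-law constants, for illustration only.
c, alpha = 10.0, 0.07

def power_law_loss(n_params):
    """Loss predicted by a power law L(N) = c * N**(-alpha)."""
    return c * n_params ** (-alpha)

sizes = np.array([1e6, 1e7, 1e8, 1e9])   # model sizes in parameters
losses = power_law_loss(sizes)
# On a log-log plot these points are collinear with slope -alpha.
slopes = np.diff(np.log(losses)) / np.diff(np.log(sizes))
print(slopes)  # ≈ [-0.07, -0.07, -0.07]
```

Each 10x increase in parameters buys a constant multiplicative reduction in loss, which is why further gains eventually demand architectural changes rather than size alone.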

MobileNMT: Enabling Translation in 15MB and 30ms

1 code implementation 7 Jun 2023 Ye Lin, Xiaohui Wang, Zhexi Zhang, Mingxuan Wang, Tong Xiao, Jingbo Zhu

With the co-design of model and engine, compared with the existing system, we achieve a 47.0x speedup and save 99.5% of memory with only an 11.6% loss of BLEU.

Model Compression, NMT +2

Understanding Parameter Sharing in Transformers

no code implementations 15 Jun 2023 Ye Lin, Mingxuan Wang, Zhexi Zhang, Xiaohui Wang, Tong Xiao, Jingbo Zhu

Inspired by this, we tune the training hyperparameters related to model convergence in a targeted manner.

Machine Translation
