no code implementations • WMT (EMNLP) 2021 • Han Yang, Bojie Hu, Wanying Xie, Ambyera Han, Pan Liu, Jinan Xu, Qi Ju
This paper describes TenTrans’ submission to WMT21 Multilingual Low-Resource Translation shared task for the Romance language pairs.
no code implementations • WMT (EMNLP) 2021 • Kaixin Wu, Bojie Hu, Qi Ju
This paper describes TenTrans’s submissions to the WMT 2021 Efficiency Shared Task.
no code implementations • WMT (EMNLP) 2021 • Wanying Xie, Bojie Hu, Han Yang, Dong Yu, Qi Ju
This paper describes the TenTrans large-scale multilingual machine translation system for WMT 2021.
no code implementations • EMNLP 2020 • Zhen Yang, Bojie Hu, Ambyera Han, Shen Huang, Qi Ju
Unlike traditional pre-training methods, which randomly mask some fragments of the input sentence, the proposed CSP randomly replaces some words in the source sentence with their translations in the target language.
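The code-switching replacement described above can be illustrated with a toy sketch. The function name, bilingual lexicon, and replacement probability here are illustrative assumptions, not details from the paper:

```python
import random

def code_switch(tokens, lexicon, replace_prob=0.15, seed=0):
    """Randomly replace source words that have a dictionary entry
    with their target-language translation (CSP-style corruption)."""
    rng = random.Random(seed)
    out = []
    for tok in tokens:
        if tok in lexicon and rng.random() < replace_prob:
            out.append(lexicon[tok])  # swap in the translation word
        else:
            out.append(tok)           # keep the source word
    return out

# toy en->fr dictionary (hypothetical)
lexicon = {"cat": "chat", "sat": "assis"}
print(code_switch("the cat sat on the mat".split(), lexicon, replace_prob=1.0))
# -> ['the', 'chat', 'assis', 'on', 'the', 'mat']
```

The pre-training target would then be to predict the original source words, analogous to masked-word prediction but grounded in cross-lingual signal.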
1 code implementation • 5 Sep 2024 • Qi Ju, Falin Hei, Zhemei Fang, Yunfeng Luo
Reinforcement Learning (RL) is highly dependent on the meticulous design of the reward function.
no code implementations • 19 May 2023 • Xingyu Bai, Taiqiang Wu, Han Guo, Zhe Zhao, Xuefeng Yang, Jiayi Li, Weijie Liu, Qi Ju, Weigang Guo, Yujiu Yang
Event Extraction (EE), aiming to identify and classify event triggers and arguments from event mentions, has benefited from pre-trained language models (PLMs).
1 code implementation • Findings (NAACL) 2022 • Kunbo Ding, Weijie Liu, Yuejian Fang, Zhe Zhao, Qi Ju, Xuefeng Yang
Previous studies have proved that cross-lingual knowledge distillation can significantly improve the performance of pre-trained models for cross-lingual similarity matching tasks.
no code implementations • 24 Feb 2022 • Yuan Wang, Wei Zhuo, Yucong Li, Zhi Wang, Qi Ju, Wenwu Zhu
To solve this problem, we propose a bootstrapped training scheme for semantic segmentation, which fully leverages global semantic knowledge for self-supervision with our proposed PGG strategy and CAE module.
Ranked #16 on Unsupervised Semantic Segmentation on COCO-Stuff-27
1 code implementation • 14 Feb 2022 • Weijie Liu, Tao Zhu, Weiquan Mao, Zhe Zhao, Weigang Guo, Xuefeng Yang, Qi Ju
In this paper, we pay attention to an issue that is usually overlooked, i.e., that similarity should be determined from different perspectives.
no code implementations • 8 Oct 2021 • Ke Zhang, Sihong Chen, Qi Ju, Yong Jiang, Yucong Li, Xin He
The graph network that is established with patches as the nodes can maximize the mutual learning of similar objects.
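A minimal sketch of the patch-graph idea above: patches become nodes, and edges connect patches whose features are similar, so that similar objects can exchange information. The threshold, feature vectors, and function names are illustrative assumptions, not the paper's construction:

```python
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def patch_graph(features, threshold=0.9):
    """Connect patch nodes whose feature similarity exceeds a
    threshold; message passing over these edges lets similar
    objects learn from each other."""
    n = len(features)
    return [(i, j)
            for i in range(n) for j in range(i + 1, n)
            if cosine(features[i], features[j]) >= threshold]

feats = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]]  # toy patch embeddings
print(patch_graph(feats, threshold=0.9))
# -> [(0, 1)]  (only the two similar patches are linked)
```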
no code implementations • 7 Jun 2021 • Bowen Zhao, Chen Chen, Qi Ju, Shutao Xia
Training on class-imbalanced data usually results in biased models that tend to predict samples into the majority classes, which is a common and notorious problem.
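One standard remedy for the majority-class bias described above (a common baseline, not necessarily the method this paper proposes) is to reweight the loss inversely to class frequency:

```python
from collections import Counter

def inverse_freq_weights(labels):
    """Weight each class inversely to its frequency so that rare
    classes contribute as much to the loss as common ones."""
    counts = Counter(labels)
    total = len(labels)
    return {c: total / (len(counts) * n) for c, n in counts.items()}

# toy 9:1 imbalanced label set (hypothetical)
labels = ["cat"] * 90 + ["dog"] * 10
print(inverse_freq_weights(labels))
# -> {'cat': 0.555..., 'dog': 5.0}  (minority class upweighted 9x)
```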
no code implementations • ACL 2021 • Chen Xu, Bojie Hu, Yanyang Li, Yuhao Zhang, Shen Huang, Qi Ju, Tong Xiao, Jingbo Zhu
To our knowledge, we are the first to develop an end-to-end ST system that achieves comparable or even better BLEU performance than the cascaded ST counterpart when large-scale ASR and MT data are available.
Automatic Speech Recognition (ASR) +4
no code implementations • COLING 2020 • Chen Xu, Bojie Hu, Yufan Jiang, Kai Feng, Zeyang Wang, Shen Huang, Qi Ju, Tong Xiao, Jingbo Zhu
This eases training by highlighting easy samples that the current model has enough competence to learn.
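The easy-sample filtering described above can be sketched as a competence-based selection step; the difficulty measure (normalized sentence length) and function names here are illustrative assumptions:

```python
def select_batch(samples, difficulty, competence):
    """Competence-based curriculum: keep only samples whose
    difficulty does not exceed the model's current competence,
    which grows over the course of training."""
    return [s for s in samples if difficulty(s) <= competence]

# toy difficulty: sentence length normalized to [0, 1]
sents = ["a b", "a b c d", "a b c d e f g h"]
diff = lambda s: min(len(s.split()) / 8.0, 1.0)
print(select_batch(sents, diff, competence=0.5))
# -> ['a b', 'a b c d']  (longest sentence deferred to later epochs)
```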
Low-Resource Neural Machine Translation +3
no code implementations • 17 Sep 2020 • Zhen Yang, Bojie Hu, Ambyera Han, Shen Huang, Qi Ju
Unlike traditional pre-training methods, which randomly mask some fragments of the input sentence, the proposed CSP randomly replaces some words in the source sentence with their translations in the target language.
3 code implementations • ACL 2020 • Weijie Liu, Peng Zhou, Zhe Zhao, Zhiruo Wang, Haotang Deng, Qi Ju
Pre-trained language models like BERT have proven to be highly performant.
no code implementations • 3 Jan 2020 • Pei Xu, Shan Huang, Hongzhen Wang, Hao Song, Shen Huang, Qi Ju
Chinese keyword spotting is a challenging task as there is no visual blank for Chinese words.
1 code implementation • 20 Nov 2019 • Chen Chen, Mengyuan Liu, Xiandong Meng, Wanpeng Xiao, Qi Ju
Therefore, high-efficiency object detectors for CPU-only devices are urgently needed in industry.
2 code implementations • arXiv 2019 • Weijie Liu, Peng Zhou, Zhe Zhao, Zhiruo Wang, Qi Ju, Haotang Deng, Ping Wang
For machines to achieve this capability, we propose a knowledge-enabled language representation model (K-BERT) with knowledge graphs (KGs), in which triples are injected into the sentences as domain knowledge.
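The triple-injection idea can be illustrated with a flattened toy sketch (K-BERT additionally uses soft positions and a visible matrix to keep injected knowledge from distorting the sentence, which this sketch omits; the KG and names are hypothetical):

```python
def inject_triples(tokens, kg):
    """Append matching KG triples (relation, object) directly
    after the entity token they attach to, forming an
    expanded, knowledge-enriched token sequence."""
    out = []
    for tok in tokens:
        out.append(tok)
        for rel, obj in kg.get(tok, []):
            out.extend([rel, obj])  # inject domain knowledge
    return out

kg = {"Beijing": [("capital_of", "China")]}  # toy knowledge graph
print(inject_triples("Tim visited Beijing today".split(), kg))
# -> ['Tim', 'visited', 'Beijing', 'capital_of', 'China', 'today']
```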
1 code implementation • IJCNLP 2019 • Zhe Zhao, Hui Chen, Jinbin Zhang, Xin Zhao, Tao Liu, Wei Lu, Xi Chen, Haotang Deng, Qi Ju, Xiaoyong Du
Existing works, including ELMO and BERT, have revealed the importance of pre-training for NLP tasks.
1 code implementation • 18 May 2018 • Chen Chen, Shuai Mu, Wanpeng Xiao, Zexiong Ye, Liesi Wu, Qi Ju
In this paper, we propose a novel conditional-generative-adversarial-nets-based image captioning framework as an extension of traditional reinforcement-learning (RL)-based encoder-decoder architecture.