no code implementations • COLING 2022 • Zhanyu Ma, Jian Ye, Xurui Yang, Jianfeng Liu
Increasingly, task-oriented dialogue systems need to serve users in different languages.
no code implementations • WMT (EMNLP) 2020 • Wei Peng, Jianfeng Liu, Minghan Wang, Liangyou Li, Xupeng Meng, Hao Yang, Qun Liu
This paper describes Huawei’s submissions to the WMT20 biomedical translation shared task.
1 code implementation • 15 Mar 2023 • Daixuan Cheng, Shaohan Huang, Junyu Bi, Yuefeng Zhan, Jianfeng Liu, Yujing Wang, Hao Sun, Furu Wei, Denvy Deng, Qi Zhang
Large Language Models (LLMs) are popular for their impressive abilities, but the need for model-specific fine-tuning or task-specific prompt engineering can hinder their generalization.
1 code implementation • 27 Feb 2023 • Nuo Chen, Hongguang Li, Yinan Bao, Junqing He, Xinshi Lin, Qi Yang, Jianfeng Liu, Ruyi Gan, Jiaxing Zhang, Baoyuan Wang, Jia Li
Thus, a model's comprehension ability in real-world scenarios is hard to evaluate reasonably.
1 code implementation • 11 Jul 2022 • Yixiong Liang, Shuo Feng, Qing Liu, Hulin Kuang, Jianfeng Liu, Liyan Liao, Yun Du, Jianxin Wang
To mimic these behaviors, we propose to explore contextual relationships to boost the performance of cervical abnormal cell detection.
no code implementations • COLING 2020 • Jianfeng Liu, Ling Luo, Xiang Ao, Yan Song, Haoran Xu, Jian Ye
Multi-source neural machine translation aims to translate from parallel sources of information (e.g., languages, images, etc.)
no code implementations • 24 May 2020 • Jianfeng Liu, Feiyang Pan, Ling Luo
A chatbot that converses like a human should be goal-oriented (i.e., be purposeful in conversation), which is beyond language generation.
no code implementations • 6 Apr 2020 • Choon Meng Lee, Jianfeng Liu, Wei Peng
In training deep neural networks, the optimizer and its learning rate are often chosen without much thought or with minimal tuning, even though they are crucial for fast convergence to a good-quality minimum of the loss function that also generalizes well on the test dataset.
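As a generic illustration of the kind of learning-rate tuning this line refers to (not the method proposed in the paper), the following minimal PyTorch sketch contrasts a plain fixed learning rate with a linear warm-up plus cosine-decay schedule; the toy model, synthetic data, and all hyper-parameter values are illustrative assumptions.

```python
# Minimal sketch (illustrative assumptions, not the paper's method):
# warm-up + cosine-decay learning-rate schedule on a toy regression task.
import math
import torch

model = torch.nn.Linear(10, 1)                       # toy model
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

total_steps, warmup_steps = 1000, 100

def lr_lambda(step):
    # Linear warm-up, then cosine decay toward zero.
    if step < warmup_steps:
        return step / warmup_steps
    progress = (step - warmup_steps) / (total_steps - warmup_steps)
    return 0.5 * (1.0 + math.cos(math.pi * progress))

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)

for step in range(total_steps):
    x, y = torch.randn(32, 10), torch.randn(32, 1)   # synthetic batch
    loss = torch.nn.functional.mse_loss(model(x), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    scheduler.step()                                  # update the learning rate
```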
no code implementations • WS 2019 • Wei Peng, Jianfeng Liu, Liangyou Li, Qun Liu
This paper describes Huawei's neural machine translation systems for the WMT 2019 biomedical translation shared task.
1 code implementation • 21 Dec 2018 • Yixiong Liang, Yuan Mao, Zhihong Tang, Meng Yan, Yuqian Zhao, Jianfeng Liu
Our method provides a flexible and efficient way to integrate complementary and redundant information from multiple unregistered multi-focus ultra-HD images into a fused image that contains a better description than any of the individual input images.
1 code implementation • 30 Oct 2018 • Yixiong Liang, Yuan Mao, Jiazhi Xia, Yao Xiang, Jianfeng Liu
Specifically, we propose a scale-invariant structure saliency selection scheme based on the difference-of-Gaussian (DoG) pyramid of images to build the weights or activity map.
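For intuition, here is a minimal sketch of a difference-of-Gaussians (DoG) style activity map for multi-focus fusion, in the spirit of the scheme mentioned above but not the paper's exact method; the sigma ladder, the absolute-DoG response, the max-over-scales aggregation, and the per-pixel winner-take-all fusion are illustrative assumptions.

```python
# Minimal sketch (illustrative assumptions, not the paper's exact scheme):
# DoG-pyramid structure saliency as an activity map for multi-focus fusion.
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_activity_map(img, sigmas=(1.0, 2.0, 4.0, 8.0)):
    """Per-pixel structure saliency from |difference-of-Gaussians| responses."""
    img = img.astype(np.float64)
    blurred = [gaussian_filter(img, s) for s in sigmas]
    # DoG response between adjacent pyramid levels; keep magnitudes only.
    dogs = [np.abs(blurred[i] - blurred[i + 1]) for i in range(len(sigmas) - 1)]
    return np.max(np.stack(dogs, axis=0), axis=0)    # strongest response per pixel

def fuse_two(img_a, img_b):
    """Pick, per pixel, the source image with the higher activity (focus) score."""
    w_a, w_b = dog_activity_map(img_a), dog_activity_map(img_b)
    return np.where(w_a >= w_b, img_a, img_b)

# Usage with synthetic grayscale inputs:
a = np.random.rand(256, 256)
b = np.random.rand(256, 256)
fused = fuse_two(a, b)
```

In practice the activity maps would typically be smoothed or refined before fusion to avoid blocky seams; the hard per-pixel selection above is kept only to keep the sketch short.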