no code implementations • ACL 2022 • Ruiqing Zhang, Zhongjun He, Hua Wu, Haifeng Wang
End-to-end simultaneous speech-to-text translation aims to directly perform translation from streaming source speech to target text with high translation quality and low latency.
1 code implementation • Findings (ACL) 2022 • Le Qi, Shangwen Lv, Hongyu Li, Jing Liu, Yu Zhang, Qiaoqiao She, Hua Wu, Haifeng Wang, Ting Liu
Open-domain question answering has been used in a wide range of applications, such as web search and enterprise search, which usually takes clean texts extracted from various formats of documents (e.g., web pages, PDFs, or Word documents) as the information source.
1 code implementation • EMNLP 2021 • Moye Chen, Wei Li, Jiachen Liu, Xinyan Xiao, Hua Wu, Haifeng Wang
Compared with traditional methods, our method has two main advantages: (1) the relations between sentences are captured by modeling both the graph structure of the whole document set and the candidate sub-graphs; (2) it directly outputs an integrated summary in the form of a sub-graph, which is more informative and coherent.
no code implementations • NAACL (AutoSimTrans) 2022 • Ruiqing Zhang, Chuanqiang Zhang, Zhongjun He, Hua Wu, Haifeng Wang, Liang Huang, Qun Liu, Julia Ive, Wolfgang Macherey
This paper reports the results of the shared task we hosted on the Third Workshop of Automatic Simultaneous Translation (AutoSimTrans).
no code implementations • EMNLP (NLP4ConvAI) 2021 • Xinxian Huang, Huang He, Siqi Bao, Fan Wang, Hua Wu, Haifeng Wang
Large-scale conversation models are turning to leveraging external knowledge to improve the factual accuracy in response generation.
no code implementations • EMNLP 2020 • Lijie Wang, Ao Zhang, Kun Wu, Ke Sun, Zhenghua Li, Hua Wu, Min Zhang, Haifeng Wang
This paper describes in detail the construction process and data statistics of DuSQL.
no code implementations • EMNLP 2020 • Ruiqing Zhang, Chuanqiang Zhang, Zhongjun He, Hua Wu, Haifeng Wang
The policy learns to segment the source text by considering possible translations produced by the translation model, maintaining consistency between the segmentation and translation.
no code implementations • NAACL (AutoSimTrans) 2021 • Ruiqing Zhang, Chuanqiang Zhang, Zhongjun He, Hua Wu, Haifeng Wang
This paper presents the results of the shared task of the 2nd Workshop on Automatic Simultaneous Translation (AutoSimTrans).
no code implementations • 19 Feb 2025 • Yilong Chen, Junyuan Shang, Zhenyu Zhang, Yanxi Xie, Jiawei Sheng, Tingwen Liu, Shuohuan Wang, Yu Sun, Hua Wu, Haifeng Wang
Large language models (LLMs) face inherent performance bottlenecks under parameter constraints, particularly in processing critical tokens that demand complex reasoning.
no code implementations • 19 Feb 2025 • Naibin Gu, Zhenyu Zhang, Xiyu Liu, Peng Fu, Zheng Lin, Shuohuan Wang, Yu Sun, Hua Wu, Weiping Wang, Haifeng Wang
Due to the demand for efficient fine-tuning of large language models, Low-Rank Adaptation (LoRA) has been widely adopted as one of the most effective parameter-efficient fine-tuning methods.
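The LoRA idea referenced above can be sketched numerically: a frozen weight matrix is adapted through a trainable low-rank product, so only a small fraction of parameters is updated. This is a minimal illustration with hypothetical sizes, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: a frozen d_out x d_in weight and a rank-r update, r << d.
d_out, d_in, r = 64, 64, 4

W = rng.standard_normal((d_out, d_in))     # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, r))                   # trainable up-projection, zero-init

def lora_forward(x, scale=1.0):
    """Adapted forward pass: (W + scale * B @ A) @ x, computed without
    ever materializing the full-rank update."""
    return W @ x + scale * (B @ (A @ x))

x = rng.standard_normal(d_in)
# With B zero-initialized, the adapted model starts identical to the base model.
assert np.allclose(lora_forward(x), W @ x)
# Only A and B are trained: 512 parameters here instead of 4096.
assert A.size + B.size < W.size
```

The zero-initialized `B` is the standard trick that makes the adapted model coincide with the pretrained one at the start of fine-tuning.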
no code implementations • 5 Feb 2025 • Fan Wang, Pengtao Shao, Yiming Zhang, Bo Yu, Shaoshan Liu, Ning Ding, Yang Cao, Yu Kang, Haifeng Wang
We introduce OmniRL, a highly generalizable in-context reinforcement learning (ICRL) model that is meta-trained on hundreds of thousands of diverse tasks.
1 code implementation • 20 Jan 2025 • Haoran Sun, Yekun Chai, Shuohuan Wang, Yu Sun, Hua Wu, Haifeng Wang
Reinforcement learning from human feedback (RLHF) has proven effective in aligning large language models (LLMs) with human preferences, but often at the cost of reduced output diversity.
no code implementations • 6 Jan 2025 • Ting Zhao, Zhuoxu Cui, Congcong Liu, Xingyang Wu, Yihang Zhou, Dong Liang, Haifeng Wang
Simultaneous Multi-Slice (SMS) is a magnetic resonance imaging (MRI) technique that excites several slices concurrently using multiband radiofrequency pulses to reduce scanning time.
no code implementations • 7 Dec 2024 • Yilong Chen, Junyuan Shang, Zhengyu Zhang, Jiawei Sheng, Tingwen Liu, Shuohuan Wang, Yu Sun, Hua Wu, Haifeng Wang
MOHD offers a new perspective for scaling the model, showcasing the potential of hidden dimension sparsity to boost efficiency.
no code implementations • 27 Nov 2024 • Miao Fan, Jizhou Huang, An Zhuo, Ying Li, Ping Li, Haifeng Wang
We have conducted extensive experiments with the large-scale urban data of several metropolises in China.
1 code implementation • 27 Nov 2024 • Miao Fan, Jizhou Huang, Haifeng Wang
With the increased popularity of mobile devices, Web mapping services have become an indispensable tool in our daily lives.
no code implementations • 12 Oct 2024 • Shuo Zhou, Yihang Zhou, Congcong Liu, Yanjie Zhu, Hairong Zheng, Dong Liang, Haifeng Wang
Magnetic resonance image reconstruction from undersampled k-space data requires the recovery of many potential nonlinear features, which are very difficult for algorithms to recover.
no code implementations • 9 Oct 2024 • Chaoguang Gong, Yue Hu, Peng Li, Lixian Zou, Congcong Liu, Yihang Zhou, Yanjie Zhu, Dong Liang, Haifeng Wang
Sequence optimization is crucial for improving the accuracy and efficiency of MRF.
1 code implementation • 2 Oct 2024 • Guoxia Wang, Jinle Zeng, Xiyuan Xiao, Siming Wu, Jiabin Yang, Lujing Zheng, Zeyu Chen, Jiang Bian, Dianhai Yu, Haifeng Wang
In this paper, we propose FlashMask, an extension of FlashAttention that introduces a column-wise sparse representation of attention masks.
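The column-wise sparse idea can be sketched as follows for a causal mask: instead of storing an n×n boolean matrix, store for each key column the contiguous range of masked query rows. This is an illustrative reconstruction, not FlashMask's actual data layout or API; `mask_start`, `mask_end`, and `reconstruct` are hypothetical names.

```python
import numpy as np

n = 6  # sequence length (illustrative)

# Dense causal mask: query i may attend to key j only when j <= i.
dense = np.tril(np.ones((n, n), dtype=bool))

# Column-wise sparse form: for each key column j, store the contiguous
# range of masked query rows [mask_start[j], mask_end[j]). For a causal
# mask that range is [0, j) -- O(n) memory instead of O(n^2).
mask_start = np.zeros(n, dtype=int)
mask_end = np.arange(n)

def reconstruct(starts, ends, n):
    """Expand the column-wise ranges back into a dense boolean mask."""
    m = np.ones((n, n), dtype=bool)
    for j in range(n):
        m[starts[j]:ends[j], j] = False  # masked queries for key column j
    return m

assert np.array_equal(reconstruct(mask_start, mask_end, n), dense)
```

The point of the representation is that a kernel can skip fully masked tile regions by comparing tile boundaries against the per-column ranges, without ever materializing the dense mask.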
no code implementations • 7 Sep 2024 • Abdur Rahman, Jason Street, James Wooten, Mohammad Marufuzzaman, Veera G. Gude, Randy Buchanan, Haifeng Wang
This study explores the use of deep learning and machine vision to predict moisture content classes from RGB images of wood chips.
1 code implementation • 7 Sep 2024 • Abdur Rahman, Lu He, Haifeng Wang
Finally, we statistically investigate the generalization of the resultant activation functions developed through the optimization scheme.
1 code implementation • 5 Sep 2024 • Jizhou Huang, Haifeng Wang, Yibo Sun, Miao Fan, Zhengjie Huang, Chunyuan Yuan, Yawen Li
To mitigate challenge #2, we construct edges between POI and query nodes based on the co-occurrences between queries and POIs, where queries in different languages and formulations can be aggregated for individual POIs.
no code implementations • 8 Aug 2024 • Ling Lin, Yihang Zhou, Zhanqi Hu, Dian Jiang, Congcong Liu, Shuo Zhou, Yanjie Zhu, Jianxiang Liao, Dong Liang, Hairong Zheng, Haifeng Wang
Tuberous sclerosis complex (TSC) manifests as a multisystem disorder with significant neurological implications.
no code implementations • 7 Aug 2024 • Taofeng Xie, Zhuoxu Cui, Congcong Liu, Chen Luo, Huayu Wang, Yuanzhi Zhang, Xuemei Wang, Yihang Zhou, Qiyu Jin, Guoqing Chen, Dong Liang, Haifeng Wang
In this study, PET is generated from MRI by learning the joint probability distribution as the relationship between the two modalities.
no code implementations • 5 Aug 2024 • Ting Zhao, Zhuoxu Cui, Sen Jia, Qingyong Zhu, Congcong Liu, Yihang Zhou, Yanjie Zhu, Dong Liang, Haifeng Wang
Diffusion model has been successfully applied to MRI reconstruction, including single and multi-coil acquisition of MRI data.
no code implementations • 19 Jul 2024 • Hongyi Liu, Haifeng Wang
In addition to reviewing the development of each branch of vision-based motion measurement methods, this paper also discusses the advantages and disadvantages of existing methods.
1 code implementation • 9 Jul 2024 • Jiankun Li, Hao Li, JiangJiang Liu, Zhikang Zou, Xiaoqing Ye, Fan Wang, Jizhou Huang, Hua Wu, Haifeng Wang
Deep learning-based models are widely deployed in autonomous driving areas, especially the increasingly noticed end-to-end solutions.
1 code implementation • 8 Jul 2024 • Yumeng Zhang, Shi Gong, Kaixin Xiong, Xiaoqing Ye, Xiao Tan, Fan Wang, Jizhou Huang, Hua Wu, Haifeng Wang
The world model consists of two parts: the multi-modal tokenizer and the latent BEV sequence diffusion model.
no code implementations • 7 Apr 2024 • Haifeng Wang, Hao Xu, Jun Wang, Jian Zhou, Ke Deng
Recognizing various surgical tools, actions and phases from surgery videos is an important problem in computer vision with exciting clinical applications.
no code implementations • 27 Feb 2024 • Ruiyang Ren, Peng Qiu, Yingqi Qu, Jing Liu, Wayne Xin Zhao, Hua Wu, Ji-Rong Wen, Haifeng Wang
Due to the excellent capacities of large language models (LLMs), it becomes feasible to develop LLM-based agents for reliable user simulation.
1 code implementation • 11 Jan 2024 • Pengzhi Gao, Zhongjun He, Hua Wu, Haifeng Wang
The training paradigm for machine translation has gradually shifted, from learning neural machine translation (NMT) models with extensive parallel corpora to instruction finetuning on multilingual large language models (LLMs) with high-quality translation pairs.
no code implementations • 24 Nov 2023 • Taofeng Xie, Zhuo-Xu Cui, Chen Luo, Huayu Wang, Congcong Liu, Yuanzhi Zhang, Xuemei Wang, Yanjie Zhu, Guoqing Chen, Dong Liang, Qiyu Jin, Yihang Zhou, Haifeng Wang
The complementary information can contribute to image reconstruction.
1 code implementation • 6 Nov 2023 • Wenxin Wang, Zhuo-Xu Cui, Guanxun Cheng, Chentao Cao, Xi Xu, Ziwei Liu, Haifeng Wang, Yulong Qi, Dong Liang, Yanjie Zhu
However, current supervised learning methods require extensively annotated images and the state-of-the-art generative models used in unsupervised methods often have limitations in covering the whole data distribution.
no code implementations • 7 Oct 2023 • Yuanyuan Liu, Zhuo-Xu Cui, Shucong Qin, Congcong Liu, Hairong Zheng, Haifeng Wang, Yihang Zhou, Dong Liang, Yanjie Zhu
Long scan time significantly hinders the widespread applications of three-dimensional multi-contrast cardiac magnetic resonance (3D-MC-CMR) imaging.
no code implementations • 8 Sep 2023 • Yanrui Du, Sendong Zhao, Yuhan Chen, Rai Bai, Jing Liu, Hua Wu, Haifeng Wang, Bing Qin
To address this issue, it is crucial to analyze and mitigate the influence of superficial clues on STM models.
no code implementations • 30 Aug 2023 • Zhuo-Xu Cui, Congcong Liu, Xiaohong Fan, Chentao Cao, Jing Cheng, Qingyong Zhu, Yuanyuan Liu, Sen Jia, Yihang Zhou, Haifeng Wang, Yanjie Zhu, Jianping Zhang, Qiegen Liu, Dong Liang
In order to enhance interpretability and overcome the acceleration limitations, this paper introduces an interpretable framework that unifies both $k$-space interpolation techniques and image-domain methods, grounded in the physical principles of heat diffusion equations.
1 code implementation • 28 Aug 2023 • Pengzhi Gao, Ruiqing Zhang, Zhongjun He, Hua Wu, Haifeng Wang
Consistency regularization methods, such as R-Drop (Liang et al., 2021) and CrossConST (Gao et al., 2023), have achieved impressive supervised and zero-shot performance in the neural machine translation (NMT) field.
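The consistency-regularization term used by R-Drop-style methods can be sketched as a symmetric KL divergence between two stochastic forward passes of the same input. This is a generic sketch of the loss shape, not the papers' implementations; function names are illustrative.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def kl(p, q):
    return np.sum(p * (np.log(p) - np.log(q)), axis=-1)

def consistency_loss(logits1, logits2, alpha=1.0):
    """Symmetric KL between two stochastic forward passes of the same
    input; alpha weights the regularizer against the task loss."""
    p, q = softmax(logits1), softmax(logits2)
    return alpha * 0.5 * (kl(p, q) + kl(q, p)).mean()

rng = np.random.default_rng(0)
z = rng.standard_normal((4, 10))  # 4 positions, 10-token vocabulary
assert np.isclose(consistency_loss(z, z), 0.0)  # identical passes: no penalty
assert consistency_loss(z, 2 * z) > 0           # diverging passes are penalized
```

In practice the two logits come from two dropout-perturbed passes (R-Drop) or from different input views such as source and target sentences (CrossConST).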
no code implementations • 13 Aug 2023 • Yongheng Sun, Fan Wang, Jun Shu, Haifeng Wang, Li Wang, Deyu Meng, Chunfeng Lian
However, segmentation on longitudinal data is challenging due to dynamic brain changes across the lifespan.
no code implementations • 5 Aug 2023 • Fanshi Li, Zhihui Wang, Yifan Guo, Congcong Liu, Yanjie Zhu, Yihang Zhou, Jun Li, Dong Liang, Haifeng Wang
In this paper, a dynamic dual-graph fusion convolutional network is proposed to improve Alzheimer's disease (AD) diagnosis performance.
1 code implementation • 20 Jul 2023 • Ruiyang Ren, Yuhao Wang, Yingqi Qu, Wayne Xin Zhao, Jing Liu, Hao Tian, Hua Wu, Ji-Rong Wen, Haifeng Wang
In this study, we present the first analysis on the factual knowledge boundaries of LLMs and how retrieval augmentation affects LLMs on open-domain question answering (QA), with a bunch of important findings.
1 code implementation • 12 Jun 2023 • Pengzhi Gao, Liwen Zhang, Zhongjun He, Hua Wu, Haifeng Wang
Multilingual sentence representations are the foundation for similarity-based bitext mining, which is crucial for scaling multilingual neural machine translation (NMT) systems to more languages.
no code implementations • 18 May 2023 • Ruiyang Ren, Wayne Xin Zhao, Jing Liu, Hua Wu, Ji-Rong Wen, Haifeng Wang
Recently, model-based retrieval has emerged as a new paradigm in text retrieval that discards the index in the traditional retrieval model and instead memorizes the candidate corpora using model parameters.
1 code implementation • 12 May 2023 • Pengzhi Gao, Liwen Zhang, Zhongjun He, Hua Wu, Haifeng Wang
The experimental analysis also proves that CrossConST could close the sentence representation gap and better align the representation space.
no code implementations • 6 May 2023 • Taofeng Xie, Chentao Cao, Zhuoxu Cui, Yu Guo, Caiying Wu, Xuemei Wang, Qingneng Li, Zhanli Hu, Tao Sun, Ziru Sang, Yihang Zhou, Yanjie Zhu, Dong Liang, Qiyu Jin, Hongwu Zeng, Guoqing Chen, Haifeng Wang
The joint probability distribution (JPD) of MRI and noise-added PET was learned in the diffusion process.
no code implementations • 4 May 2023 • Zhuo-Xu Cui, Congcong Liu, Chentao Cao, Yuanyuan Liu, Jing Cheng, Qingyong Zhu, Yanjie Zhu, Haifeng Wang, Dong Liang
We theoretically uncovered that the combination of these challenges renders conventional deep learning methods that directly learn the mapping from a low-field MR image to a high-field MR image unsuitable.
no code implementations • ICCV 2023 • Yongheng Sun, Fan Wang, Jun Shu, Haifeng Wang, Li Wang, Deyu Meng, Chunfeng Lian
However, segmentation on longitudinal data is challenging due to dynamic brain changes across the lifespan.
no code implementations • 9 Nov 2022 • Bin Shan, Yaqian Han, Weichong Yin, Shuohuan Wang, Yu Sun, Hao Tian, Hua Wu, Haifeng Wang
Recent cross-lingual cross-modal works attempt to extend Vision-Language Pre-training (VLP) models to non-English inputs and achieve impressive performance.
Ranked #1 on Multimodal Machine Translation on Multi30K
no code implementations • 7 Nov 2022 • Guohao Li, Hu Yang, Feng He, Zhifan Feng, Yajuan Lyu, Hua Wu, Haifeng Wang
To this end, we propose a Cross-modaL knOwledge-enhanced Pre-training (CLOP) method with Knowledge Regularizations.
no code implementations • 2 Nov 2022 • Siqi Bao, Huang He, Jun Xu, Hua Lu, Fan Wang, Hua Wu, Han Zhou, Wenquan Wu, Zheng-Yu Niu, Haifeng Wang
Recently, the practical deployment of open-domain dialogue systems has been plagued by the knowledge issue of information deficiency and factual inaccuracy.
2 code implementations • CVPR 2023 • Zhida Feng, Zhenyu Zhang, Xintong Yu, Yewei Fang, Lanxin Li, Xuyi Chen, Yuxiang Lu, Jiaxiang Liu, Weichong Yin, Shikun Feng, Yu Sun, Li Chen, Hao Tian, Hua Wu, Haifeng Wang
Recent progress in diffusion models has revolutionized the popular technology of text-to-image generation.
Ranked #12 on Text-to-Image Generation on MS COCO
no code implementations • 24 Oct 2022 • Jia Guo, Haifeng Wang, Chenping Hou
We point out that it is not sufficient to only consider the residual loss in adaptive sampling and sampling should obey temporal causality.
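One way to encode the temporal-causality constraint on adaptive sampling is to downweight a time slice until the cumulative residual of all earlier slices is small. The sketch below assumes per-slice mean PDE residuals as input; the paper's exact weighting scheme may differ.

```python
import numpy as np

def causal_sampling_weights(residuals_per_slice, eps=1.0):
    """Sampling weights over time slices that respect temporal causality:
    a slice receives mass only once the cumulative PDE residual of all
    *earlier* slices is small."""
    r = np.asarray(residuals_per_slice, dtype=float)
    cum_prev = np.concatenate(([0.0], np.cumsum(r)[:-1]))  # residual before slice t
    w = np.exp(-eps * cum_prev)
    return w / w.sum()

# A large residual in the first slice suppresses sampling of later slices.
w = causal_sampling_weights([5.0, 0.1, 0.1, 0.1])
assert w[0] > w[1] > w[2] > w[3]
assert np.isclose(w.sum(), 1.0)
```

Purely residual-proportional sampling would instead concentrate points wherever the residual is large, regardless of whether earlier dynamics have been resolved; the exponential of the cumulative earlier residual is what enforces the causal ordering.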
no code implementations • 21 Oct 2022 • Yekun Chai, Shuohuan Wang, Yu Sun, Hao Tian, Hua Wu, Haifeng Wang
Derivative-free prompt learning has emerged as a lightweight alternative to prompt tuning, which only requires model inference to optimize the prompts.
2 code implementations • 12 Oct 2022 • Qiming Peng, Yinxu Pan, Wenjin Wang, Bin Luo, Zhenyu Zhang, Zhengjie Huang, Teng Hu, Weichong Yin, Yongfeng Chen, Yin Zhang, Shikun Feng, Yu Sun, Hao Tian, Hua Wu, Haifeng Wang
Recent years have witnessed the rise and success of pre-training techniques in visually-rich document understanding.
Ranked #2 on Semantic Entity Labeling on FUNSD
1 code implementation • 30 Sep 2022 • Bin Shan, Weichong Yin, Yu Sun, Hao Tian, Hua Wu, Haifeng Wang
They attempt to learn cross-modal representations using contrastive learning on image-text pairs; however, the inter-modal correlations built this way rely on only a single view for each modality.
Ranked #1 on Image Retrieval on AIC-ICC
no code implementations • 2 Sep 2022 • Zhuo-Xu Cui, Chentao Cao, Shaonan Liu, Qingyong Zhu, Jing Cheng, Haifeng Wang, Yanjie Zhu, Dong Liang
Recently, score-based diffusion models have shown satisfactory performance in MRI reconstruction.
1 code implementation • 30 Aug 2022 • Hua Lu, Siqi Bao, Huang He, Fan Wang, Hua Wu, Haifeng Wang
Many open-domain dialogue models pre-trained with social media comments can generate coherent replies but have difficulties producing engaging responses when interacting with real users.
no code implementations • 15 Aug 2022 • Jizhou Huang, Zhengjie Huang, Xiaomin Fang, Shikun Feng, Xuyi Chen, Jiaxiang Liu, Haitao Yuan, Haifeng Wang
In this work, we focus on modeling traffic congestion propagation patterns to improve ETA performance.
1 code implementation • 11 Aug 2022 • Zhuo-Xu Cui, Sen Jia, Qingyong Zhu, Congcong Liu, Zhilang Qiu, Yuanyuan Liu, Jing Cheng, Haifeng Wang, Yanjie Zhu, Dong Liang
Recently, untrained neural networks (UNNs) have shown satisfactory performances for MR image reconstruction on random sampling trajectories without using additional full-sampled training data.
no code implementations • 17 Jul 2022 • Yuanyuan Liu, Dong Liang, Zhuo-Xu Cui, Yuxin Yang, Chentao Cao, Qingyong Zhu, Jing Cheng, Caiyun Shi, Haifeng Wang, Yanjie Zhu
Prospective reconstruction results further demonstrate the capability of the SMART method in accelerating MR T1ρ imaging.
no code implementations • 28 Jun 2022 • Han Zhou, Xinchao Xu, Wenquan Wu, Zheng-Yu Niu, Hua Wu, Siqi Bao, Fan Wang, Haifeng Wang
Making chatbots world aware in a conversation like a human is a crucial challenge, where the world may contain dynamic knowledge and spatiotemporal state.
no code implementations • 22 Jun 2022 • Xinyu Zhang, Peng Peng, Yushan Zhou, Haifeng Wang, Wenxin Li
First, analysis based on the simplified payoff table is inaccurate.
1 code implementation • NAACL 2022 • Pengzhi Gao, Zhongjun He, Hua Wu, Haifeng Wang
We introduce Bi-SimCut: a simple but effective training strategy to boost neural machine translation (NMT) performance.
Ranked #1 on Machine Translation on WMT2014 German-English
1 code implementation • 25 May 2022 • Yanrui Du, Jing Yan, Yan Chen, Jing Liu, Sendong Zhao, Qiaoqiao She, Hua Wu, Haifeng Wang, Bing Qin
In this study, we focus on the spurious correlation between word features and labels that models learn from the biased data distribution of training data.
no code implementations • 23 May 2022 • Lijie Wang, Yaozong Shen, Shuyuan Peng, Shuai Zhang, Xinyan Xiao, Hao Liu, Hongxuan Tang, Ying Chen, Hua Wu, Haifeng Wang
Based on this benchmark, we conduct experiments on three typical models with three saliency methods, and unveil their strengths and weaknesses in terms of interpretability.
no code implementations • 18 May 2022 • Yuxiang Lu, Yiding Liu, Jiaxiang Liu, Yunsheng Shi, Zhengjie Huang, Shikun Feng, Yu Sun, Hao Tian, Hua Wu, Shuaiqiang Wang, Dawei Yin, Haifeng Wang
Our method 1) introduces a self on-the-fly distillation method that can effectively distill late interaction (i.e., ColBERT) to a vanilla dual-encoder, and 2) incorporates a cascade distillation process to further improve the performance with a cross-encoder teacher.
no code implementations • 17 May 2022 • Shanzhuo Zhang, Zhiyuan Yan, Yueyang Huang, Lihang Liu, Donglong He, Wei Wang, Xiaomin Fang, Xiaonan Zhang, Fan Wang, Hua Wu, Haifeng Wang
Additionally, the pre-trained model provided by H-ADMET can be fine-tuned to generate new and customised ADMET endpoints, meeting various demands of drug research and development.
no code implementations • 27 Apr 2022 • Ruiyang Ren, Yingqi Qu, Jing Liu, Wayne Xin Zhao, Qifei Wu, Yuchen Ding, Hua Wu, Haifeng Wang, Ji-Rong Wen
Recent years have witnessed the significant advance in dense retrieval (DR) based on powerful pre-trained language models (PLM).
no code implementations • 22 Apr 2022 • Shihang Wang, Xinchao Xu, Wenquan Wu, Zheng-Yu Niu, Hua Wu, Haifeng Wang
In this task, the agent conducts empathetic responses along with the target of eliciting the user's positive emotions in the multi-turn dialog.
no code implementations • ACL 2022 • Zeming Liu, Jun Xu, Zeyang Lei, Haifeng Wang, Zheng-Yu Niu, Hua Wu
For example, users have determined the departure, the destination, and the travel time for booking a flight.
1 code implementation • 5 Apr 2022 • Hui Tao, Haifeng Wang, Shanshan Wang, Dong Liang, Xiaoling Xu, Qiegen Liu
Parallel imaging is widely used in magnetic resonance imaging as an acceleration technology.
no code implementations • 23 Mar 2022 • Yang Liu, Jiaxiang Liu, Li Chen, Yuxiang Lu, Shikun Feng, Zhida Feng, Yu Sun, Hao Tian, Hua Wu, Haifeng Wang
We argue that two factors, information bottleneck sensitivity and inconsistency between different attention topologies, could affect the performance of the Sparse Transformer.
2 code implementations • 19 Mar 2022 • Yifu Qiu, Hongyu Li, Yingqi Qu, Ying Chen, Qiaoqiao She, Jing Liu, Hua Wu, Haifeng Wang
In this paper, we present DuReader_retrieval, a large-scale Chinese dataset for passage retrieval.
no code implementations • 17 Mar 2022 • Jizhou Huang, Haifeng Wang, Yibo Sun, Yunsheng Shi, Zhengjie Huang, An Zhuo, Shikun Feng
One of the main reasons for this plateau is the lack of readily available geographic knowledge in generic PTMs.
1 code implementation • Findings (ACL) 2022 • Wei Li, Can Gao, Guocheng Niu, Xinyan Xiao, Hao Liu, Jiachen Liu, Hua Wu, Haifeng Wang
In particular, we propose to conduct grounded learning on both images and texts via a sharing grounded space, which helps bridge unaligned images and texts, and align the visual and textual semantic spaces on different types of corpora.
1 code implementation • Findings (ACL) 2022 • Xinchao Xu, Zhibin Gou, Wenquan Wu, Zheng-Yu Niu, Hua Wu, Haifeng Wang, Shihang Wang
Most of the open-domain dialogue models tend to perform poorly in the setting of long-term human-bot conversations.
no code implementations • 5 Feb 2022 • Peiying Zhang, Xingzhe Huang, Yaqi Wang, Chunxiao Jiang, Shuqing He, Haifeng Wang
Experimental results show that the matching accuracy of the sentence similarity calculation method based on multi-model nonlinear fusion is 84%, and the F1 value of the model is 75%.
no code implementations • 3 Jan 2022 • Yibin Wang, Haifeng Wang, Zhaohua Peng
In this research, an attention-based depthwise separable neural network with Bayesian optimization (ADSNN-BO) is proposed to detect and classify rice disease from rice leaf images.
no code implementations • 3 Jan 2022 • Yibin Wang, Abdur Rahman, W. Neil Duggar, P. Russell Roberts, Toms V. Thomas, Linkan Bian, Haifeng Wang
However, manual annotation of lymph node region is a required data preprocessing step in most of the current ML-based ECE diagnosis studies.
2 code implementations • 31 Dec 2021 • Han Zhang, Weichong Yin, Yewei Fang, Lanxin Li, Boqiang Duan, Zhihua Wu, Yu Sun, Hao Tian, Hua Wu, Haifeng Wang
To explore the landscape of large-scale pre-training for bidirectional text-image generation, we train a 10-billion-parameter ERNIE-ViLG model on a large-scale dataset of 145 million (Chinese) image-text pairs, which achieves state-of-the-art performance for both text-to-image and image-to-text tasks, obtaining an FID of 7.9 on MS-COCO for text-to-image synthesis and best results on COCO-CN and AIC-ICC for image captioning.
Ranked #41 on Text-to-Image Generation on MS COCO
no code implementations • 23 Dec 2021 • Xin Tian, Xinxian Huang, Dongfeng He, Yingzhan Lin, Siqi Bao, Huang He, Liankai Huang, Qiang Ju, Xiyuan Zhang, Jian Xie, Shuqi Sun, Fan Wang, Hua Wu, Haifeng Wang
Task-oriented dialogue systems have been plagued by the difficulties of obtaining large-scale and high-quality annotated conversations.
3 code implementations • 23 Dec 2021 • Shuohuan Wang, Yu Sun, Yang Xiang, Zhihua Wu, Siyu Ding, Weibao Gong, Shikun Feng, Junyuan Shang, Yanbin Zhao, Chao Pang, Jiaxiang Liu, Xuyi Chen, Yuxiang Lu, Weixin Liu, Xi Wang, Yangfan Bai, Qiuliang Chen, Li Zhao, Shiyong Li, Peng Sun, Dianhai Yu, Yanjun Ma, Hao Tian, Hua Wu, Tian Wu, Wei Zeng, Ge Li, Wen Gao, Haifeng Wang
A unified framework named ERNIE 3.0 was recently proposed for pre-training large-scale knowledge-enhanced models, and a model with 10 billion parameters was trained with it.
no code implementations • 18 Dec 2021 • Zhuo-Xu Cui, Jing Cheng, Qingyong Zhu, Yuanyuan Liu, Sen Jia, Kankan Zhao, Ziwen Ke, Wenqi Huang, Haifeng Wang, Yanjie Zhu, Dong Liang
Specifically, focusing on accelerated MRI, we unroll a zeroth-order algorithm, of which the network module represents the regularizer itself, so that the network output can be still covered by the regularization model.
1 code implementation • 16 Dec 2021 • Hongyu Zhu, Yan Chen, Jing Yan, Jing Liu, Yu Hong, Ying Chen, Hua Wu, Haifeng Wang
For this purpose, we create a Chinese dataset namely DuQM which contains natural questions with linguistic perturbations to evaluate the robustness of question matching models.
1 code implementation • 6 Dec 2021 • Yulong Ao, Zhihua Wu, Dianhai Yu, Weibao Gong, Zhiqing Kui, Minxu Zhang, Zilingfeng Ye, Liang Shen, Yanjun Ma, Tian Wu, Haifeng Wang, Wei Zeng, Chao Yang
The experiments demonstrate that our framework can satisfy various requirements from the diversity of applications and the heterogeneity of resources with highly competitive performance.
no code implementations • 1 Dec 2021 • Yanjie Zhu, Haoxiang Li, Yuanyuan Liu, Muzi Guo, Guanxun Cheng, Gang Yang, Haifeng Wang, Dong Liang
Methods: The proposed framework consists of a reconstruction module and a generative module.
1 code implementation • 18 Nov 2021 • Zijing Liu, Xianbin Ye, Xiaomin Fang, Fan Wang, Hua Wu, Haifeng Wang
Machine learning shows great potential in virtual screening for drug discovery.
1 code implementation • 25 Oct 2021 • Moye Chen, Wei Li, Jiachen Liu, Xinyan Xiao, Hua Wu, Haifeng Wang
Compared with traditional methods, our method has two main advantages: (1) the relations between sentences are captured by modeling both the graph structure of the whole document set and the candidate sub-graphs; (2) it directly outputs an integrated summary in the form of a sub-graph, which is more informative and coherent.
1 code implementation • 14 Oct 2021 • Quan Wang, Songtai Dai, Benfeng Xu, Yajuan Lyu, Yong Zhu, Hua Wu, Haifeng Wang
In this work we introduce eHealth, a Chinese biomedical PLM built from scratch with a new pre-training framework.
1 code implementation • EMNLP 2021 • Ruiyang Ren, Yingqi Qu, Jing Liu, Wayne Xin Zhao, Qiaoqiao She, Hua Wu, Haifeng Wang, Ji-Rong Wen
In this paper, we propose a novel joint training approach for dense passage retrieval and passage re-ranking.
no code implementations • 29 Sep 2021 • Yang Liu, Jiaxiang Liu, Yuxiang Lu, Shikun Feng, Yu Sun, Zhida Feng, Li Chen, Hao Tian, Hua Wu, Haifeng Wang
The first factor is information bottleneck sensitivity, which is caused by the key feature of Sparse Transformer — only a small number of global tokens can attend to all other tokens.
no code implementations • 29 Sep 2021 • Fan Wang, Hao Tian, Haoyi Xiong, Hua Wu, Yang Cao, Yu Kang, Haifeng Wang
While artificial neural networks (ANNs) have been widely adopted in machine learning, researchers are increasingly obsessed by the gaps between ANNs and natural neural networks (NNNs).
1 code implementation • 26 Sep 2021 • Chen Hu, Cheng Li, Haifeng Wang, Qiegen Liu, Hairong Zheng, Shanshan Wang
Specifically, during model optimization, two subsets are constructed by randomly selecting part of k-space data from the undersampled data and then fed into two parallel reconstruction networks to perform information recovery.
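The random split of the acquired k-space data into two subsets can be sketched as below. This is a generic illustration of the partitioning step only (the selection ratio and scheme here are assumptions); the parallel reconstruction networks are not shown.

```python
import numpy as np

def split_kspace(mask, ratio=0.5, seed=0):
    """Randomly partition the acquired k-space locations (mask == 1) into
    two disjoint sampling masks."""
    rng = np.random.default_rng(seed)
    idx = np.flatnonzero(mask)
    rng.shuffle(idx)
    cut = int(ratio * idx.size)
    m1 = np.zeros_like(mask)
    m2 = np.zeros_like(mask)
    m1.flat[idx[:cut]] = 1
    m2.flat[idx[cut:]] = 1
    return m1, m2

mask = (np.random.default_rng(1).random((8, 8)) < 0.4).astype(int)
m1, m2 = split_kspace(mask)
assert np.array_equal(m1 + m2, mask)  # the subsets exactly cover the acquired data
assert (m1 * m2).sum() == 0           # and never overlap
```

Because the two masks are disjoint, each network can be supervised on the k-space samples it did not see, which is what removes the need for fully sampled training data.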
3 code implementations • 20 Sep 2021 • Siqi Bao, Huang He, Fan Wang, Hua Wu, Haifeng Wang, Wenquan Wu, Zhihua Wu, Zhen Guo, Hua Lu, Xinxian Huang, Xin Tian, Xinchao Xu, Yingzhan Lin, Zheng-Yu Niu
To explore the limit of dialogue generation pre-training, we present the models of PLATO-XL with up to 11 billion parameters, trained on both Chinese and English social media conversations.
1 code implementation • EMNLP 2021 • Zeming Liu, Haifeng Wang, Zheng-Yu Niu, Hua Wu, Wanxiang Che
In this paper, we provide a bilingual parallel human-to-human recommendation dialog dataset (DuRecDial 2.0) to enable researchers to explore a challenging task of multilingual and cross-lingual conversational recommendation.
2 code implementations • 8 Sep 2021 • Fan Wang, Hao Tian, Haoyi Xiong, Hua Wu, Jie Fu, Yang Cao, Yu Kang, Haifeng Wang
In contrast, biological neural networks (BNNs) can adapt to various new tasks by continually updating the neural connections based on the inputs, which is aligned with the paradigm of learning effective learning rules in addition to static parameters, e.g., meta-learning.
no code implementations • Findings (EMNLP) 2021 • Jicheng Li, Pengzhi Gao, Xuanfu Wu, Yang Feng, Zhongjun He, Hua Wu, Haifeng Wang
To further improve the faithfulness and diversity of the translations, we propose two simple but effective approaches to select diverse sentence pairs in the training corpus and adjust the interpolation weight for each pair correspondingly.
no code implementations • 30 Aug 2021 • Lijie Wang, Hao liu, Shuyuan Peng, Hongxuan Tang, Xinyan Xiao, Ying Chen, Hua Wu, Haifeng Wang
Therefore, in order to systematically evaluate the factors for building trustworthy systems, we propose a novel and well-annotated sentiment analysis dataset to evaluate robustness and interpretability.
no code implementations • 20 Aug 2021 • Yibo Sun, Jizhou Huang, Chunyuan Yuan, Miao Fan, Haifeng Wang, Ming Liu, Bing Qin
We approach this task as a sequence tagging problem, where the goal is to produce <POI name, accessibility label> pairs from unstructured text.
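Framed as sequence tagging, the <POI name, accessibility label> pairs fall out of a standard BIO decoding pass. The tag scheme and example below are illustrative assumptions, not the paper's actual label set.

```python
def decode_bio(tokens, tags):
    """Collect BIO-tagged spans into (span text, label) pairs."""
    pairs, span, label = [], [], None
    for tok, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if span:
                pairs.append((" ".join(span), label))
            span, label = [tok], tag[2:]
        elif tag.startswith("I-") and span and tag[2:] == label:
            span.append(tok)
        else:
            if span:
                pairs.append((" ".join(span), label))
            span, label = [], None
    if span:
        pairs.append((" ".join(span), label))
    return pairs

tokens = ["Central", "Park", "is", "wheelchair", "accessible"]
tags = ["B-POI", "I-POI", "O", "B-ACCESS", "I-ACCESS"]
assert decode_bio(tokens, tags) == [("Central Park", "POI"),
                                    ("wheelchair accessible", "ACCESS")]
```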
1 code implementation • Findings (ACL) 2021 • Ruiyang Ren, Shangwen Lv, Yingqi Qu, Jing Liu, Wayne Xin Zhao, Qiaoqiao She, Hua Wu, Haifeng Wang, Ji-Rong Wen
Recently, dense passage retrieval has become a mainstream approach to finding relevant information in various natural language processing tasks.
1 code implementation • ACL 2021 • Hongxuan Tang, Hongyu Li, Jing Liu, Yu Hong, Hua Wu, Haifeng Wang
Machine reading comprehension (MRC) is a crucial task in natural language processing and has achieved remarkable advancements.
no code implementations • ACL 2021 • Jun Xu, Zeyang Lei, Haifeng Wang, Zheng-Yu Niu, Hua Wu, Wanxiang Che
Learning discrete dialog structure graph from human-human dialogs yields basic insights into the structure of conversation, and also provides background knowledge to facilitate dialog generation.
2 code implementations • 5 Jul 2021 • Yu Sun, Shuohuan Wang, Shikun Feng, Siyu Ding, Chao Pang, Junyuan Shang, Jiaxiang Liu, Xuyi Chen, Yanbin Zhao, Yuxiang Lu, Weixin Liu, Zhihua Wu, Weibao Gong, Jianzhong Liang, Zhizhou Shang, Peng Sun, Wei Liu, Xuan Ouyang, Dianhai Yu, Hao Tian, Hua Wu, Haifeng Wang
We trained the model with 10 billion parameters on a 4TB corpus consisting of plain texts and a large-scale knowledge graph.
no code implementations • 11 Jun 2021 • Xiaomin Fang, Lihang Liu, Jieqiong Lei, Donglong He, Shanzhuo Zhang, Jingbo Zhou, Fan Wang, Hua Wu, Haifeng Wang
Recent advances in graph neural networks (GNNs) have shown great promise in applying GNNs for molecular representation learning.
Ranked #2 on Molecular Property Prediction on QM9
1 code implementation • 4 Jun 2021 • Weiyue Su, Xuyi Chen, Shikun Feng, Jiaxiang Liu, Weixin Liu, Yu Sun, Hao Tian, Hua Wu, Haifeng Wang
Specifically, the first stage, General Distillation, performs distillation with guidance from the pretrained teacher, general data, and the latent distillation loss.
no code implementations • ACL 2021 • Wenhao Wu, Wei Li, Xinyan Xiao, Jiachen Liu, Ziqiang Cao, Sujian Li, Hua Wu, Haifeng Wang
Abstractive summarization for long-document or multi-document remains challenging for the Seq2Seq architecture, as Seq2Seq is not good at analyzing long-distance relations in text.
1 code implementation • Findings (ACL) 2021 • Quan Wang, Haifeng Wang, Yajuan Lyu, Yong Zhu
The key to our approach is to represent the n-ary structure of a fact as a small heterogeneous graph, and model this graph with edge-biased fully-connected attention.
1 code implementation • 6 May 2021 • Siqi Bao, Bingjin Chen, Huang He, Xin Tian, Han Zhou, Fan Wang, Hua Wu, Haifeng Wang, Wenquan Wu, Yingzhan Lin
In this work, we explore the application of PLATO-2 on various dialogue systems, including open-domain conversation, knowledge grounded dialogue, and task-oriented conversation.
no code implementations • NAACL (AutoSimTrans) 2021 • Ruiqing Zhang, Xiyang Wang, Chuanqiang Zhang, Zhongjun He, Hua Wu, Zhi Li, Haifeng Wang, Ying Chen, Qinfei Li
This corpus is expected to promote the research of automatic simultaneous translation as well as the development of practical systems.
Automatic Speech Recognition (ASR) +2
1 code implementation • 9 Mar 2021 • Ziwen Ke, Zhuo-Xu Cui, Wenqi Huang, Jing Cheng, Sen Jia, Haifeng Wang, Xin Liu, Hairong Zheng, Leslie Ying, Yanjie Zhu, Dong Liang
The nonlinear manifold is designed to characterize the temporal correlation of dynamic signals.
1 code implementation • EMNLP 2021 • Kun Wu, Lijie Wang, Zhenghua Li, Ao Zhang, Xinyan Xiao, Hua Wu, Min Zhang, Haifeng Wang
For better distribution matching, we require that at least 80% of SQL patterns in the training data are covered by generated queries.
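The 80% coverage requirement above can be checked by reducing each SQL query to an abstract pattern and comparing pattern sets. This is a rough sketch under assumed conventions; `sql_pattern` and its masking rules are illustrative, not the paper's actual pattern definition.

```python
import re

KEYWORDS = {"SELECT", "FROM", "WHERE", "AND", "OR", "ORDER", "BY", "GROUP",
            "LIMIT", "COUNT", "VAL", ">", "<", "=", "*", "(", ")"}

def sql_pattern(query):
    """Reduce a SQL query to a skeleton by masking literals as VAL and
    identifiers as COL (a rough approximation of a SQL pattern)."""
    q = re.sub(r"'[^']*'", "VAL", query)            # mask string literals
    q = re.sub(r"\b\d+(\.\d+)?\b", "VAL", q)        # mask numeric literals
    tokens = q.replace("(", " ( ").replace(")", " ) ").split()
    return " ".join(t.upper() if t.upper() in KEYWORDS else "COL" for t in tokens)

def pattern_coverage(train_queries, generated_queries):
    """Fraction of training SQL patterns also produced by the generator."""
    train = {sql_pattern(q) for q in train_queries}
    gen = {sql_pattern(q) for q in generated_queries}
    return len(train & gen) / len(train)

train = ["SELECT name FROM users WHERE age > 18", "SELECT * FROM t LIMIT 5"]
gen = ["SELECT city FROM shops WHERE size > 3"]
coverage = pattern_coverage(train, gen)  # only the WHERE pattern is covered
```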
1 code implementation • 3 Feb 2021 • Huang He, Hua Lu, Siqi Bao, Fan Wang, Hua Wu, ZhengYu Niu, Haifeng Wang
Track 1 of DSTC9 aims to effectively answer user requests or questions during task-oriented dialogues that are beyond the scope of the APIs/DB.
no code implementations • 1 Jan 2021 • Chenze Shao, Meng Sun, Yang Feng, Zhongjun He, Hua Wu, Haifeng Wang
Under this framework, we introduce word-level ensemble learning and sequence-level ensemble learning for neural machine translation, where sequence-level ensemble learning is capable of aggregating translation models with different decoding strategies.
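The two ensembling granularities mentioned above can be sketched in a few lines: word-level ensembling averages the per-step next-token distributions of several models, while sequence-level ensembling scores complete candidate translations with every model and reranks. A minimal sketch with hypothetical helper names, not the paper's code:

```python
import numpy as np

def word_level_ensemble(step_distributions):
    """Average next-token distributions from several models at one
    decoding step (word-level ensembling)."""
    return np.mean(step_distributions, axis=0)

def sequence_level_ensemble(candidates, scorers):
    """Rerank complete candidate translations by the total score that the
    member models assign to them (sequence-level ensembling). Each scorer
    maps a candidate string to a log-probability-like score."""
    return max(candidates, key=lambda c: sum(s(c) for s in scorers))

# word level: two models' distributions over a 2-token vocabulary
avg = word_level_ensemble([[0.7, 0.3], [0.5, 0.5]])

# sequence level: two toy scorers with different preferences
cands = ["a b", "a b c"]
scorers = [lambda c: -len(c), lambda c: len(c.split())]
best = sequence_level_ensemble(cands, scorers)
```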
2 code implementations • EMNLP 2021 • Xuan Ouyang, Shuohuan Wang, Chao Pang, Yu Sun, Hao Tian, Hua Wu, Haifeng Wang
In this paper, we propose ERNIE-M, a new training method that encourages the model to align the representation of multiple languages with monolingual corpora, to overcome the constraint that the parallel corpus size places on the model performance.
Ranked #14 on Zero-Shot Cross-Lingual Transfer on XTREME
3 code implementations • ACL 2021 • Siyu Ding, Junyuan Shang, Shuohuan Wang, Yu Sun, Hao Tian, Hua Wu, Haifeng Wang
Transformers are not suited for processing long documents, due to their quadratically increasing memory and time consumption.
Text Classification on IMDb
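The quadratic cost noted above is easy to make concrete: the attention weight matrix alone is seq_len × seq_len per head, so doubling the document length quadruples that memory. A back-of-the-envelope sketch (the layer shape is a generic Transformer assumption, not this model's exact configuration):

```python
def attention_matrix_bytes(seq_len, num_heads=12, bytes_per_float=4):
    """Memory needed just for the (seq_len x seq_len) attention weight
    matrices of a single layer; grows quadratically with sequence length."""
    return num_heads * seq_len * seq_len * bytes_per_float

short = attention_matrix_bytes(512)    # a typical pre-training length
long_doc = attention_matrix_bytes(1024)  # doubling length -> 4x the memory
```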
no code implementations • 31 Dec 2020 • Jun Xu, Zeyang Lei, Haifeng Wang, Zheng-Yu Niu, Hua Wu, Wanxiang Che, Ting Liu
Learning interpretable dialog structure from human-human dialogs yields basic insights into the structure of conversation, and also provides background knowledge to facilitate dialog generation.
3 code implementations • ACL 2021 • Wei Li, Can Gao, Guocheng Niu, Xinyan Xiao, Hao Liu, Jiachen Liu, Hua Wu, Haifeng Wang
Existing pre-training methods either focus on single-modal tasks or multi-modal tasks, and cannot effectively adapt to each other.
Ranked #4 on Image Captioning on MS COCO
no code implementations • 24 Nov 2020 • Miao Yang, Hongbin Zhu, Hua Qian, Yevgeni Koucheryavy, Konstantin Samouylov, Haifeng Wang
Besides, peer competition occurs when different FNs offload tasks to one FN at the same time.
no code implementations • 23 Nov 2020 • Miao Yang, Akitanoshou Wong, Hongbin Zhu, Haifeng Wang, Hua Qian
Based on this scheme, a device selection algorithm that minimizes class imbalance is proposed, which can improve the convergence performance of the global model.
2 code implementations • NAACL 2021 • Dongling Xiao, Yu-Kun Li, Han Zhang, Yu Sun, Hao Tian, Hua Wu, Haifeng Wang
We argue that such contiguously masking method neglects to model the intra-dependencies and inter-relation of coarse-grained linguistic information.
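The modeling gap described above — masking each subword independently versus masking a coarse-grained unit as a whole — can be sketched as follows. This is an illustrative simplification of span-style masking, with a hypothetical helper; it is not the paper's masking procedure.

```python
def span_mask(tokens, spans, mask_token="[MASK]"):
    """Replace each selected coarse-grained span with a single mask so the
    model must predict the whole unit jointly, rather than masking every
    subword independently. spans: list of (start, end) half-open ranges."""
    span_starts = {s: e for s, e in spans}
    masked, i = [], 0
    while i < len(tokens):
        if i in span_starts:
            masked.append(mask_token)  # one mask stands in for the whole unit
            i = span_starts[i]
        else:
            masked.append(tokens[i])
            i += 1
    return masked

# "new york" is one semantic unit; mask it as a whole
result = span_mask(["new", "york", "is", "big"], [(0, 2)])
```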
1 code implementation • NAACL 2021 • Yingqi Qu, Yuchen Ding, Jing Liu, Kai Liu, Ruiyang Ren, Wayne Xin Zhao, Daxiang Dong, Hua Wu, Haifeng Wang
In open-domain question answering, dense passage retrieval has become a new paradigm to retrieve relevant passages for finding answers.
Ranked #4 on Passage Retrieval on Natural Questions
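The core operation of the dense retrieval paradigm mentioned above is an inner-product ranking between a query embedding and passage embeddings. A minimal sketch with toy vectors (real systems use learned encoders and an approximate nearest-neighbor index):

```python
import numpy as np

def dense_retrieve(query_vec, passage_vecs, top_k=2):
    """Rank passages by inner product with the query embedding and return
    the indices of the top_k passages."""
    scores = passage_vecs @ query_vec        # one dot product per passage
    return np.argsort(-scores)[:top_k]

query = np.array([1.0, 0.0])
passages = np.array([[0.9, 0.1],   # closest to the query
                     [0.0, 1.0],   # orthogonal
                     [0.5, 0.5]])
top = dense_retrieve(query, passages)
```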
no code implementations • ACL 2020 • Jun Xu, Haifeng Wang, Zheng-Yu Niu, Hua Wu, Wanxiang Che, Ting Liu
To address the challenge of policy learning in open-domain multi-turn conversation, we propose to represent prior information about dialog transitions as a graph and learn a graph grounded dialog policy, aimed at fostering a more coherent and controllable dialog.
3 code implementations • Findings (ACL) 2021 • Siqi Bao, Huang He, Fan Wang, Hua Wu, Haifeng Wang, Wenquan Wu, Zhen Guo, Zhibin Liu, Xinchao Xu
To build a high-quality open-domain chatbot, we introduce the effective training process of PLATO-2 via curriculum learning.
no code implementations • 30 Jun 2020 • Fei Yu, Jiji Tang, Weichong Yin, Yu Sun, Hao Tian, Hua Wu, Haifeng Wang
Thus, ERNIE-ViL can learn the joint representations characterizing the alignments of the detailed semantics across vision and language.
Ranked #2 on Visual Question Answering (VQA) on VCR (QA-R) test
no code implementations • 22 Jun 2020 • Ziwen Ke, Wenqi Huang, Jing Cheng, Zhuoxu Cui, Sen Jia, Haifeng Wang, Xin Liu, Hairong Zheng, Leslie Ying, Yanjie Zhu, Dong Liang
The deep learning methods have achieved attractive performance in dynamic MR cine imaging.
2 code implementations • ACL 2020 • Wei Li, Xinyan Xiao, Jiachen Liu, Hua Wu, Haifeng Wang, Junping Du
Graphs that capture relations between textual units have great benefits for detecting salient information from multiple documents and generating overall coherent summaries.
7 code implementations • ACL 2020 • Hao Tian, Can Gao, Xinyan Xiao, Hao Liu, Bolei He, Hua Wu, Haifeng Wang, Feng Wu
In particular, the prediction of aspect-sentiment pairs is converted into multi-label classification, aiming to capture the dependency between words in a pair.
Ranked #14 on Stock Market Prediction on Astock
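The conversion mentioned above — aspect-sentiment pairs into multi-label classification — amounts to one output dimension per possible (aspect, sentiment) pair, trained with independent sigmoids. A toy sketch of the label encoding (the label space here is hypothetical, not the paper's):

```python
import numpy as np

ASPECTS = ["food", "service"]
SENTIMENTS = ["positive", "negative"]
LABELS = [(a, s) for a in ASPECTS for s in SENTIMENTS]  # 4 possible pairs

def pairs_to_multilabel(pairs):
    """Encode a set of aspect-sentiment pairs as a multi-hot target vector,
    one dimension per possible pair; each dimension gets its own sigmoid."""
    y = np.zeros(len(LABELS))
    for p in pairs:
        y[LABELS.index(p)] = 1.0
    return y

# "great food, rude staff" -> two simultaneously active labels
target = pairs_to_multilabel([("food", "positive"), ("service", "negative")])
```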
2 code implementations • ACL 2020 • Zeming Liu, Haifeng Wang, Zheng-Yu Niu, Hua Wu, Wanxiang Che, Ting Liu
We propose a new task of conversational recommendation over multi-type dialogs, where the bots can proactively and naturally lead a conversation from a non-recommendation dialog (e.g., QA) to a recommendation dialog, taking into account the user's interests and feedback.
no code implementations • 6 May 2020 • Jizhou Huang, Haifeng Wang, Haoyi Xiong, Miao Fan, An Zhuo, Ying Li, Dejing Dou
While these strategies have effectively dealt with the critical situations of outbreaks, the combination of the pandemic and mobility controls has slowed China's economic growth, resulting in the first quarterly decline of Gross Domestic Product (GDP) since records began in 1992.
3 code implementations • 23 Apr 2020 • Hongxuan Tang, Hongyu Li, Jing Liu, Yu Hong, Hua Wu, Haifeng Wang
Machine reading comprehension (MRC) is a crucial task in natural language processing and has achieved remarkable advancements.
6 code implementations • 26 Jan 2020 • Dongling Xiao, Han Zhang, Yukun Li, Yu Sun, Hao Tian, Hua Wu, Haifeng Wang
Current pre-training works in natural language generation pay little attention to the problem of exposure bias on downstream tasks.
Ranked #1 on Question Generation on SQuAD1.1 (using extra training data)
1 code implementation • 16 Dec 2019 • Yuchen Liu, Jiajun Zhang, Hao Xiong, Long Zhou, Zhongjun He, Hua Wu, Haifeng Wang, Cheng-qing Zong
Speech-to-text translation (ST), which translates source language speech into target language text, has attracted intensive attention in recent years.
Automatic Speech Recognition (ASR) +4
3 code implementations • 6 Nov 2019 • Quan Wang, Pingping Huang, Haifeng Wang, Songtai Dai, Wenbin Jiang, Jing Liu, Yajuan Lyu, Yong Zhu, Hua Wu
This work presents Contextualized Knowledge Graph Embedding (CoKE), a novel paradigm that takes into account such contextual nature, and learns dynamic, flexible, and fully contextualized entity and relation embeddings.
1 code implementation • WS 2019 • Hongyu Li, Xiyuan Zhang, Yibing Liu, Yiming Zhang, Quan Wang, Xiangyang Zhou, Jing Liu, Hua Wu, Haifeng Wang
In this paper, we introduce a simple system Baidu submitted for MRQA (Machine Reading for Question Answering) 2019 Shared Task that focused on generalization of machine reading comprehension (MRC) models.
3 code implementations • ACL 2020 • Siqi Bao, Huang He, Fan Wang, Hua Wu, Haifeng Wang
Pre-training models have been proved effective for a wide range of natural language processing tasks.
no code implementations • IJCNLP 2019 • Tianchi Bi, Hao Xiong, Zhongjun He, Hua Wu, Haifeng Wang
Conventional Neural Machine Translation (NMT) models benefit from training with an additional agent, e.g., dual learning, and from bidirectional decoding with one agent decoding from left to right and the other decoding in the opposite direction.
no code implementations • 7 Aug 2019 • Jing Cheng, Haifeng Wang, Leslie Ying, Dong Liang
Experiments on in vivo MR data demonstrate that the proposed method achieves superior MR reconstructions from highly undersampled k-space data over other state-of-the-art image reconstruction methods.
no code implementations • WS 2019 • Meng Sun, Bojian Jiang, Hao Xiong, Zhongjun He, Hua Wu, Haifeng Wang
In this paper we introduce the systems Baidu submitted for the WMT19 shared task on Chinese<->English news translation.
no code implementations • 30 Jul 2019 • Hao Xiong, Ruiqing Zhang, Chuanqiang Zhang, Zhongjun He, Hua Wu, Haifeng Wang
In this paper, we present DuTongChuan, a novel context-aware translation model for simultaneous interpreting.
Automatic Speech Recognition (ASR) +3
3 code implementations • 29 Jul 2019 • Yu Sun, Shuohuan Wang, Yukun Li, Shikun Feng, Hao Tian, Hua Wu, Haifeng Wang
Recently, pre-trained models have achieved state-of-the-art results in various language understanding tasks, which indicates that pre-training on large-scale corpora may play a crucial role in natural language processing.
Ranked #1 on Chinese Sentence Pair Classification on LCQMC Dev
Chinese Named Entity Recognition • Chinese Reading Comprehension +8
no code implementations • AAAI 2019 • Dai Dai, Xinyan Xiao, Yajuan Lyu, Shan Dou, Qiaoqiao She, Haifeng Wang
Joint entity and relation extraction is to detect entity and relation using a single model.
Ranked #2 on Relation Extraction on NYT-single
no code implementations • ACL 2019 • Wenquan Wu, Zhen Guo, Xiangyang Zhou, Hua Wu, Xiyuan Zhang, Rongzhong Lian, Haifeng Wang
Konv enables a very challenging task as the model needs to both understand dialogue and plan over the given knowledge graph.
no code implementations • 19 Jun 2019 • Jing Cheng, Haifeng Wang, Yanjie Zhu, Qiegen Liu, Qiyang Zhang, Ting Su, Jianwei Chen, Yongshuai Ge, Zhanli Hu, Xin Liu, Hairong Zheng, Leslie Ying, Dong Liang
Usually, acquiring less data is a direct but important strategy to address these issues.
7 code implementations • 13 Jun 2019 • Wenquan Wu, Zhen Guo, Xiangyang Zhou, Hua Wu, Xiyuan Zhang, Rongzhong Lian, Haifeng Wang
DuConv enables a very challenging task as the model needs to both understand dialogue and plan over the given knowledge graph.
no code implementations • 17 Apr 2019 • Yuchen Liu, Hao Xiong, Zhongjun He, Jiajun Zhang, Hua Wu, Haifeng Wang, Cheng-qing Zong
End-to-end speech translation (ST), which directly translates source language speech into target language text, has attracted intensive attention in recent years.
1 code implementation • IJCNLP 2019 • Zhibin Liu, Zheng-Yu Niu, Hua Wu, Haifeng Wang
Two types of knowledge, triples from knowledge graphs and texts from documents, have been studied for knowledge aware open-domain conversation generation, in which graph paths can narrow down vertex candidates for knowledge selection decision, and texts can provide rich information for response generation.
no code implementations • 4 Dec 2018 • Haifeng Wang
The proposed movie genre recommendation system addresses problems such as a small dataset, imbalanced responses, and unequal classification costs.
no code implementations • 14 Nov 2018 • Hao Xiong, Zhongjun He, Hua Wu, Haifeng Wang
Discourse coherence plays an important role in the translation of a text.
3 code implementations • ACL 2019 • Mingbo Ma, Liang Huang, Hao Xiong, Renjie Zheng, Kaibo Liu, Baigong Zheng, Chuanqiang Zhang, Zhongjun He, Hairong Liu, Xing Li, Hua Wu, Haifeng Wang
Simultaneous translation, which translates sentences before they are finished, is useful in many scenarios but is notoriously difficult due to word-order differences.
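The read/write behavior of the prefix-to-prefix wait-k policy this paper is associated with can be sketched as a simple schedule: read k source tokens first, then alternate writing one target token and reading one source token. The helper below is an illustrative sketch, not the authors' implementation.

```python
def wait_k_schedule(source_len, target_len, k):
    """For each target position t (0-indexed), the number of source tokens
    that must have been read before emitting it under a wait-k policy:
    read k tokens up front, then read one more per token written, capped
    at the full source length."""
    return [min(source_len, k + t) for t in range(target_len)]

# 5 source tokens, 4 target tokens, wait-2:
# emit token 0 after reading 2, token 1 after 3, then 4, then the full 5
schedule = wait_k_schedule(5, 4, 2)
```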
no code implementations • ACL 2018 • Yizhong Wang, Kai Liu, Jing Liu, Wei He, Yajuan Lyu, Hua Wu, Sujian Li, Haifeng Wang
Machine reading comprehension (MRC) on real web data usually requires the machine to answer a question by analyzing multiple passages retrieved by search engine.
Ranked #3 on Question Answering on MS MARCO
3 code implementations • WS 2018 • Wei He, Kai Liu, Jing Liu, Yajuan Lyu, Shiqi Zhao, Xinyan Xiao, Yu-An Liu, Yizhong Wang, Hua Wu, Qiaoqiao She, Xuan Liu, Tian Wu, Haifeng Wang
Experiments show that human performance is well above current state-of-the-art baseline systems, leaving plenty of room for the community to make improvements.
no code implementations • EMNLP 2017 • Man Lan, Jianxiang Wang, Yuanbin Wu, Zheng-Yu Niu, Haifeng Wang
We present a novel multi-task attention-based neural network model to address implicit discourse relationship representation and identification through two types of representation learning: an attention-based neural network for learning discourse relationship representations with two arguments, and a multi-task framework for learning knowledge from annotated and unannotated corpora.
no code implementations • COLING 2016 • Jiang Guo, Wanxiang Che, Haifeng Wang, Ting Liu, Jun Xu
This paper describes a unified neural architecture for identifying and classifying multi-typed semantic relations between words in a sentence.
no code implementations • COLING 2016 • Jiang Guo, Wanxiang Che, Haifeng Wang, Ting Liu
Various treebanks have been released for dependency parsing.
1 code implementation • COLING 2016 • Zhe Wang, Wei He, Hua Wu, Haiyang Wu, Wei Li, Haifeng Wang, Enhong Chen
Chinese poetry generation is a very challenging task in natural language processing.
no code implementations • 3 Jun 2016 • Jiang Guo, Wanxiang Che, Haifeng Wang, Ting Liu
Various treebanks have been released for dependency parsing.
no code implementations • 5 Mar 2016 • Jiang Guo, Wanxiang Che, David Yarowsky, Haifeng Wang, Ting Liu
Cross-lingual model transfer has been a promising approach for inducing dependency parsers for low-resource languages where annotated treebanks are not available.
Cross-lingual zero-shot dependency parsing
Representation Learning