Search Results for author: Hua Wu

Found 124 papers, 53 papers with code

Diversified Multiple Instance Learning for Document-Level Multi-Aspect Sentiment Classification

no code implementations EMNLP 2020 Yunjie Ji, Hao Liu, Bolei He, Xinyan Xiao, Hua Wu, Yanhua Yu

To this end, we propose a novel Diversified Multiple Instance Learning Network (D-MILN), which is able to achieve aspect-level sentiment classification with only document-level weak supervision.

General Classification Multiple Instance Learning +1
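
As an illustration of the weakly supervised setup, here is a minimal sketch of attention-based multiple-instance aggregation: per-sentence (instance) sentiment logits are pooled into a single document-level prediction, so only document labels are needed for training. The pooling scheme and all numbers are illustrative assumptions, not D-MILN's actual architecture.

```python
# Toy multiple-instance aggregation: each sentence is an instance with its
# own sentiment logit; an attention-weighted sum yields the document logit.
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    z = sum(exps)
    return [e / z for e in exps]

def mil_document_logit(instance_logits, attention_scores):
    """Attention-weighted aggregation of per-sentence logits."""
    weights = softmax(attention_scores)
    return sum(w * l for w, l in zip(weights, instance_logits))

doc_logit = mil_document_logit(
    instance_logits=[2.0, -1.0, 0.5],   # per-sentence sentiment (stand-ins)
    attention_scores=[1.5, 0.1, 0.4],   # learned attention (stand-ins)
)
```

Training on document labels then back-propagates through the aggregation into the instance-level scorer, which is what makes aspect-level prediction possible without aspect-level labels.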

Learning Adaptive Segmentation Policy for Simultaneous Translation

no code implementations EMNLP 2020 Ruiqing Zhang, Chuanqiang Zhang, Zhongjun He, Hua Wu, Haifeng Wang

The policy learns to segment the source text by considering possible translations produced by the translation model, maintaining consistency between the segmentation and translation.

Translation

Learning Adaptive Segmentation Policy for End-to-End Simultaneous Translation

no code implementations ACL 2022 Ruiqing Zhang, Zhongjun He, Hua Wu, Haifeng Wang

End-to-end simultaneous speech-to-text translation aims to directly perform translation from streaming source speech to target text with high translation quality and low latency.

Simultaneous Speech-to-Text Translation Translation

DuReader_vis: A Chinese Dataset for Open-domain Document Visual Question Answering

1 code implementation Findings (ACL) 2022 Le Qi, Shangwen Lv, Hongyu Li, Jing Liu, Yu Zhang, Qiaoqiao She, Hua Wu, Haifeng Wang, Ting Liu

Open-domain question answering has been used in a wide range of applications, such as web search and enterprise search, which usually takes clean texts extracted from various formats of documents (e.g., web pages, PDFs, or Word documents) as the information source.

Open-Domain Question Answering Visual Question Answering

SgSum: Transforming Multi-document Summarization into Sub-graph Selection

1 code implementation EMNLP 2021 Moye Chen, Wei Li, Jiachen Liu, Xinyan Xiao, Hua Wu, Haifeng Wang

Compared with traditional methods, our method has two main advantages: (1) the relations between sentences are captured by modeling both the graph structure of the whole document set and the candidate sub-graphs; (2) it directly outputs an integrated summary in the form of a sub-graph, which is more informative and coherent.

Document Summarization Multi-Document Summarization

PLATO-KAG: Unsupervised Knowledge-Grounded Conversation via Joint Modeling

no code implementations EMNLP (NLP4ConvAI) 2021 Xinxian Huang, Huang He, Siqi Bao, Fan Wang, Hua Wu, Haifeng Wang

Large-scale conversation models are turning to leveraging external knowledge to improve the factual accuracy in response generation.

Response Generation

SINC: Service Information Augmented Open-Domain Conversation

no code implementations 28 Jun 2022 Han Zhou, Xinchao Xu, Wenquan Wu, ZhengYu Niu, Hua Wu, Siqi Bao, Fan Wang, Haifeng Wang

Generative open-domain dialogue systems can benefit from external knowledge, but the lack of external knowledge resources and the difficulty in finding relevant knowledge limit the development of this technology.

Bi-SimCut: A Simple Strategy for Boosting Neural Machine Translation

no code implementations 6 Jun 2022 Pengzhi Gao, Zhongjun He, Hua Wu, Haifeng Wang

We introduce Bi-SimCut: a simple but effective training strategy to boost neural machine translation (NMT) performance.

Machine Translation Translation

Less Learn Shortcut: Analyzing and Mitigating Learning of Spurious Feature-Label Correlation

no code implementations 25 May 2022 Yanrui Du, Jing Yan, Yan Chen, Jing Liu, Sendong Zhao, Hua Wu, Haifeng Wang, Bing Qin

Many recent works indicate that deep neural networks tend to take dataset biases as shortcuts to make decisions rather than understand the tasks, which results in failures in real-world applications.

A Fine-grained Interpretability Evaluation Benchmark for Neural NLP

no code implementations 23 May 2022 Lijie Wang, Yaozong Shen, Shuyuan Peng, Shuai Zhang, Xinyan Xiao, Hao Liu, Hongxuan Tang, Ying Chen, Hua Wu, Haifeng Wang

We also design a new metric, i.e., the consistency between the rationales before and after perturbations, to uniformly evaluate the interpretability of models and saliency methods on different tasks.

Reading Comprehension Sentiment Analysis

ERNIE-Search: Bridging Cross-Encoder with Dual-Encoder via Self On-the-fly Distillation for Dense Passage Retrieval

no code implementations 18 May 2022 Yuxiang Lu, Yiding Liu, Jiaxiang Liu, Yunsheng Shi, Zhengjie Huang, Shikun Feng, Yu Sun, Hao Tian, Hua Wu, Shuaiqiang Wang, Dawei Yin, Haifeng Wang

Our method 1) introduces a self on-the-fly distillation method that can effectively distill late interaction (i.e., ColBERT) to a vanilla dual-encoder, and 2) incorporates a cascade distillation process to further improve the performance with a cross-encoder teacher.

Knowledge Distillation Open-Domain Question Answering +1
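
A minimal sketch of the distillation signal behind such cross-encoder-to-dual-encoder transfer: the teacher's relevance scores over in-batch passages are softened into a distribution, and the student is trained to match it via KL divergence. The scores and the plain KL objective are illustrative assumptions; the paper's on-the-fly and cascade variants are not reproduced.

```python
# Score distillation for dense retrieval: teacher = cross-encoder scores,
# student = dual-encoder dot products, both over the same candidate passages.
import math

def softmax(scores):
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def kl_divergence(p, q):
    """KL(p || q) for two discrete distributions over the same support."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

teacher = softmax([4.2, 1.0, -0.5])   # cross-encoder scores (made-up)
student = softmax([3.0, 1.5, 0.0])    # dual-encoder dot products (made-up)

distill_loss = kl_divergence(teacher, student)  # minimized during training
```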

HelixADMET: a robust and endpoint extensible ADMET system incorporating self-supervised knowledge transfer

no code implementations 17 May 2022 Shanzhuo Zhang, Zhiyuan Yan, Yueyang Huang, Lihang Liu, Donglong He, Wei Wang, Xiaomin Fang, Xiaonan Zhang, Fan Wang, Hua Wu, Haifeng Wang

Additionally, the pre-trained model provided by H-ADMET can be fine-tuned to generate new and customised ADMET endpoints, meeting the various demands of drug research and development.

Drug Discovery Self-Supervised Learning +1

A Thorough Examination on Zero-shot Dense Retrieval

no code implementations 27 Apr 2022 Ruiyang Ren, Yingqi Qu, Jing Liu, Wayne Xin Zhao, Qifei Wu, Yuchen Ding, Hua Wu, Haifeng Wang, Ji-Rong Wen

Recent years have witnessed the significant advance in dense retrieval (DR) based on powerful pre-trained language models (PLM).

Towards Multi-Turn Empathetic Dialogs with Positive Emotion Elicitation

no code implementations 22 Apr 2022 Shihang Wang, Xinchao Xu, Wenquan Wu, Zheng-Yu Niu, Hua Wu, Haifeng Wang

In this task, the agent conducts empathetic responses along with the target of eliciting the user's positive emotions in the multi-turn dialog.

Unified Structure Generation for Universal Information Extraction

1 code implementation ACL 2022 Yaojie Lu, Qing Liu, Dai Dai, Xinyan Xiao, Hongyu Lin, Xianpei Han, Le Sun, Hua Wu

Information extraction suffers from its varying targets, heterogeneous structures, and demand-specific schemas.

ERNIE-SPARSE: Learning Hierarchical Efficient Transformer Through Regularized Self-Attention

no code implementations 23 Mar 2022 Yang Liu, Jiaxiang Liu, Li Chen, Yuxiang Lu, Shikun Feng, Zhida Feng, Yu Sun, Hao Tian, Hua Wu, Haifeng Wang

We argue that two factors, information bottleneck sensitivity and inconsistency between different attention topologies, could affect the performance of the Sparse Transformer.

Sparse Learning Text Classification

UNIMO-2: End-to-End Unified Vision-Language Grounded Learning

1 code implementation Findings (ACL) 2022 Wei Li, Can Gao, Guocheng Niu, Xinyan Xiao, Hao Liu, Jiachen Liu, Hua Wu, Haifeng Wang

In particular, we propose to conduct grounded learning on both images and texts via a shared grounded space, which helps bridge unaligned images and texts, and align the visual and textual semantic spaces on different types of corpora.

PLANET: Dynamic Content Planning in Autoregressive Transformers for Long-form Text Generation

no code implementations ACL 2022 Zhe Hu, Hou Pong Chan, Jiachen Liu, Xinyan Xiao, Hua Wu, Lifu Huang

Despite recent progress of pre-trained language models on generating fluent text, existing methods still suffer from incoherence problems in long-form text generation tasks that require proper content control and planning to form a coherent high-level logical flow.

Contrastive Learning Text Generation

Faithfulness in Natural Language Generation: A Systematic Survey of Analysis, Evaluation and Optimization Methods

no code implementations 10 Mar 2022 Wei Li, Wenhao Wu, Moye Chen, Jiachen Liu, Xinyan Xiao, Hua Wu

In this survey, we provide a systematic overview of the research progress on the faithfulness problem of NLG, including problem analysis, evaluation metrics and optimization methods.

Abstractive Text Summarization Data-to-Text Generation +2

ERNIE-ViLG: Unified Generative Pre-training for Bidirectional Vision-Language Generation

1 code implementation 31 Dec 2021 Han Zhang, Weichong Yin, Yewei Fang, Lanxin Li, Boqiang Duan, Zhihua Wu, Yu Sun, Hao Tian, Hua Wu, Haifeng Wang

To explore the landscape of large-scale pre-training for bidirectional text-image generation, we train a 10-billion parameter ERNIE-ViLG model on a large-scale dataset of 145 million (Chinese) image-text pairs, which achieves state-of-the-art performance on both text-to-image and image-to-text tasks, obtaining an FID of 7.9 on MS-COCO for text-to-image synthesis and the best results on COCO-CN and AIC-ICC for image captioning.

Image Captioning Quantization +3

DuQM: A Chinese Dataset of Linguistically Perturbed Natural Questions for Evaluating the Robustness of Question Matching Models

1 code implementation 16 Dec 2021 Hongyu Zhu, Yan Chen, Jing Yan, Jing Liu, Yu Hong, Ying Chen, Hua Wu, Haifeng Wang

For this purpose, we create a Chinese dataset namely DuQM which contains natural questions with linguistic perturbations to evaluate the robustness of question matching models.

Learning with Noisy Correspondence for Cross-modal Matching

no code implementations NeurIPS 2021 Zhenyu Huang, Guocheng Niu, Xiao Liu, Wenbiao Ding, Xinyan Xiao, Hua Wu, Xi Peng

Based on this observation, we reveal and study a latent and challenging direction in cross-modal matching, named noisy correspondence, which could be regarded as a new paradigm of noisy labels.

Cross-Modal Retrieval Text Matching

CELLS: Cost-Effective Evolution in Latent Space for Goal-Directed Molecular Generation

no code implementations 30 Nov 2021 ZhiYuan Chen, Xiaomin Fang, Fan Wang, Xiaotian Fan, Hua Wu, Haifeng Wang

We adopt a pre-trained molecular generative model to map the latent and observation spaces, taking advantage of the large-scale unlabeled molecules to learn chemical knowledge.

Drug Discovery

Amendable Generation for Dialogue State Tracking

1 code implementation EMNLP (NLP4ConvAI) 2021 Xin Tian, Liankai Huang, Yingzhan Lin, Siqi Bao, Huang He, Yunyi Yang, Hua Wu, Fan Wang, Shuqi Sun

In this paper, we propose a novel Amendable Generation for Dialogue State Tracking (AG-DST), which contains a two-pass generation process: (1) generating a primitive dialogue state based on the dialogue of the current turn and the previous dialogue state, and (2) amending the primitive dialogue state from the first pass.

Dialogue State Tracking Multi-domain Dialogue State Tracking +1
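
The two-pass generation process can be illustrated with a toy example; the rule-based stand-in functions below only mimic the flow of AG-DST's generative passes (draft, then amend) and are not the actual model.

```python
# Toy two-pass dialogue state tracking: pass one drafts a state from the
# previous state plus the current turn; pass two amends the draft.
def basic_generation(prev_state, turn):
    state = dict(prev_state)
    # naive slot filling from "slot=value" fragments in the user turn
    for frag in turn.split(","):
        if "=" in frag:
            slot, value = frag.strip().split("=")
            state[slot] = value
    return state

def amending_generation(draft_state, turn):
    state = dict(draft_state)
    # second pass: drop slots the user explicitly rejected ("not slot")
    for frag in turn.split(","):
        if frag.strip().startswith("not "):
            state.pop(frag.strip()[4:], None)
    return state

prev = {"hotel-area": "north"}
turn = "hotel-stars=4, not hotel-area"
final_state = amending_generation(basic_generation(prev, turn), turn)
```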

SgSum: Transforming Multi-document Summarization into Sub-graph Selection

1 code implementation 25 Oct 2021 Moye Chen, Wei Li, Jiachen Liu, Xinyan Xiao, Hua Wu, Haifeng Wang

Compared with traditional methods, our method has two main advantages: (1) the relations between sentences are captured by modeling both the graph structure of the whole document set and the candidate sub-graphs; (2) it directly outputs an integrated summary in the form of a sub-graph, which is more informative and coherent.

Document Summarization Multi-Document Summarization

Building Chinese Biomedical Language Models via Multi-Level Text Discrimination

1 code implementation 14 Oct 2021 Quan Wang, Songtai Dai, Benfeng Xu, Yajuan Lyu, Yong Zhu, Hua Wu, Haifeng Wang

In this work we introduce eHealth, a Chinese biomedical PLM built from scratch with a new pre-training framework.

Domain Adaptation

ERNIE-SPARSE: Robust Efficient Transformer Through Hierarchically Unifying Isolated Information

no code implementations 29 Sep 2021 Yang Liu, Jiaxiang Liu, Yuxiang Lu, Shikun Feng, Yu Sun, Zhida Feng, Li Chen, Hao Tian, Hua Wu, Haifeng Wang

The first factor is information bottleneck sensitivity, which is caused by the key feature of Sparse Transformer — only a small number of global tokens can attend to all other tokens.

Text Classification

Do What Nature Did To Us: Evolving Plastic Recurrent Neural Networks For Generalized Tasks

no code implementations 29 Sep 2021 Fan Wang, Hao Tian, Haoyi Xiong, Hua Wu, Yang Cao, Yu Kang, Haifeng Wang

While artificial neural networks (ANNs) have been widely adopted in machine learning, researchers are increasingly obsessed by the gaps between ANNs and natural neural networks (NNNs).

Meta-Learning

PLATO-XL: Exploring the Large-scale Pre-training of Dialogue Generation

3 code implementations 20 Sep 2021 Siqi Bao, Huang He, Fan Wang, Hua Wu, Haifeng Wang, Wenquan Wu, Zhihua Wu, Zhen Guo, Hua Lu, Xinxian Huang, Xin Tian, Xinchao Xu, Yingzhan Lin, ZhengYu Niu

To explore the limit of dialogue generation pre-training, we present the models of PLATO-XL with up to 11 billion parameters, trained on both Chinese and English social media conversations.

Dialogue Generation

DuRecDial 2.0: A Bilingual Parallel Corpus for Conversational Recommendation

no code implementations EMNLP 2021 Zeming Liu, Haifeng Wang, Zheng-Yu Niu, Hua Wu, Wanxiang Che

In this paper, we provide a bilingual parallel human-to-human recommendation dialog dataset (DuRecDial 2.0) to enable researchers to explore a challenging task of multilingual and cross-lingual conversational recommendation.

A Multimodal Sentiment Dataset for Video Recommendation

no code implementations 17 Sep 2021 Hongxuan Tang, Hao Liu, Xinyan Xiao, Hua Wu

Based on this, we propose a multimodal sentiment analysis dataset, named baiDu Video Sentiment dataset (DuVideoSenti), and introduce a new sentiment system designed to describe the sentimental style of a video in recommendation scenarios.

Multimodal Sentiment Analysis Video Understanding

Controllable Dialogue Generation with Disentangled Multi-grained Style Specification and Attribute Consistency Reward

no code implementations 14 Sep 2021 Zhe Hu, Zhiwei Cao, Hou Pong Chan, Jiachen Liu, Xinyan Xiao, Jinsong Su, Hua Wu

Controllable text generation is an appealing but challenging task, which allows users to specify particular attributes of the generated outputs.

Dialogue Generation Response Generation

Fine-grained Entity Typing via Label Reasoning

no code implementations EMNLP 2021 Qing Liu, Hongyu Lin, Xinyan Xiao, Xianpei Han, Le Sun, Hua Wu

Conventional entity typing approaches are based on independent classification paradigms, which make them difficult to recognize inter-dependent, long-tailed and fine-grained entity types.

Entity Typing

Mixup Decoding for Diverse Machine Translation

no code implementations Findings (EMNLP) 2021 Jicheng Li, Pengzhi Gao, Xuanfu Wu, Yang Feng, Zhongjun He, Hua Wu, Haifeng Wang

To further improve the faithfulness and diversity of the translations, we propose two simple but effective approaches to select diverse sentence pairs in the training corpus and adjust the interpolation weight for each pair correspondingly.

Machine Translation Translation

Evolving Decomposed Plasticity Rules for Information-Bottlenecked Meta-Learning

no code implementations 8 Sep 2021 Fan Wang, Hao Tian, Haoyi Xiong, Hua Wu, Jie Fu, Yang Cao, Yu Kang, Haifeng Wang

In contrast, biological neural networks (BNNs) can adapt to various new tasks by continually updating their connection weights based on their observations, which is aligned with the paradigm of learning effective learning rules in addition to static parameters, e.g., meta-learning.

Meta-Learning

DuTrust: A Sentiment Analysis Dataset for Trustworthiness Evaluation

no code implementations 30 Aug 2021 Lijie Wang, Hao Liu, Shuyuan Peng, Hongxuan Tang, Xinyan Xiao, Ying Chen, Hua Wu, Haifeng Wang

Therefore, in order to systematically evaluate the factors for building trustworthy systems, we propose a novel and well-annotated sentiment analysis dataset to evaluate robustness and interpretability.

Sentiment Analysis

Discovering Dialog Structure Graph for Coherent Dialog Generation

no code implementations ACL 2021 Jun Xu, Zeyang Lei, Haifeng Wang, Zheng-Yu Niu, Hua Wu, Wanxiang Che

Learning discrete dialog structure graph from human-human dialogs yields basic insights into the structure of conversation, and also provides background knowledge to facilitate dialog generation.

ERNIE-Tiny : A Progressive Distillation Framework for Pretrained Transformer Compression

1 code implementation 4 Jun 2021 Weiyue Su, Xuyi Chen, Shikun Feng, Jiaxiang Liu, Weixin Liu, Yu Sun, Hao Tian, Hua Wu, Haifeng Wang

Specifically, the first stage, General Distillation, performs distillation with guidance from the pretrained teacher, general data, and a latent distillation loss.

Knowledge Distillation Pretrained Language Models

BASS: Boosting Abstractive Summarization with Unified Semantic Graph

no code implementations ACL 2021 Wenhao Wu, Wei Li, Xinyan Xiao, Jiachen Liu, Ziqiang Cao, Sujian Li, Hua Wu, Haifeng Wang

Abstractive summarization for long documents or multiple documents remains challenging for the Seq2Seq architecture, as Seq2Seq is not good at analyzing long-distance relations in text.

Abstractive Text Summarization Document Summarization +1

A Unified Pre-training Framework for Conversational AI

1 code implementation 6 May 2021 Siqi Bao, Bingjin Chen, Huang He, Xin Tian, Han Zhou, Fan Wang, Hua Wu, Haifeng Wang, Wenquan Wu, Yingzhan Lin

In this work, we explore the application of PLATO-2 on various dialogue systems, including open-domain conversation, knowledge grounded dialogue, and task-oriented conversation.

Chatbot Interactive Evaluation of Dialog +1

Learning to Select External Knowledge with Multi-Scale Negative Sampling

1 code implementation 3 Feb 2021 Huang He, Hua Lu, Siqi Bao, Fan Wang, Hua Wu, ZhengYu Niu, Haifeng Wang

Track 1 of DSTC9 aims to effectively answer user requests or questions during task-oriented dialogues, which are out of the scope of APIs/DB.

Response Generation

Knowledge Distillation based Ensemble Learning for Neural Machine Translation

no code implementations 1 Jan 2021 Chenze Shao, Meng Sun, Yang Feng, Zhongjun He, Hua Wu, Haifeng Wang

Under this framework, we introduce word-level ensemble learning and sequence-level ensemble learning for neural machine translation, where sequence-level ensemble learning is capable of aggregating translation models with different decoding strategies.

Ensemble Learning Knowledge Distillation +2
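
Word-level ensemble learning in this setting amounts to averaging the teachers' next-word distributions and training the student toward the average; a minimal sketch with made-up distributions (the sequence-level variant, which aggregates whole translations, is not shown):

```python
# Word-level ensemble distillation: average several teachers' next-word
# distributions and use the result as a soft target for the student.
import math

def average_distributions(dists):
    n = len(dists)
    return [sum(d[i] for d in dists) / n for i in range(len(dists[0]))]

def cross_entropy(target, predicted):
    """Soft-target cross-entropy, the distillation signal for the student."""
    return -sum(t * math.log(p) for t, p in zip(target, predicted) if t > 0)

# Two teachers' next-word distributions over a 3-word toy vocabulary.
teacher_a = [0.7, 0.2, 0.1]
teacher_b = [0.5, 0.4, 0.1]
ensemble = average_distributions([teacher_a, teacher_b])

student = [0.55, 0.35, 0.10]          # student's current prediction
loss = cross_entropy(ensemble, student)
```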

Discovering Dialog Structure Graph for Open-Domain Dialog Generation

no code implementations 31 Dec 2020 Jun Xu, Zeyang Lei, Haifeng Wang, Zheng-Yu Niu, Hua Wu, Wanxiang Che, Ting Liu

Learning interpretable dialog structure from human-human dialogs yields basic insights into the structure of conversation, and also provides background knowledge to facilitate dialog generation.

Open-Domain Dialog

ERNIE-M: Enhanced Multilingual Representation by Aligning Cross-lingual Semantics with Monolingual Corpora

2 code implementations EMNLP 2021 Xuan Ouyang, Shuohuan Wang, Chao Pang, Yu Sun, Hao Tian, Hua Wu, Haifeng Wang

In this paper, we propose ERNIE-M, a new training method that encourages the model to align the representation of multiple languages with monolingual corpora, to overcome the constraint that the parallel corpus size places on the model performance.

Translation

Two-dimensional ferromagnetic semiconductor VBr3 with tunable anisotropy

no code implementations 20 Aug 2020 Lu Liu, Ke Yang, Guangyu Wang, Hua Wu

Two-dimensional (2D) ferromagnets (FMs) have attracted widespread attention due to their prospects in spintronic applications.

Materials Science Strongly Correlated Electrons

Conversational Graph Grounded Policy Learning for Open-Domain Conversation Generation

no code implementations ACL 2020 Jun Xu, Haifeng Wang, Zheng-Yu Niu, Hua Wu, Wanxiang Che, Ting Liu

To address the challenge of policy learning in open-domain multi-turn conversation, we propose to represent prior information about dialog transitions as a graph and learn a graph grounded dialog policy, aimed at fostering a more coherent and controllable dialog.

Response Generation

Leveraging Graph to Improve Abstractive Multi-Document Summarization

1 code implementation ACL 2020 Wei Li, Xinyan Xiao, Jiachen Liu, Hua Wu, Haifeng Wang, Junping Du

Graphs that capture relations between textual units have great benefits for detecting salient information from multiple documents and generating overall coherent summaries.

Document Summarization Multi-Document Summarization

SKEP: Sentiment Knowledge Enhanced Pre-training for Sentiment Analysis

5 code implementations ACL 2020 Hao Tian, Can Gao, Xinyan Xiao, Hao Liu, Bolei He, Hua Wu, Haifeng Wang, Feng Wu

In particular, the prediction of aspect-sentiment pairs is converted into multi-label classification, aiming to capture the dependency between words in a pair.

Multi-Label Classification Sentiment Analysis
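
Casting aspect-sentiment pair prediction as multi-label classification can be sketched with an independent sigmoid per candidate pair, so several pairs can be active at once; the pairs, logits, and the 0.5 threshold below are illustrative assumptions, not SKEP's actual head.

```python
# Multi-label view of aspect-sentiment pairs: one independent sigmoid per
# candidate (aspect, sentiment) pair, thresholded to pick active pairs.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

pairs = [("food", "positive"), ("food", "negative"), ("service", "positive")]
logits = [2.1, -1.8, 0.9]                 # one logit per candidate pair
probs = [sigmoid(z) for z in logits]

# Keep every pair whose probability clears the threshold.
predicted = [p for p, prob in zip(pairs, probs) if prob > 0.5]
```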

Towards Conversational Recommendation over Multi-Type Dialogs

1 code implementation ACL 2020 Zeming Liu, Haifeng Wang, Zheng-Yu Niu, Hua Wu, Wanxiang Che, Ting Liu

We propose a new task of conversational recommendation over multi-type dialogs, where the bots can proactively and naturally lead a conversation from a non-recommendation dialog (e.g., QA) to a recommendation dialog, taking into account the user's interests and feedback.

Exploring Contextual Word-level Style Relevance for Unsupervised Style Transfer

1 code implementation ACL 2020 Chulun Zhou, Liang-Yu Chen, Jiachen Liu, Xinyan Xiao, Jinsong Su, Sheng Guo, Hua Wu

Unsupervised style transfer aims to change the style of an input sentence while preserving its original content without using parallel training data.

Denoising Style Transfer

ERNIE-GEN: An Enhanced Multi-Flow Pre-training and Fine-tuning Framework for Natural Language Generation

4 code implementations 26 Jan 2020 Dongling Xiao, Han Zhang, Yukun Li, Yu Sun, Hao Tian, Hua Wu, Haifeng Wang

Current pre-training works in natural language generation pay little attention to the problem of exposure bias on downstream tasks.

Ranked #1 on Text Summarization on GigaWord-10k (using extra training data)

Abstractive Text Summarization Dialogue Generation +2

Synchronous Speech Recognition and Speech-to-Text Translation with Interactive Decoding

no code implementations 16 Dec 2019 Yuchen Liu, Jiajun Zhang, Hao Xiong, Long Zhou, Zhongjun He, Hua Wu, Haifeng Wang, Cheng-qing Zong

Speech-to-text translation (ST), which translates source language speech into target language text, has attracted intensive attention in recent years.

Automatic Speech Recognition Multi-Task Learning +2

CoKE: Contextualized Knowledge Graph Embedding

2 code implementations 6 Nov 2019 Quan Wang, Pingping Huang, Haifeng Wang, Songtai Dai, Wenbin Jiang, Jing Liu, Yajuan Lyu, Yong Zhu, Hua Wu

This work presents Contextualized Knowledge Graph Embedding (CoKE), a novel paradigm that takes into account such contextual nature, and learns dynamic, flexible, and fully contextualized entity and relation embeddings.

Knowledge Graph Embedding Link Prediction

Enhancing Local Feature Extraction with Global Representation for Neural Text Classification

no code implementations IJCNLP 2019 Guocheng Niu, Hengru Xu, Bolei He, Xinyan Xiao, Hua Wu, Sheng Gao

For text classification, traditional local feature driven models learn long dependency by deeply stacking or hybrid modeling.

General Classification Text Classification

D-NET: A Pre-Training and Fine-Tuning Framework for Improving the Generalization of Machine Reading Comprehension

no code implementations WS 2019 Hongyu Li, Xiyuan Zhang, Yibing Liu, Yiming Zhang, Quan Wang, Xiangyang Zhou, Jing Liu, Hua Wu, Haifeng Wang

In this paper, we introduce a simple system Baidu submitted for MRQA (Machine Reading for Question Answering) 2019 Shared Task that focused on generalization of machine reading comprehension (MRC) models.

Machine Reading Comprehension Multi-Task Learning +1

Multi-agent Learning for Neural Machine Translation

no code implementations IJCNLP 2019 Tianchi Bi, Hao Xiong, Zhongjun He, Hua Wu, Haifeng Wang

Conventional Neural Machine Translation (NMT) models benefit from the training with an additional agent, e.g., dual learning, and bidirectional decoding with one agent decoding from left to right and the other decoding in the opposite direction.

Machine Translation Translation

Baidu Neural Machine Translation Systems for WMT19

no code implementations WS 2019 Meng Sun, Bojian Jiang, Hao Xiong, Zhongjun He, Hua Wu, Haifeng Wang

In this paper we introduce the systems Baidu submitted for the WMT19 shared task on Chinese<->English news translation.

Data Augmentation Domain Adaptation +4

ERNIE 2.0: A Continual Pre-training Framework for Language Understanding

3 code implementations 29 Jul 2019 Yu Sun, Shuohuan Wang, Yukun Li, Shikun Feng, Hao Tian, Hua Wu, Haifeng Wang

Recently, pre-trained models have achieved state-of-the-art results in various language understanding tasks, which indicates that pre-training on large-scale corpora may play a crucial role in natural language processing.

Chinese Named Entity Recognition Chinese Reading Comprehension +9

ARNOR: Attention Regularization based Noise Reduction for Distant Supervision Relation Classification

no code implementations ACL 2019 Wei Jia, Dai Dai, Xinyan Xiao, Hua Wu

In this paper, we propose ARNOR, a novel Attention Regularization based NOise Reduction framework for distant supervision relation classification.

Classification General Classification +1

Proactive Human-Machine Conversation with Explicit Conversation Goal

no code implementations ACL 2019 Wenquan Wu, Zhen Guo, Xiangyang Zhou, Hua Wu, Xiyuan Zhang, Rongzhong Lian, Haifeng Wang

Konv enables a very challenging task as the model needs to both understand dialogue and plan over the given knowledge graph.

Proactive Human-Machine Conversation with Explicit Conversation Goals

6 code implementations 13 Jun 2019 Wenquan Wu, Zhen Guo, Xiangyang Zhou, Hua Wu, Xiyuan Zhang, Rongzhong Lian, Haifeng Wang

DuConv enables a very challenging task as the model needs to both understand dialogue and plan over the given knowledge graph.

Generating Multiple Diverse Responses with Multi-Mapping and Posterior Mapping Selection

1 code implementation 5 Jun 2019 Chaotao Chen, Jinhua Peng, Fan Wang, Jun Xu, Hua Wu

In this paper, we propose a multi-mapping mechanism to better capture the one-to-many relationship, where multiple mapping modules are employed as latent mechanisms to model the semantic mappings from an input post to its diverse responses.

Know More about Each Other: Evolving Dialogue Strategy via Compound Assessment

1 code implementation ACL 2019 Siqi Bao, Huang He, Fan Wang, Rongzhong Lian, Hua Wu

In this paper, a novel Generation-Evaluation framework is developed for multi-turn conversations with the objective of letting both participants know more about each other.

Informativeness reinforcement-learning

End-to-End Speech Translation with Knowledge Distillation

no code implementations 17 Apr 2019 Yuchen Liu, Hao Xiong, Zhongjun He, Jiajun Zhang, Hua Wu, Haifeng Wang, Cheng-qing Zong

End-to-end speech translation (ST), which directly translates from source language speech into target language text, has attracted intensive attention in recent years.

Knowledge Distillation Speech Recognition +1

Knowledge Aware Conversation Generation with Explainable Reasoning over Augmented Graphs

1 code implementation IJCNLP 2019 Zhibin Liu, Zheng-Yu Niu, Hua Wu, Haifeng Wang

Two types of knowledge, triples from knowledge graphs and texts from documents, have been studied for knowledge aware open-domain conversation generation, in which graph paths can narrow down vertex candidates for knowledge selection decision, and texts can provide rich information for response generation.

Knowledge Graphs Machine Reading Comprehension +1

Learning to Select Knowledge for Response Generation in Dialog Systems

1 code implementation 13 Feb 2019 Rongzhong Lian, Min Xie, Fan Wang, Jinhua Peng, Hua Wu

Specifically, a posterior distribution over knowledge is inferred from both utterances and responses, and it ensures the appropriate selection of knowledge during the training process.

Response Generation
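
The prior/posterior split behind this knowledge selection can be sketched as follows: during training the knowledge distribution is inferred from both the utterance and the response (posterior), while at inference only the utterance is available, so the prior is used. The similarity scores below are made-up; the actual model parameterizes both distributions with neural networks.

```python
# Posterior knowledge selection: prior from the utterance alone, posterior
# from utterance + response; training pushes the prior toward the posterior
# (e.g. with a KL term), and inference selects knowledge from the prior.
import math

def softmax(scores):
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

knowledge = ["k1", "k2", "k3"]
prior_scores = [0.5, 1.2, 0.3]            # match(utterance, k), made-up
posterior_scores = [0.4, 2.5, 0.1]        # match(utterance + response, k)

prior = softmax(prior_scores)
posterior = softmax(posterior_scores)

# At inference time, pick knowledge from the prior alone.
selected = knowledge[prior.index(max(prior))]
```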

STACL: Simultaneous Translation with Implicit Anticipation and Controllable Latency using Prefix-to-Prefix Framework

3 code implementations ACL 2019 Mingbo Ma, Liang Huang, Hao Xiong, Renjie Zheng, Kaibo Liu, Baigong Zheng, Chuanqiang Zhang, Zhongjun He, Hairong Liu, Xing Li, Hua Wu, Haifeng Wang

Simultaneous translation, which translates sentences before they are finished, is useful in many scenarios but is notoriously difficult due to word-order differences.

Translation
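
The prefix-to-prefix idea is commonly instantiated as a wait-k policy: first READ k source tokens, then alternate WRITE and READ, and drain with WRITEs once the source is exhausted. Below is a minimal sketch of that read/write schedule, a simplified rendering rather than the exact STACL implementation.

```python
# Wait-k schedule: the translator lags k tokens behind the source, which
# bounds latency while letting the model implicitly anticipate.
def wait_k_schedule(source_len, target_len, k):
    """Return the sequence of READ/WRITE actions for a wait-k policy."""
    actions, read, written = [], 0, 0
    while written < target_len:
        if read < min(k + written, source_len):
            actions.append("READ")
            read += 1
        else:
            actions.append("WRITE")
            written += 1
    return actions

# With k=2 the first target token is emitted after two source tokens.
schedule = wait_k_schedule(source_len=5, target_len=5, k=2)
```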

Addressing Troublesome Words in Neural Machine Translation

no code implementations EMNLP 2018 Yang Zhao, Jiajun Zhang, Zhongjun He, Cheng-qing Zong, Hua Wu

One of the weaknesses of Neural Machine Translation (NMT) is in handling low-frequency and ambiguous words, which we refer to as troublesome words.

Machine Translation Translation

Familia: A Configurable Topic Modeling Framework for Industrial Text Engineering

1 code implementation 11 Aug 2018 Di Jiang, Yuanfeng Song, Rongzhong Lian, Siqi Bao, Jinhua Peng, Huang He, Hua Wu

In order to relieve the burden on software engineers without knowledge of Bayesian networks, Familia is able to conduct automatic parameter inference for a variety of topic models.

Topic Models

Multi-Passage Machine Reading Comprehension with Cross-Passage Answer Verification

no code implementations ACL 2018 Yizhong Wang, Kai Liu, Jing Liu, Wei He, Yajuan Lyu, Hua Wu, Sujian Li, Haifeng Wang

Machine reading comprehension (MRC) on real web data usually requires the machine to answer a question by analyzing multiple passages retrieved by search engine.

Machine Reading Comprehension Question Answering

A New Method of Region Embedding for Text Classification

1 code implementation ICLR 2018 Chao Qiao, Bo Huang, Guocheng Niu, Daren Li, Daxiang Dong, Wei He, Dianhai Yu, Hua Wu

In this paper, we propose a new method of learning and utilizing task-specific distributed representations of n-grams, referred to as “region embeddings”.

Classification General Classification +1

Multi-channel Encoder for Neural Machine Translation

no code implementations 6 Dec 2017 Hao Xiong, Zhongjun He, Xiaoguang Hu, Hua Wu

This design of encoder yields relatively uniform composition on source sentence, despite the gating mechanism employed in encoding RNN.

Machine Translation Translation

DuReader: a Chinese Machine Reading Comprehension Dataset from Real-world Applications

3 code implementations WS 2018 Wei He, Kai Liu, Jing Liu, Yajuan Lyu, Shiqi Zhao, Xinyan Xiao, Yu-An Liu, Yizhong Wang, Hua Wu, Qiaoqiao She, Xuan Liu, Tian Wu, Haifeng Wang

Experiments show that human performance is well above current state-of-the-art baseline systems, leaving plenty of room for the community to make improvements.

Machine Reading Comprehension

Latent Topic Embedding

no code implementations COLING 2016 Di Jiang, Lei Shi, Rongzhong Lian, Hua Wu

Topic modeling and word embedding are two important techniques for deriving latent semantics from data.

Topic Models Word Embeddings

Semi-Supervised Learning for Neural Machine Translation

no code implementations ACL 2016 Yong Cheng, Wei Xu, Zhongjun He, Wei He, Hua Wu, Maosong Sun, Yang Liu

While end-to-end neural machine translation (NMT) has made remarkable progress recently, NMT systems only rely on parallel corpora for parameter estimation.

Machine Translation Translation

Question Answering over Knowledge Base with Neural Attention Combining Global Knowledge Information

no code implementations 3 Jun 2016 Yuanzhe Zhang, Kang Liu, Shizhu He, Guoliang Ji, Zhanyi Liu, Hua Wu, Jun Zhao

With the rapid growth of knowledge bases (KBs) on the web, how to take full advantage of them becomes increasingly important.

Question Answering
