Search Results for author: Bo-Wen Zhou

Found 60 papers, 17 papers with code

Multimodal Joint Attribute Prediction and Value Extraction for E-commerce Product

1 code implementation EMNLP 2020 Tiangang Zhu, Yue Wang, Haoran Li, Youzheng Wu, Xiaodong He, Bo-Wen Zhou

We annotate a multimodal product attribute value dataset containing 87,194 instances. Experimental results on this dataset demonstrate that explicitly modeling the relationship between attributes and values helps our method establish the correspondence between them, and that selectively utilizing visual product information is necessary for the task.

Attribute Value Extraction

Self-Attention Guided Copy Mechanism for Abstractive Summarization

no code implementations ACL 2020 Song Xu, Haoran Li, Peng Yuan, Youzheng Wu, Xiaodong He, Bo-Wen Zhou

The copy module has been widely adopted in recent abstractive summarization models; it enables the decoder to extract words from the source into the summary.

Abstractive Text Summarization

Speaker Diarization with Lexical Information

no code implementations • 13 Apr 2020 Tae Jin Park, Kyu J. Han, Jing Huang, Xiaodong He, Bo-Wen Zhou, Panayiotis Georgiou, Shrikanth Narayanan

This work presents a novel approach for speaker diarization to leverage lexical information provided by automatic speech recognition.

Speaker Diarization Speech Recognition

Graph Sequential Network for Reasoning over Sequences

no code implementations • 4 Apr 2020 Ming Tu, Jing Huang, Xiaodong He, Bo-Wen Zhou

We validate the proposed GSN on two NLP tasks: interpretable multi-hop reading comprehension on HotpotQA and graph based fact verification on FEVER.

Fact Verification Machine Reading Comprehension +1

Orthogonal Relation Transforms with Graph Context Modeling for Knowledge Graph Embedding

no code implementations ACL 2020 Yun Tang, Jing Huang, Guangtao Wang, Xiaodong He, Bo-Wen Zhou

Translational distance-based knowledge graph embedding has shown progressive improvements on the link prediction task, from TransE to the latest state-of-the-art RotatE.

Knowledge Graph Embedding Link Prediction

Select, Answer and Explain: Interpretable Multi-hop Reading Comprehension over Multiple Documents

no code implementations • 1 Nov 2019 Ming Tu, Kevin Huang, Guangtao Wang, Jing Huang, Xiaodong He, Bo-Wen Zhou

Interpretable multi-hop reading comprehension (RC) over multiple documents is a challenging problem because it demands reasoning over multiple information sources and explaining the answer prediction by providing supporting evidence.

Learning-To-Rank Multi-Hop Reading Comprehension +1

Relation Module for Non-answerable Prediction on Question Answering

no code implementations • 23 Oct 2019 Kevin Huang, Yun Tang, Jing Huang, Xiaodong He, Bo-Wen Zhou

In this paper, we aim to improve an MRC model's ability to determine whether a question has an answer in a given context (e.g., the recently proposed SQuAD 2.0 task).

Machine Reading Comprehension Question Answering

Zero-shot Text-to-SQL Learning with Auxiliary Task

1 code implementation • 29 Aug 2019 Shuaichen Chang, PengFei Liu, Yun Tang, Jing Huang, Xiaodong He, Bo-Wen Zhou

Recent years have seen great success in the use of neural seq2seq models on the text-to-SQL task.


Multiple instance learning with graph neural networks

no code implementations • 12 Jun 2019 Ming Tu, Jing Huang, Xiaodong He, Bo-Wen Zhou

In this paper, we propose a new end-to-end graph neural network (GNN) based algorithm for MIL: we treat each bag as a graph and use GNN to learn the bag embedding, in order to explore the useful structural information among instances in bags.

Multiple Instance Learning
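The bag-as-graph idea in the snippet above can be sketched in a few lines: instances in a bag are nodes, message passing mixes their features, and pooling yields one bag-level embedding. This is a toy illustration, not the authors' implementation; the layer count, nonlinearity, and pooling are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def gnn_bag_embedding(instances, adjacency, weight):
    """One round of message passing over the instances in a bag,
    followed by mean pooling into a single bag embedding.
    Toy sketch of the bag-as-graph idea; the paper's actual
    architecture (depth, pooling, attention) may differ."""
    # Add self-loops, then row-normalize so each node averages its neighborhood.
    a = adjacency + np.eye(len(instances))
    a = a / a.sum(axis=1, keepdims=True)
    # Aggregate neighbor features, apply a linear map and ReLU.
    h = np.maximum(a @ instances @ weight, 0.0)
    # Pool instance embeddings into one vector for the whole bag.
    return h.mean(axis=0)

# A bag of 4 instances with 3-dim features on a fully connected graph.
x = rng.normal(size=(4, 3))
adj = np.ones((4, 4)) - np.eye(4)
w = rng.normal(size=(3, 8))
bag_vec = gnn_bag_embedding(x, adj, w)
print(bag_vec.shape)  # (8,)
```

The bag embedding can then feed any standard classifier, which is what makes the approach end-to-end trainable.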

Reliable Weakly Supervised Learning: Maximize Gain and Maintain Safeness

no code implementations • 22 Apr 2019 Lan-Zhe Guo, Yu-Feng Li, Ming Li, Jin-Feng Yi, Bo-Wen Zhou, Zhi-Hua Zhou

We guide the optimization of label quality through a small amount of validation data, ensuring safe performance while maximizing the performance gain.

Few-shot Learning with Meta Metric Learners

no code implementations • 26 Jan 2019 Yu Cheng, Mo Yu, Xiaoxiao Guo, Bo-Wen Zhou

Our meta metric learning approach consists of task-specific learners, which exploit metric learning to handle flexible labels, and a meta learner, which discovers good parameters and gradient descent steps to specify the metrics in the task-specific learners.

Few-Shot Learning Metric Learning

End-to-end Structure-Aware Convolutional Networks for Knowledge Base Completion

1 code implementation • 11 Nov 2018 Chao Shang, Yun Tang, Jing Huang, Jinbo Bi, Xiaodong He, Bo-Wen Zhou

The recent graph convolutional network (GCN) provides another way of learning graph node embedding by successfully utilizing graph connectivity structure.

Knowledge Base Completion Knowledge Graph Embedding +2

Universal Stagewise Learning for Non-Convex Problems with Convergence on Averaged Solutions

no code implementations ICLR 2019 Zaiyi Chen, Zhuoning Yuan, Jin-Feng Yi, Bo-Wen Zhou, Enhong Chen, Tianbao Yang

For example, there is still a lack of convergence theory for SGD and its variants that use a stagewise step size and return an averaged solution in practice.

Scheduled Policy Optimization for Natural Language Communication with Intelligent Agents

3 code implementations • 16 Jun 2018 Wenhan Xiong, Xiaoxiao Guo, Mo Yu, Shiyu Chang, Bo-Wen Zhou, William Yang Wang

We investigate the task of learning to follow natural language instructions by jointly reasoning with visual observations and language inputs.

Efficient Exploration

Jointly Trained Sequential Labeling and Classification by Sparse Attention Neural Networks

no code implementations • 28 Sep 2017 Mingbo Ma, Kai Zhao, Liang Huang, Bing Xiang, Bo-Wen Zhou

In order to utilize the potential benefits from their correlations, we propose a jointly trained model for learning the two tasks simultaneously via Long Short-Term Memory (LSTM) networks.

General Classification Intent Classification +3

Learning to update Auto-associative Memory in Recurrent Neural Networks for Improving Sequence Memorization

no code implementations • 19 Sep 2017 Wei Zhang, Bo-Wen Zhou

Learning to remember long sequences remains a challenging task for recurrent neural networks.

Representation Learning

Robust Task Clustering for Deep Many-Task Learning

no code implementations • 26 Aug 2017 Mo Yu, Xiaoxiao Guo, Jin-Feng Yi, Shiyu Chang, Saloni Potdar, Gerald Tesauro, Haoyu Wang, Bo-Wen Zhou

We propose a new method to measure task similarities with a cross-task transfer performance matrix for the deep learning scenario.

Few-Shot Learning General Classification +4

SenGen: Sentence Generating Neural Variational Topic Model

no code implementations • 1 Aug 2017 Ramesh Nallapati, Igor Melnyk, Abhishek Kumar, Bo-Wen Zhou

We present a new topic model that generates documents by sampling a topic for one whole sentence at a time, and generating the words in the sentence using an RNN decoder that is conditioned on the topic of the sentence.

Learning Loss Functions for Semi-supervised Learning via Discriminative Adversarial Networks

no code implementations • 7 Jul 2017 Cicero Nogueira dos Santos, Kahini Wadhawan, Bo-Wen Zhou

We propose discriminative adversarial networks (DAN) for semi-supervised learning and loss function learning.

Neural Models for Sequence Chunking

1 code implementation • 15 Jan 2017 Feifei Zhai, Saloni Potdar, Bing Xiang, Bo-Wen Zhou

Many natural language understanding (NLU) tasks, such as shallow parsing (i.e., text chunking) and semantic slot filling, require the assignment of representative labels to the meaningful chunks in a sentence.

Chunking Natural Language Understanding +1

GaDei: On Scale-up Training As A Service For Deep Learning

no code implementations • 18 Nov 2016 Wei Zhang, Minwei Feng, Yunhui Zheng, Yufei Ren, Yandong Wang, Ji Liu, Peng Liu, Bing Xiang, Li Zhang, Bo-Wen Zhou, Fei Wang

By evaluating the NLC workloads, we show that only the conservative hyper-parameter setup (e.g., small mini-batch size and small learning rate) can guarantee acceptable model accuracy for a wide range of customers.

SummaRuNNer: A Recurrent Neural Network based Sequence Model for Extractive Summarization of Documents

6 code implementations • 14 Nov 2016 Ramesh Nallapati, FeiFei Zhai, Bo-Wen Zhou

We present SummaRuNNer, a Recurrent Neural Network (RNN) based sequence model for extractive summarization of documents and show that it achieves performance better than or comparable to state-of-the-art.

Document Summarization Extractive Summarization

Classify or Select: Neural Architectures for Extractive Document Summarization

no code implementations • 14 Nov 2016 Ramesh Nallapati, Bo-Wen Zhou, Mingbo Ma

The Selector architecture, on the other hand, is free to pick one sentence at a time in any arbitrary order to piece together the summary.

Document Summarization Extractive Summarization +1

End-to-End Answer Chunk Extraction and Ranking for Reading Comprehension

no code implementations • 31 Oct 2016 Yang Yu, Wei Zhang, Kazi Hasan, Mo Yu, Bing Xiang, Bo-Wen Zhou

This paper proposes dynamic chunk reader (DCR), an end-to-end neural reading comprehension (RC) model that is able to extract and rank a set of answer candidates from a given document to answer questions.

Question Answering Reading Comprehension

Simple Question Answering by Attentive Convolutional Neural Network

no code implementations COLING 2016 Wenpeng Yin, Mo Yu, Bing Xiang, Bo-Wen Zhou, Hinrich Schütze

In fact selection, we match the subject entity in a fact candidate with the entity mention in the question by a character-level convolutional neural network (char-CNN), and match the predicate in that fact with the question by a word-level CNN (word-CNN).

Entity Linking Question Answering

Multiresolution Recurrent Neural Networks: An Application to Dialogue Response Generation

4 code implementations • 2 Jun 2016 Iulian Vlad Serban, Tim Klinger, Gerald Tesauro, Kartik Talamadupula, Bo-Wen Zhou, Yoshua Bengio, Aaron Courville

We introduce the multiresolution recurrent neural network, which extends the sequence-to-sequence framework to model natural language generation as two parallel discrete stochastic processes: a sequence of high-level coarse tokens, and a sequence of natural language tokens.

Dialogue Generation

Pointing the Unknown Words

no code implementations ACL 2016 Caglar Gulcehre, Sungjin Ahn, Ramesh Nallapati, Bo-Wen Zhou, Yoshua Bengio

At each time-step, the decision of which softmax layer to use is made adaptively by an MLP conditioned on the context. We motivate our work with psychological evidence that humans naturally tend to point towards objects in the context or the environment when the name of an object is not known. We observe improvements on two tasks: neural machine translation on the Europarl English-to-French parallel corpora and text summarization on the Gigaword dataset using our proposed model.

Machine Translation Text Summarization
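The switching idea described above can be sketched as a gated mixture of two softmaxes: a scalar gate (produced in the paper by an MLP over the context; here just a fixed number) decides how much probability mass goes to the shortlist vocabulary versus copying a source position. All names and shapes below are illustrative assumptions, not the paper's code.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def pointer_mixture(vocab_logits, copy_scores, switch_prob, src_tokens):
    """Hypothetical sketch of a switching pointer mechanism:
    mix a shortlist softmax with a softmax over source positions,
    so out-of-shortlist words can be copied from the input."""
    p_vocab = softmax(vocab_logits)   # distribution over shortlist words
    p_copy = softmax(copy_scores)     # distribution over source positions
    mixed = switch_prob * p_vocab
    # Scatter copy probabilities onto the token ids they point at.
    for pos, tok in enumerate(src_tokens):
        mixed[tok] += (1.0 - switch_prob) * p_copy[pos]
    return mixed

rng = np.random.default_rng(2)
out = pointer_mixture(rng.normal(size=10), rng.normal(size=4),
                      switch_prob=0.7, src_tokens=[3, 8, 1, 3])
print(round(out.sum(), 6))  # 1.0
```

Because both components are proper distributions, the gated mixture still sums to one, so it can be trained with the usual cross-entropy loss.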

Abstractive Text Summarization Using Sequence-to-Sequence RNNs and Beyond

4 code implementations CONLL 2016 Ramesh Nallapati, Bo-Wen Zhou, Cicero Nogueira dos Santos, Caglar Gulcehre, Bing Xiang

In this work, we model abstractive text summarization using Attentional Encoder-Decoder Recurrent Neural Networks, and show that they achieve state-of-the-art performance on two different corpora.

Abstractive Text Summarization Sentence Summarization

Attentive Pooling Networks

3 code implementations • 11 Feb 2016 Cicero dos Santos, Ming Tan, Bing Xiang, Bo-Wen Zhou

In this work, we propose Attentive Pooling (AP), a two-way attention mechanism for discriminative model training.

Answer Selection Representation Learning
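A minimal sketch of the two-way attention described above: a soft alignment matrix G = tanh(Q·U·Aᵀ) lets each input attend over the other before pooling. The bilinear form and max-pooling follow the Attentive Pooling formulation; the shapes and random inputs are illustrative assumptions.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attentive_pooling(Q, A, U):
    """Two-way attention sketch: column/row max-pooling over the
    alignment matrix gives importance scores for each side, which
    weight the token representations into fixed-size vectors."""
    G = np.tanh(Q @ U @ A.T)           # (len_q, len_a) alignment scores
    sigma_q = softmax(G.max(axis=1))   # attention over question tokens
    sigma_a = softmax(G.max(axis=0))   # attention over answer tokens
    r_q = Q.T @ sigma_q                # attended question representation
    r_a = A.T @ sigma_a                # attended answer representation
    return r_q, r_a

rng = np.random.default_rng(1)
Q = rng.normal(size=(5, 4))   # question: 5 tokens, dim 4
A = rng.normal(size=(7, 4))   # answer: 7 tokens, dim 4
U = rng.normal(size=(4, 4))   # learned bilinear interaction matrix
r_q, r_a = attentive_pooling(Q, A, U)
print(r_q.shape, r_a.shape)  # (4,) (4,)
```

The two pooled vectors are typically compared with cosine similarity to score a question–answer pair.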

Leveraging Sentence-level Information with Encoder LSTM for Semantic Slot Filling

no code implementations EMNLP 2016 Gakuto Kurata, Bing Xiang, Bo-Wen Zhou, Mo Yu

Recurrent Neural Network (RNN) and one of its specific architectures, Long Short-Term Memory (LSTM), have been widely used for sequence labeling.

Natural Language Understanding Slot Filling

ABCNN: Attention-Based Convolutional Neural Network for Modeling Sentence Pairs

8 code implementations TACL 2016 Wenpeng Yin, Hinrich Schütze, Bing Xiang, Bo-Wen Zhou

(ii) We propose three attention schemes that integrate mutual influence between sentences into the CNN; thus, the representation of each sentence takes its counterpart into consideration.

Answer Selection Natural Language Inference +1

Good, Better, Best: Choosing Word Embedding Context

no code implementations • 19 Nov 2015 James Cross, Bing Xiang, Bo-Wen Zhou

We propose two methods of learning vector representations of words and phrases that each combine sentence context with structural features extracted from dependency trees.

LSTM-based Deep Learning Models for Non-factoid Answer Selection

1 code implementation • 12 Nov 2015 Ming Tan, Cicero dos Santos, Bing Xiang, Bo-Wen Zhou

One direction is to define a more composite representation for questions and answers by combining convolutional neural network with the basic framework.

Answer Selection

Distributed Deep Learning for Question Answering

no code implementations • 3 Nov 2015 Minwei Feng, Bing Xiang, Bo-Wen Zhou

This paper is an empirical study of the distributed deep learning for question answering subtasks: answer selection and question classification.

Answer Selection General Classification

Empirical Study on Deep Learning Models for Question Answering

no code implementations • 26 Oct 2015 Yang Yu, Wei Zhang, Chung-Wei Hang, Bing Xiang, Bo-Wen Zhou

In this paper we explore deep learning models with memory component or attention mechanism for question answering task.

Machine Translation Question Answering

Structured Memory for Neural Turing Machines

no code implementations • 14 Oct 2015 Wei Zhang, Yang Yu, Bo-Wen Zhou

Neural Turing Machines (NTM) contain a memory component that simulates the "working memory" in the brain to store and retrieve information, easing the learning of simple algorithms.

Applying Deep Learning to Answer Selection: A Study and An Open Task

3 code implementations • 7 Aug 2015 Minwei Feng, Bing Xiang, Michael R. Glass, Lidan Wang, Bo-Wen Zhou

We apply a general deep learning framework to address the non-factoid question answering task.

Answer Selection

Dependency-based Convolutional Neural Networks for Sentence Embedding

1 code implementation IJCNLP 2015 Mingbo Ma, Liang Huang, Bing Xiang, Bo-Wen Zhou

In sentence modeling and classification, convolutional neural network approaches have recently achieved state-of-the-art results, but all such efforts process word vectors sequentially and neglect long-distance dependencies.

General Classification Sentence Embedding

Medical Synonym Extraction with Concept Space Models

no code implementations • 1 Jun 2015 Chang Wang, Liangliang Cao, Bo-Wen Zhou

In this paper, we present a novel approach for medical synonym extraction.

Classifying Relations by Ranking with Convolutional Neural Networks

2 code implementations IJCNLP 2015 Cicero Nogueira dos Santos, Bing Xiang, Bo-Wen Zhou

Relation classification is an important semantic processing task for which state-of-the-art systems still rely on costly handcrafted features.

General Classification Relation Classification +1
