no code implementations • EMNLP 2020 • Shen Wang, Xiaokai Wei, Cicero Nogueira dos Santos, Zhiguo Wang, Ramesh Nallapati, Andrew Arnold, Bing Xiang, Philip S. Yu
Existing knowledge graph embedding approaches concentrate on modeling symmetry/asymmetry, inversion, and composition typed relations but overlook the hierarchical nature of relations.
no code implementations • 13 Mar 2024 • Ben Athiwaratkun, Shiqi Wang, Mingyue Shang, Yuchen Tian, Zijian Wang, Sujan Kumar Gonugondla, Sanjay Krishna Gouda, Rob Kwiatkowski, Ramesh Nallapati, Bing Xiang
Generative models, widely utilized in various applications, can often struggle with prompts corresponding to partial tokens.
no code implementations • 13 Mar 2024 • Ben Athiwaratkun, Sujan Kumar Gonugondla, Sanjay Krishna Gouda, Haifeng Qian, Hantian Ding, Qing Sun, Jun Wang, Jiacheng Guo, Liangfu Chen, Parminder Bhatia, Ramesh Nallapati, Sudipta Sengupta, Bing Xiang
This study introduces bifurcated attention, a method designed to enhance language model inference in shared-context batch decoding scenarios.
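The core idea can be pictured in a few lines: during incremental decoding, every sample in the batch shares the same context, so attention splits into a part over the shared context KV cache (stored once) and a part over each sample's own generated tokens, merged by a single softmax. A minimal single-head numpy sketch with illustrative shapes and names, not the paper's reference implementation:

    import numpy as np

    def softmax(x, axis=-1):
        e = np.exp(x - x.max(axis=axis, keepdims=True))
        return e / e.sum(axis=axis, keepdims=True)

    def bifurcated_attention(q, k_ctx, v_ctx, k_dec, v_dec):
        """q:     (batch, d)     query of the current decoding step, per sample
           k_ctx: (ctx_len, d)   keys of the context shared by every sample
           v_ctx: (ctx_len, d)   values of the shared context
           k_dec: (batch, t, d)  keys of each sample's own generated tokens
           v_dec: (batch, t, d)  values of each sample's generated tokens"""
        d = q.shape[-1]
        # The shared prefix is stored once and attended to by all samples,
        # instead of replicating its KV cache per batch element.
        logits_ctx = q @ k_ctx.T / np.sqrt(d)                        # (batch, ctx_len)
        logits_dec = np.einsum('bd,btd->bt', q, k_dec) / np.sqrt(d)  # (batch, t)
        # A single softmax over the concatenated logits keeps the output
        # identical to ordinary attention over the full sequence.
        w = softmax(np.concatenate([logits_ctx, logits_dec], axis=-1))
        w_ctx, w_dec = w[:, :k_ctx.shape[0]], w[:, k_ctx.shape[0]:]
        return w_ctx @ v_ctx + np.einsum('bt,btd->bd', w_dec, v_dec)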
no code implementations • 2 Feb 2024 • Dejiao Zhang, Wasi Ahmad, Ming Tan, Hantian Ding, Ramesh Nallapati, Dan Roth, Xiaofei Ma, Bing Xiang
Recent studies have shown that code language models at scale demonstrate significant performance gains on downstream tasks, i.e., code generation.
no code implementations • 10 Aug 2023 • Alexander Hanbo Li, Mingyue Shang, Evangelia Spiliopoulou, Jie Ma, Patrick Ng, Zhiguo Wang, Bonan Min, William Wang, Kathleen McKeown, Vittorio Castelli, Dan Roth, Bing Xiang
We present a novel approach for structured data-to-text generation that addresses the limitations of existing methods that primarily focus on specific types of structured data.
no code implementations • 11 Jul 2023 • Siddhartha Jain, Xiaofei Ma, Anoop Deoras, Bing Xiang
We show strong improvements when selecting the best k generations for code generation tasks, as well as robust improvements in selecting the best single generation for autoformalization, summarization, and translation.
no code implementations • 5 Jul 2023 • Prateek Yadav, Qing Sun, Hantian Ding, Xiaopeng Li, Dejiao Zhang, Ming Tan, Xiaofei Ma, Parminder Bhatia, Ramesh Nallapati, Murali Krishna Ramanathan, Mohit Bansal, Bing Xiang
Large-scale code generation models such as Codex and CodeT5 have achieved impressive performance.
no code implementations • 5 Jun 2023 • Hantian Ding, Varun Kumar, Yuchen Tian, Zijian Wang, Rob Kwiatkowski, Xiaopeng Li, Murali Krishna Ramanathan, Baishakhi Ray, Parminder Bhatia, Sudipta Sengupta, Dan Roth, Bing Xiang
Large language models trained on code have shown great potential to increase productivity of software developers.
1 code implementation • 31 May 2023 • Chenghao Yang, Fan Yin, He He, Kai-Wei Chang, Xiaofei Ma, Bing Xiang
In practice, Shapley Values are often estimated with a small number of stochastic model evaluations.
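For reference, the standard permutation-sampling estimator that such stochastic approximations build on looks like the sketch below; `model` and `baseline` are hypothetical placeholders, and the sampling scheme is the textbook one rather than the paper's specific setup:

    import random

    def shapley_estimate(model, x, baseline, n_samples=100):
        """Monte Carlo estimate of each feature's Shapley Value.
        model:    callable mapping a feature list to a scalar score
        x:        feature values of the instance being explained
        baseline: reference values used for 'absent' features"""
        n = len(x)
        phi = [0.0] * n
        for _ in range(n_samples):
            perm = random.sample(range(n), n)   # a random feature ordering
            current = list(baseline)
            prev_score = model(current)
            for i in perm:
                current[i] = x[i]               # reveal feature i
                score = model(current)
                # accumulate feature i's marginal contribution
                phi[i] += (score - prev_score) / n_samples
                prev_score = score
        return phi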
no code implementations • 30 May 2023 • Xingyu Fu, Sheng Zhang, Gukyeong Kwon, Pramuditha Perera, Henghui Zhu, Yuhao Zhang, Alexander Hanbo Li, William Yang Wang, Zhiguo Wang, Vittorio Castelli, Patrick Ng, Dan Roth, Bing Xiang
The open-ended Visual Question Answering (VQA) task requires AI models to jointly reason over visual and natural language inputs using world knowledge.
no code implementations • 27 May 2023 • Sijia Wang, Alexander Hanbo Li, Henry Zhu, Sheng Zhang, Chung-Wei Hang, Pramuditha Perera, Jie Ma, William Wang, Zhiguo Wang, Vittorio Castelli, Bing Xiang, Patrick Ng
Entities can be expressed in diverse formats, such as texts, images, or column names and cell values in tables.
1 code implementation • 25 May 2023 • Wuwei Lan, Zhiguo Wang, Anuj Chauhan, Henghui Zhu, Alexander Li, Jiang Guo, Sheng Zhang, Chung-Wei Hang, Joseph Lilien, Yiqun Hu, Lin Pan, Mingwen Dong, Jun Wang, Jiarong Jiang, Stephen Ash, Vittorio Castelli, Patrick Ng, Bing Xiang
A practical text-to-SQL system should generalize well on a wide variety of natural language questions, unseen database schemas, and novel SQL query structures.
no code implementations • 9 Mar 2023 • Xiaokai Wei, Sujan Gonugondla, Wasi Ahmad, Shiqi Wang, Baishakhi Ray, Haifeng Qian, Xiaopeng Li, Varun Kumar, Zijian Wang, Yuchen Tian, Qing Sun, Ben Athiwaratkun, Mingyue Shang, Murali Krishna Ramanathan, Parminder Bhatia, Bing Xiang
Such large models incur significant resource usage (in terms of memory, latency, and dollars), as well as a considerable carbon footprint.
no code implementations • 13 Feb 2023 • Danilo Ribeiro, Shen Wang, Xiaofei Ma, Henry Zhu, Rui Dong, Deguang Kong, Juliette Burger, Anjelica Ramos, William Wang, Zhiheng Huang, George Karypis, Bing Xiang, Dan Roth
We introduce STREET, a unified multi-task and multi-domain natural language reasoning and explanation benchmark.
2 code implementations • 21 Jan 2023 • Shuaichen Chang, Jun Wang, Mingwen Dong, Lin Pan, Henghui Zhu, Alexander Hanbo Li, Wuwei Lan, Sheng Zhang, Jiarong Jiang, Joseph Lilien, Steve Ash, William Yang Wang, Zhiguo Wang, Vittorio Castelli, Patrick Ng, Bing Xiang
Neural text-to-SQL models have achieved remarkable performance in translating natural language questions into SQL queries.
1 code implementation • 20 Dec 2022 • Yangruibo Ding, Zijian Wang, Wasi Uddin Ahmad, Murali Krishna Ramanathan, Ramesh Nallapati, Parminder Bhatia, Dan Roth, Bing Xiang
While pre-trained language models (LMs) for code have achieved great success in code completion, they generate code conditioned only on the contents within the file, i.e., the in-file context, and ignore the rich semantics in other files within the same project, i.e., the cross-file context, a critical source of information that is especially useful in modern modular software development.
2 code implementations • 20 Dec 2022 • Shiqi Wang, Zheng Li, Haifeng Qian, Chenghao Yang, Zijian Wang, Mingyue Shang, Varun Kumar, Samson Tan, Baishakhi Ray, Parminder Bhatia, Ramesh Nallapati, Murali Krishna Ramanathan, Dan Roth, Bing Xiang
Most existing works on robustness in text or code tasks have focused on classification, while robustness in generation tasks is an uncharted area: to date, there is no comprehensive benchmark for robustness in code generation.
no code implementations • 17 Dec 2022 • Yiyun Zhao, Jiarong Jiang, Yiqun Hu, Wuwei Lan, Henry Zhu, Anuj Chauhan, Alexander Li, Lin Pan, Jun Wang, Chung-Wei Hang, Sheng Zhang, Marvin Dong, Joe Lilien, Patrick Ng, Zhiguo Wang, Vittorio Castelli, Bing Xiang
In this paper, we first examined the existing synthesized datasets and discovered that state-of-the-art text-to-SQL algorithms did not further improve on popular benchmarks when trained with augmented synthetic data.
2 code implementations • 26 Oct 2022 • Ben Athiwaratkun, Sanjay Krishna Gouda, Zijian Wang, Xiaopeng Li, Yuchen Tian, Ming Tan, Wasi Uddin Ahmad, Shiqi Wang, Qing Sun, Mingyue Shang, Sujan Kumar Gonugondla, Hantian Ding, Varun Kumar, Nathan Fulton, Arash Farahani, Siddhartha Jain, Robert Giaquinto, Haifeng Qian, Murali Krishna Ramanathan, Ramesh Nallapati, Baishakhi Ray, Parminder Bhatia, Sudipta Sengupta, Dan Roth, Bing Xiang
Using these benchmarks, we are able to assess the performance of code generation models in a multi-lingual fashion, and we discover the generalization ability of language models on out-of-domain languages, the advantages of multi-lingual models over mono-lingual ones, the ability of few-shot prompting to teach the model new languages, and zero-shot translation abilities even in mono-lingual settings.
1 code implementation • 3 Oct 2022 • Nihal Jain, Dejiao Zhang, Wasi Uddin Ahmad, Zijian Wang, Feng Nan, Xiaopeng Li, Ming Tan, Ramesh Nallapati, Baishakhi Ray, Parminder Bhatia, Xiaofei Ma, Bing Xiang
Specifically, we attain $44\%$ relative improvement on the Semantic Textual Similarity tasks and $34\%$ on Code-to-Code Search tasks.
1 code implementation • 30 Sep 2022 • Donghan Yu, Sheng Zhang, Patrick Ng, Henghui Zhu, Alexander Hanbo Li, Jun Wang, Yiqun Hu, William Wang, Zhiguo Wang, Bing Xiang
Question answering over knowledge bases (KBs) aims to answer natural language questions with factual information such as entities and relations in KBs.
no code implementations • 28 Sep 2022 • Jun Wang, Patrick Ng, Alexander Hanbo Li, Jiarong Jiang, Zhiguo Wang, Ramesh Nallapati, Bing Xiang, Sudipta Sengupta
When synthesizing a SQL query, no explicit semantic information about the natural language question (NLQ) is available to the parser, which leads to poor generalization performance.
no code implementations • 10 Jun 2022 • Sheng Zhang, Patrick Ng, Zhiguo Wang, Bing Xiang
Our generative model is a unified framework to sequentially generate relational triplets under various relation extraction settings and explicitly utilizes relevant knowledge from Knowledge Graph (KG) to resolve ambiguities.
1 code implementation • NAACL 2022 • Zhihan Zhou, Dejiao Zhang, Wei Xiao, Nicholas Dingwall, Xiaofei Ma, Andrew O. Arnold, Bing Xiang
In this paper, we introduce Dialogue Sentence Embedding (DSE), a self-supervised contrastive learning method that learns effective dialogue representations suitable for a wide range of dialogue tasks.
2 code implementations • ACL 2022 • Zheng Li, Zijian Wang, Ming Tan, Ramesh Nallapati, Parminder Bhatia, Andrew Arnold, Bing Xiang, Dan Roth
Empirical analyses show that, despite the challenging nature of generative tasks, we were able to achieve a 16.5x model footprint compression ratio with little performance drop relative to the full-precision counterparts on multiple summarization and QA datasets.
no code implementations • Findings (EMNLP) 2021 • Peng Xu, Xinchi Chen, Xiaofei Ma, Zhiheng Huang, Bing Xiang
In this work, we propose to use a graph attention network on top of the available pretrained Transformers model to learn document embeddings.
no code implementations • 12 Oct 2021 • Peng Xu, Davis Liang, Zhiheng Huang, Bing Xiang
We propose a simple strategy to obtain an extractive answer span from the generative model by leveraging the decoder cross-attention patterns.
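One plausible reading of this strategy, sketched below: aggregate the decoder's cross-attention mass per input token, then return the contiguous window with the highest length-normalized mass. The aggregation and scoring choices here are illustrative assumptions, not the paper's exact procedure:

    import numpy as np

    def extract_span(cross_attn, max_len=15):
        """cross_attn: (n_layers, n_heads, tgt_len, src_len) attention weights.
        Returns (start, end) indices into the source tokens, end exclusive."""
        # Average over layers, heads, and generated positions to get one
        # importance score per source token.
        scores = cross_attn.mean(axis=(0, 1, 2))          # (src_len,)
        best, best_span = -np.inf, (0, 1)
        for start in range(len(scores)):
            for end in range(start + 1, min(start + max_len, len(scores)) + 1):
                # Length-normalize so long spans don't win by default.
                mass = scores[start:end].sum() / np.sqrt(end - start)
                if mass > best:
                    best, best_span = mass, (start, end)
        return best_span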
no code implementations • 27 Sep 2021 • Zhiheng Huang, Davis Liang, Peng Xu, Bing Xiang
Transformer models, which leverage architectural improvements like self-attention, perform remarkably well on Natural Language Processing (NLP) tasks.
1 code implementation • EMNLP 2021 • Dejiao Zhang, Shang-Wen Li, Wei Xiao, Henghui Zhu, Ramesh Nallapati, Andrew O. Arnold, Bing Xiang
Many recent successes in sentence representation learning have been achieved by simply fine-tuning on the Natural Language Inference (NLI) datasets with triplet loss or siamese loss.
1 code implementation • ACL 2021 • Alexander Hanbo Li, Patrick Ng, Peng Xu, Henghui Zhu, Zhiguo Wang, Bing Xiang
However, a large amount of the world's knowledge is stored in structured databases and needs to be accessed using query languages such as SQL.
no code implementations • 11 May 2021 • Yang Li, Ben Athiwaratkun, Cicero Nogueira dos Santos, Bing Xiang
In this work, we propose to leverage the prior information embedded in pretrained language models (LM) to improve generalization for intent classification and slot labeling tasks with limited training data.
1 code implementation • ACL 2021 • Feng Nan, Cicero Nogueira dos Santos, Henghui Zhu, Patrick Ng, Kathleen McKeown, Ramesh Nallapati, Dejiao Zhang, Zhiguo Wang, Andrew O. Arnold, Bing Xiang
A commonly observed problem with state-of-the-art abstractive summarization models is that the generated summaries can be factually inconsistent with the input documents.
no code implementations • EMNLP 2021 • Dheeru Dua, Cicero Nogueira dos Santos, Patrick Ng, Ben Athiwaratkun, Bing Xiang, Matt Gardner, Sameer Singh
Compositional reasoning tasks, like multi-hop question answering, require making latent decisions to arrive at the final answer, given a question.
no code implementations • EACL 2021 • Zhiguo Wang, Patrick Ng, Ramesh Nallapati, Bing Xiang
Experiments show that: (1) our IR-based retrieval method is able to collect high-quality candidates efficiently, thus enabling our method to adapt to large-scale KBs easily; (2) the BERT model improves the accuracy across all three sub-tasks; and (3) benefiting from multi-task learning, the unified model obtains further improvements with only 1/3 of the original parameters.
2 code implementations • NAACL 2021 • Dejiao Zhang, Feng Nan, Xiaokai Wei, Shangwen Li, Henghui Zhu, Kathleen McKeown, Ramesh Nallapati, Andrew Arnold, Bing Xiang
Unsupervised clustering aims at discovering the semantic categories of data according to some distance measured in the representation space (a contrastive-learning sketch follows below).
Ranked #1 on Short Text Clustering on AG News
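The sketch promised above: work in this line typically pairs the clustering objective with an instance-level contrastive (NT-Xent) loss over two augmentations of each text, pulling a text's two views together and pushing everything else apart. A minimal PyTorch version; the shapes and temperature are illustrative assumptions:

    import torch
    import torch.nn.functional as F

    def nt_xent(z1, z2, temperature=0.5):
        """z1, z2: (batch, dim) embeddings of two augmentations of one batch."""
        z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # (2B, dim)
        sim = z @ z.T / temperature                         # cosine similarities
        n = z.shape[0]
        sim.fill_diagonal_(float('-inf'))                   # drop self-pairs
        # Row i's positive is its other view: i+B for the first half,
        # i-B for the second half.
        targets = torch.arange(n).roll(n // 2)
        return F.cross_entropy(sim, targets)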
1 code implementation • EACL 2021 • Feng Nan, Ramesh Nallapati, Zhiguo Wang, Cicero Nogueira dos Santos, Henghui Zhu, Dejiao Zhang, Kathleen McKeown, Bing Xiang
A key challenge for abstractive summarization is ensuring factual consistency of the generated summary with respect to the original document.
2 code implementations • ICLR 2021 • Giovanni Paolini, Ben Athiwaratkun, Jason Krone, Jie Ma, Alessandro Achille, Rishita Anubhai, Cicero Nogueira dos Santos, Bing Xiang, Stefano Soatto
We propose a new framework, Translation between Augmented Natural Languages (TANL), to solve many structured prediction language tasks including joint entity and relation extraction, nested named entity recognition, relation classification, semantic role labeling, event extraction, coreference resolution, and dialogue state tracking.
Ranked #3 on Relation Classification on TACRED
3 code implementations • 18 Dec 2020 • Peng Shi, Patrick Ng, Zhiguo Wang, Henghui Zhu, Alexander Hanbo Li, Jun Wang, Cicero Nogueira dos Santos, Bing Xiang
Most recently, there has been significant interest in learning contextual representations for various NLP tasks, by leveraging large scale text corpora to train large neural language models with self-supervised learning objectives, such as Masked Language Model (MLM).
Ranked #7 on Semantic Parsing on Spider
1 code implementation • ACL 2021 • Yifan Gao, Henghui Zhu, Patrick Ng, Cicero Nogueira dos Santos, Zhiguo Wang, Feng Nan, Dejiao Zhang, Ramesh Nallapati, Andrew O. Arnold, Bing Xiang
When multiple plausible answers are found, the system should rewrite the question for each answer to resolve the ambiguity.
no code implementations • Findings of the Association for Computational Linguistics 2020 • Dejiao Zhang, Ramesh Nallapati, Henghui Zhu, Feng Nan, Cicero Nogueira dos Santos, Kathleen McKeown, Bing Xiang
Unsupervised domain adaptation addresses the problem of leveraging labeled data in a source domain to learn a well-performing model in a target domain where labels are unavailable.
Cross-Lingual Document Classification • Document Classification
1 code implementation • EMNLP 2020 • Siamak Shakeri, Cicero Nogueira dos Santos, Henry Zhu, Patrick Ng, Feng Nan, Zhiguo Wang, Ramesh Nallapati, Bing Xiang
Our model comprises a single transformer-based encoder-decoder network that is trained end-to-end to generate both answers and questions.
no code implementations • EMNLP 2020 • Cicero Nogueira dos Santos, Xiaofei Ma, Ramesh Nallapati, Zhiheng Huang, Bing Xiang
Generative models for Information Retrieval, where ranking of documents is viewed as the task of generating a query from a document's language model, were very successful in various IR tasks in the past.
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Zhiheng Huang, Davis Liang, Peng Xu, Bing Xiang
In this paper, we first review absolute position embeddings and existing methods for relative position embeddings.
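As a concrete instance of the relative-embedding family reviewed here, a scalar-bias variant (in the spirit of Shaw et al.'s clipped offsets; the exact parameterization below is an illustrative assumption) adds a learned bias per clipped query-key distance to the attention logits:

    import numpy as np

    def relative_attention_logits(q, k, rel_bias, max_dist=8):
        """q, k:     (seq_len, d) queries and keys for one head
           rel_bias: (2 * max_dist + 1,) learned scalar bias per clipped offset"""
        d = q.shape[-1]
        logits = q @ k.T / np.sqrt(d)            # content term, (seq, seq)
        seq_len = q.shape[0]
        # Offset of key position j relative to query position i, clipped so
        # distant pairs share one bucket, then shifted to [0, 2*max_dist].
        offsets = np.arange(seq_len)[None, :] - np.arange(seq_len)[:, None]
        offsets = np.clip(offsets, -max_dist, max_dist) + max_dist
        return logits + rel_bias[offsets]        # content + position terms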
1 code implementation • 22 Sep 2020 • Davis Liang, Peng Xu, Siamak Shakeri, Cicero Nogueira dos Santos, Ramesh Nallapati, Zhiheng Huang, Bing Xiang
In some cases, our model trained on synthetic data can even outperform the same model trained on real data.
no code implementations • EMNLP 2020 • Ben Athiwaratkun, Cicero Nogueira dos Santos, Jason Krone, Bing Xiang
We set a new state-of-the-art for few-shot slot labeling, improving substantially upon the previous 5-shot ($75.0\% \rightarrow 90.9\%$) and 1-shot ($70.4\% \rightarrow 81.0\%$) state-of-the-art results.
no code implementations • 17 Jul 2020 • Parminder Bhatia, Lan Liu, Kristjan Arumae, Nima Pourdamghani, Suyog Deshpande, Ben Snively, Mona Mona, Colby Wise, George Price, Shyam Ramaswamy, Xiaofei Ma, Ramesh Nallapati, Zhiheng Huang, Bing Xiang, Taha Kass-Hout
Coronavirus disease (COVID-19) has been declared a pandemic by the WHO, with thousands of cases being reported each day.
1 code implementation • ACL 2020 • Alexander R. Fabbri, Patrick Ng, Zhiguo Wang, Ramesh Nallapati, Bing Xiang
Training a QA model on this data gives a relative improvement of about 14% in F1 score on the SQuAD dataset over a previous unsupervised model, and 20% when the answer is a named entity, achieving state-of-the-art performance on SQuAD for unsupervised QA.
no code implementations • 16 Mar 2020 • Zhiheng Huang, Peng Xu, Davis Liang, Ajay Mishra, Bing Xiang
Prior to the transformer era, bidirectional Long Short-Term Memory (BLSTM) has been the dominant modeling architecture for neural machine translation and question answering.
Ranked #1 on Text Classification on GLUE MRPC
1 code implementation • 25 Nov 2019 • Henghui Zhu, Feng Nan, Zhiguo Wang, Ramesh Nallapati, Bing Xiang
In this work, we define the problem of conversation structure modeling as identifying the parent utterance(s) to which each utterance in the conversation responds.
no code implementations • WS 2019 • Xiaofei Ma, Peng Xu, Zhiguo Wang, Ramesh Nallapati, Bing Xiang
The performance of deep neural models can deteriorate substantially when there is a domain shift between training and test data.
no code implementations • 17 Oct 2019 • Xiaofei Ma, Zhiguo Wang, Patrick Ng, Ramesh Nallapati, Bing Xiang
We present a systematic investigation of layer-wise BERT activations for general-purpose text representations to understand what linguistic information they capture and how transferable they are across different tasks.
no code implementations • IJCNLP 2019 • Zhiguo Wang, Patrick Ng, Xiaofei Ma, Ramesh Nallapati, Bing Xiang
To tackle this issue, we propose a multi-passage BERT model to globally normalize answer scores across all passages of the same question, and this change enables our QA model to find better answers by utilizing more passages; a small normalization sketch follows below.
Ranked #3 on Open-Domain Question Answering on SearchQA
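The normalization sketch promised above: rather than a softmax over candidate spans within each passage independently, apply one softmax across every span from every passage, so scores become directly comparable across passages. Names and shapes are illustrative:

    import numpy as np

    def per_passage_probs(span_scores):
        """span_scores: list of 1-D arrays, one array of span logits per passage.
        Baseline: each passage is normalized on its own."""
        out = []
        for s in span_scores:
            e = np.exp(s - s.max())
            out.append(e / e.sum())
        return out

    def globally_normalized_probs(span_scores):
        """One softmax over every candidate span from every passage."""
        flat = np.concatenate(span_scores)
        e = np.exp(flat - flat.max())
        probs = e / e.sum()
        sizes = np.cumsum([len(s) for s in span_scores])[:-1]
        return np.split(probs, sizes)   # regrouped per passage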
1 code implementation • ACL 2019 • Feng Nan, Ran Ding, Ramesh Nallapati, Bing Xiang
To measure the diversity of the produced topics, we propose a simple topic uniqueness metric.
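Under one natural reading of such a metric (an assumption, since the excerpt does not spell out the formula): for each top word of each topic, count how many topics' top-word lists contain it, and average the reciprocal counts, so fully disjoint topics score 1.0:

    from collections import Counter

    def topic_uniqueness(topics):
        """topics: list of lists, each holding the top-L words of one topic."""
        # How many topics contain each word among their top words.
        counts = Counter(w for topic in topics for w in set(topic))
        per_topic = [sum(1.0 / counts[w] for w in topic) / len(topic)
                     for topic in topics]
        return sum(per_topic) / len(per_topic)

    # Example: two topics sharing one word score below 1.0.
    print(topic_uniqueness([["market", "stock", "trade"],
                            ["game", "team", "trade"]]))  # ~0.833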
no code implementations • ICLR Workshop LLD 2019 • Peng Xu, Xiaofei Ma, Ramesh Nallapati, Bing Xiang
In this paper, we propose a weak supervision framework for neural ranking tasks based on the data programming paradigm (Ratner et al., 2016), which enables us to leverage multiple weak supervision signals from different sources.
no code implementations • 8 Apr 2019 • Zhiheng Huang, Bing Xiang
In this paper, we propose a novel way of architecture search by means of weighted networks (WeNet), which consist of a number of networks, with each assigned a weight.
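A minimal PyTorch sketch of the idea as described, purely illustrative rather than the paper's code: run a set of candidate networks on the same input, combine their outputs with learnable weights, and let training reveal which architectures matter.

    import torch
    import torch.nn as nn

    class WeNet(nn.Module):
        def __init__(self, candidates):
            """candidates: list of nn.Module, all producing the same output shape."""
            super().__init__()
            self.candidates = nn.ModuleList(candidates)
            # One learnable logit per candidate network.
            self.logits = nn.Parameter(torch.zeros(len(candidates)))

        def forward(self, x):
            weights = torch.softmax(self.logits, dim=0)
            outputs = torch.stack([net(x) for net in self.candidates])
            # Weighted sum of candidate outputs; after training, the largest
            # weights point at the architectures worth keeping.
            return torch.einsum('c,c...->...', weights, outputs)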
no code implementations • CVPR 2019 • Pramuditha Perera, Ramesh Nallapati, Bing Xiang
The key contribution of our work is our proposal to explicitly constrain the latent space to exclusively represent the given class.
Ranked #6 on Anomaly Detection on Hyper-Kvasir Dataset
no code implementations • ICLR Workshop LLD 2019 • Ian Gemp, Ramesh Nallapati, Ran Ding, Feng Nan, Bing Xiang
We extend NTMs to the weakly semi-supervised setting by using informative priors in the training objective.
2 code implementations • EMNLP 2018 • Ran Ding, Ramesh Nallapati, Bing Xiang
Topic models are evaluated based on their ability to describe documents well (i.e., low perplexity) and to produce topics that carry coherent semantic meaning.
no code implementations • ACL 2018 • Rui Zhang, Cicero Nogueira dos Santos, Michihiro Yasunaga, Bing Xiang, Dragomir Radev
Coreference resolution aims to identify in a text all mentions that refer to the same real-world entity.
no code implementations • ACL 2017 • Mingbo Ma, Liang Huang, Bing Xiang, Bo-Wen Zhou
Question classification is an important task with wide applications.
no code implementations • 28 Sep 2017 • Mingbo Ma, Kai Zhao, Liang Huang, Bing Xiang, Bo-Wen Zhou
To exploit the potential benefits of their correlation, we propose a jointly trained model for learning the two tasks simultaneously via Long Short-Term Memory (LSTM) networks.
no code implementations • ACL 2017 • Mo Yu, Wenpeng Yin, Kazi Saidul Hasan, Cicero dos Santos, Bing Xiang, Bo-Wen Zhou
Relation detection is a core component for many NLP applications including Knowledge Base Question Answering (KBQA).
52 code implementations • 9 Mar 2017 • Zhouhan Lin, Minwei Feng, Cicero Nogueira dos Santos, Mo Yu, Bing Xiang, Bo-Wen Zhou, Yoshua Bengio
This paper proposes a new model for extracting an interpretable sentence embedding by introducing self-attention.
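The mechanism fits in a few lines: r attention distributions A = softmax(Ws2 tanh(Ws1 H^T)) over the n LSTM hidden states H, and a matrix sentence embedding M = A H. A numpy sketch of those two equations (the training penalty that keeps hops diverse is omitted):

    import numpy as np

    def softmax(x, axis=-1):
        e = np.exp(x - x.max(axis=axis, keepdims=True))
        return e / e.sum(axis=axis, keepdims=True)

    def self_attentive_embedding(H, Ws1, Ws2):
        """H:   (n, 2u)  hidden states of a bidirectional LSTM over n tokens
           Ws1: (da, 2u) first projection
           Ws2: (r, da)  one row per attention hop
        Returns M: (r, 2u), one weighted sum of the states per hop."""
        A = softmax(Ws2 @ np.tanh(Ws1 @ H.T), axis=1)  # (r, n) attention weights
        return A @ H                                   # (r, 2u) matrix embedding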
1 code implementation • 15 Jan 2017 • Feifei Zhai, Saloni Potdar, Bing Xiang, Bo-Wen Zhou
Many natural language understanding (NLU) tasks, such as shallow parsing (i.e., text chunking) and semantic slot filling, require the assignment of representative labels to the meaningful chunks in a sentence.
no code implementations • 18 Nov 2016 • Wei Zhang, Minwei Feng, Yunhui Zheng, Yufei Ren, Yandong Wang, Ji Liu, Peng Liu, Bing Xiang, Li Zhang, Bo-Wen Zhou, Fei Wang
By evaluating the NLC workloads, we show that only the conservative hyper-parameter setup (e.g., small mini-batch size and small learning rate) can guarantee acceptable model accuracy for a wide range of customers.
no code implementations • 31 Oct 2016 • Yang Yu, Wei Zhang, Kazi Hasan, Mo Yu, Bing Xiang, Bo-Wen Zhou
This paper proposes dynamic chunk reader (DCR), an end-to-end neural reading comprehension (RC) model that is able to extract and rank a set of answer candidates from a given document to answer questions.
Ranked #49 on Question Answering on SQuAD1.1 dev
no code implementations • COLING 2016 • Wenpeng Yin, Mo Yu, Bing Xiang, Bo-Wen Zhou, Hinrich Schütze
In fact selection, we match the subject entity in a fact candidate with the entity mention in the question by a character-level convolutional neural network (char-CNN), and match the predicate in that fact with the question by a word-level CNN (word-CNN).
4 code implementations • CoNLL 2016 • Ramesh Nallapati, Bo-Wen Zhou, Cicero Nogueira dos Santos, Caglar Gulcehre, Bing Xiang
In this work, we model abstractive text summarization using Attentional Encoder-Decoder Recurrent Neural Networks, and show that they achieve state-of-the-art performance on two different corpora.
Ranked #10 on Text Summarization on DUC 2004 Task 1
3 code implementations • 11 Feb 2016 • Cicero dos Santos, Ming Tan, Bing Xiang, Bo-Wen Zhou
In this work, we propose Attentive Pooling (AP), a two-way attention mechanism for discriminative model training (a numpy sketch follows below).
Ranked #2 on Question Answering on SemEvalCQA
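The sketch promised above: AP computes a soft alignment G = tanh(Q^T U A) between question and answer token representations, max-pools G along each axis to obtain attention over the other side, and returns the attention-weighted token sums:

    import numpy as np

    def softmax(x):
        e = np.exp(x - x.max())
        return e / e.sum()

    def attentive_pooling(Q, A, U):
        """Q: (d, m) question token vectors, A: (d, n) answer token vectors,
           U: (d, d) learned bilinear interaction matrix."""
        G = np.tanh(Q.T @ U @ A)          # (m, n) soft alignment scores
        sigma_q = softmax(G.max(axis=1))  # attention over question tokens
        sigma_a = softmax(G.max(axis=0))  # attention over answer tokens
        return Q @ sigma_q, A @ sigma_a   # (d,) pooled representations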
no code implementations • EMNLP 2016 • Gakuto Kurata, Bing Xiang, Bo-Wen Zhou, Mo Yu
Recurrent Neural Network (RNN) and one of its specific architectures, Long Short-Term Memory (LSTM), have been widely used for sequence labeling.
8 code implementations • TACL 2016 • Wenpeng Yin, Hinrich Schütze, Bing Xiang, Bo-Wen Zhou
(ii) We propose three attention schemes that integrate mutual influence between sentences into CNN; thus, the representation of each sentence takes into consideration its counterpart.
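A sketch of the attention matrix that couples the two sentences in these schemes, using the 1 / (1 + euclidean distance) match score; how the matrix is folded back into each sentence's convolution input varies by scheme and is not shown here:

    import numpy as np

    def attention_matrix(S0, S1):
        """S0: (d, m) and S1: (d, n) token-vector feature maps of two sentences.
        Returns A: (m, n), where A[i, j] scores how well token i of sentence 0
        matches token j of sentence 1."""
        diff = S0[:, :, None] - S1[:, None, :]   # (d, m, n) pairwise differences
        dist = np.sqrt((diff ** 2).sum(axis=0))  # (m, n) euclidean distances
        return 1.0 / (1.0 + dist)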
no code implementations • 19 Nov 2015 • James Cross, Bing Xiang, Bo-Wen Zhou
We propose two methods of learning vector representations of words and phrases that each combine sentence context with structural features extracted from dependency trees.
2 code implementations • 12 Nov 2015 • Ming Tan, Cicero dos Santos, Bing Xiang, Bo-Wen Zhou
One direction is to define a more composite representation for questions and answers by combining convolutional neural network with the basic framework.
no code implementations • 3 Nov 2015 • Minwei Feng, Bing Xiang, Bo-Wen Zhou
This paper is an empirical study of the distributed deep learning for question answering subtasks: answer selection and question classification.
no code implementations • 26 Oct 2015 • Yang Yu, Wei Zhang, Chung-Wei Hang, Bing Xiang, Bo-Wen Zhou
In this paper we explore deep learning models with a memory component or attention mechanism for the question answering task.
2 code implementations • 7 Aug 2015 • Minwei Feng, Bing Xiang, Michael R. Glass, Lidan Wang, Bo-Wen Zhou
We apply a general deep learning framework to address the non-factoid question answering task.
1 code implementation • IJCNLP 2015 • Mingbo Ma, Liang Huang, Bing Xiang, Bo-Wen Zhou
In sentence modeling and classification, convolutional neural network approaches have recently achieved state-of-the-art results, but all such efforts process word vectors sequentially and neglect long-distance dependencies.
2 code implementations • IJCNLP 2015 • Cicero Nogueira dos Santos, Bing Xiang, Bo-Wen Zhou
Relation classification is an important semantic processing task for which state-of-the-art systems still rely on costly handcrafted features.
Ranked #27 on Relation Extraction on SemEval-2010 Task-8