In this paper, we address unsupervised chunking as a new task of syntactic structure induction, which is helpful for understanding the linguistic structures of human languages as well as processing low-resource languages.
Document-level relation extraction (DocRE) aims to determine the relation between two entities from a document consisting of multiple sentences.
In previous work, researchers have developed several calibration methods to post-process the outputs of a predictor to obtain calibrated values, such as binning and scaling methods.
To this end, we propose a search-and-learning approach that leverages pretrained language models but inserts the missing slots to improve the semantic coverage.
How do we perform efficient inference while retaining high translation quality?
The key idea is to integrate powerful neural networks into metaheuristics (e.g., simulated annealing, SA) to restrict the search space in discrete optimization.
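As a rough illustration (not necessarily the authors' exact procedure), below is a minimal simulated-annealing loop in which a neural model scores candidate edits; `propose` and `score` are hypothetical callbacks standing in for the edit proposer and the pretrained scorer:

```python
import math
import random

def simulated_annealing(sentence, propose, score, steps=100, t0=1.0, decay=0.95):
    """Hill-climbing with occasional downhill moves.

    propose(x) -> a candidate edit of x (e.g., word insert/delete/replace),
    score(x)   -> scalar fitness from a neural model (both assumed).
    """
    current, current_score = sentence, score(sentence)
    temperature = t0
    for _ in range(steps):
        candidate = propose(current)
        cand_score = score(candidate)
        # Accept improvements outright; accept worse candidates with
        # probability exp((new - old) / T), so exploration shrinks as T decays.
        if cand_score > current_score or \
           random.random() < math.exp((cand_score - current_score) / temperature):
            current, current_score = candidate, cand_score
        temperature *= decay
    return current
```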
Explicitly modeling emotions in dialogue generation has important applications, such as building empathetic personal companions.
In this work, we address the explainability for NLI by weakly supervised logical reasoning, and propose an Explainable Phrasal Reasoning (EPR) approach.
Our two unsupervised methods refine sense annotations produced by a knowledge-based WSD system via lexical translations in a parallel corpus.
Multi-label emotion classification is an important task in NLP and is essential to many applications.
Existing graph neural networks (GNNs) largely rely on node embeddings, which represent a node as a vector by its identity, type, or content.
Conventional approaches for formality style transfer borrow models from neural machine translation, which typically requires massive parallel data for training.
We present a novel iterative, edit-based approach to unsupervised sentence simplification.
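A minimal sketch of such an edit loop, assuming hypothetical `edit_ops` (e.g., phrase deletion, reordering, lexical substitution) and a `score` function combining fluency and simplicity; the actual operations and scorer are assumptions, not the paper's specification:

```python
def simplify(sentence, edit_ops, score, max_iters=10):
    """Greedy iterative editing: apply every edit operation to the current
    sentence, keep the highest-scoring result, and stop once no edit helps."""
    current, current_score = sentence, score(sentence)
    for _ in range(max_iters):
        candidates = [op(current) for op in edit_ops]
        if not candidates:
            break
        best = max(candidates, key=score)
        if score(best) <= current_score:
            break  # no edit improves the sentence any further
        current, current_score = best, score(best)
    return current
```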
Automatic sentence summarization produces a shorter version of a sentence, while preserving its most important information.
TreeGen outperformed the previous state-of-the-art approach by 4.5 percentage points on HearthStone, and achieved the best accuracy among neural network-based approaches on ATIS (89.1%) and GEO (89.6%).
Generating relevant responses in a dialog is challenging: it requires not only proper modeling of the conversational context but also the ability to generate fluent sentences during inference.
Moreover, we can train our model on relatively small datasets and learn the latent representation of a specified class by adding external data with other styles/classes to our dataset.
Formality text style transfer plays an important role in various NLP applications, such as non-native speaker assistants and child education.
Unsupervised paraphrase generation is a promising and important research topic in natural language processing.
In this paper, we propose to generate sentences from disentangled syntactic and semantic spaces.
In our work, we propose an imitation learning approach to unsupervised parsing, where we transfer the syntactic knowledge induced by the PRPN to a Tree-LSTM model with discrete parsing actions.
In the natural language processing literature, neural networks are becoming increasingly deep and complex.
In this paper, we propose a grammar-based structural convolutional neural network (CNN) for code generation.
In real-world applications of natural language generation, there are often constraints on the target sentences in addition to fluency and naturalness requirements.
The memory bank provides a natural way of incremental domain adaptation (IDA): when adapting our model to a new domain, we progressively add new slots to the memory bank, which increases the number of parameters and thus the model capacity.
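A minimal PyTorch sketch of such a growing memory bank; the slot initialization, the freezing of old slots, and the attention-based read are illustrative assumptions, not the paper's exact specification:

```python
import torch
import torch.nn as nn

class ProgressiveMemory(nn.Module):
    """A memory bank that grows new slots when adapting to a new domain."""

    def __init__(self, num_slots, dim):
        super().__init__()
        self.dim = dim
        self.banks = nn.ParameterList(
            [nn.Parameter(torch.randn(num_slots, dim) * 0.02)])

    def add_slots(self, num_new, freeze_old=True):
        # New domain: optionally freeze existing slots so previously
        # learned knowledge is preserved, then append fresh trainable ones.
        if freeze_old:
            for p in self.banks:
                p.requires_grad_(False)
        self.banks.append(nn.Parameter(torch.randn(num_new, self.dim) * 0.02))

    def forward(self, query):  # query: (batch, dim)
        mem = torch.cat(list(self.banks), dim=0)       # (slots, dim)
        attn = torch.softmax(query @ mem.t(), dim=-1)  # attend over slots
        return attn @ mem                              # (batch, dim)
```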
This paper tackles the problem of disentangling the latent variables of style and content in language models.
Both the classification result and when to make the classification are part of the decision process, which is controlled by a policy network and trained with reinforcement learning.
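A hedged sketch of such a read-or-commit policy in PyTorch: at each timestep the agent either WAITs for more input or commits to a class, and a REINFORCE-style reward trades accuracy against delay. The architecture and reward are illustrative assumptions:

```python
import torch
import torch.nn as nn

class EarlyClassifier(nn.Module):
    """Policy network that decides both the class and when to decide."""

    def __init__(self, input_dim, hidden_dim, num_classes):
        super().__init__()
        self.rnn = nn.GRU(input_dim, hidden_dim, batch_first=True)
        # Actions 0..num_classes-1 predict that class; the last action WAITs.
        self.policy = nn.Linear(hidden_dim, num_classes + 1)

    def forward(self, x):  # x: (1, seq_len, input_dim)
        h, log_probs, action = None, [], None
        for t in range(x.size(1)):
            _, h = self.rnn(x[:, t:t + 1], h)
            dist = torch.distributions.Categorical(logits=self.policy(h[-1]))
            a = dist.sample()
            log_probs.append(dist.log_prob(a))
            action = a.item()
            if action < self.policy.out_features - 1:
                break  # the policy committed to a class; stop reading
        # If the loop ends on WAIT, the caller can treat it as a forced guess.
        return action, torch.stack(log_probs)

# REINFORCE-style update (sketch): reward = correct - delay_penalty * t,
# loss = -(reward * log_probs.sum())
```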
The variational autoencoder (VAE) imposes a probabilistic distribution (typically Gaussian) on the latent space and penalizes the Kullback-Leibler (KL) divergence between the posterior and prior.
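For reference, when the posterior is a diagonal Gaussian and the prior is the standard normal, this KL penalty has the well-known closed form

$$
D_{\mathrm{KL}}\!\left(\mathcal{N}(\boldsymbol\mu, \operatorname{diag}(\boldsymbol\sigma^2)) \,\big\|\, \mathcal{N}(\mathbf{0}, I)\right)
= \frac{1}{2}\sum_{i=1}^{d}\left(\mu_i^2 + \sigma_i^2 - \log \sigma_i^2 - 1\right),
$$

where $\boldsymbol\mu$ and $\boldsymbol\sigma^2$ are the posterior mean and variance produced by the encoder and $d$ is the latent dimensionality.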
The variational encoder-decoder (VED) encodes source information as a set of random variables using a neural network, which in turn is decoded into target data using another neural network.
This paper addresses the question: Why do neural dialog systems generate short and meaningless replies?
The Past and Future contents are fed to both the attention model and the decoder states, providing NMT systems with knowledge of the translated and untranslated contents.
Existing neural conversational models process natural language primarily on a lexico-syntactic level, thereby ignoring one of the most crucial components of human-to-human dialogue: its affective content.
Generating texts from structured data (e.g., a table) is important for various natural language processing tasks such as question answering and dialog systems.
Neural network-based dialog systems are attracting increasing attention in both academia and industry.
Generative conversational systems are attracting increasing attention in natural language processing (NLP).
Open-domain human-computer conversation has been attracting increasing attention over the past few years.
In human-computer conversation systems, the context of a user-issued utterance is particularly important because it provides useful background information about the conversation.
Such approaches are time- and memory-intensive because of the large numbers of parameters for word embeddings and the output layer.
Using neural networks to generate replies in human-computer dialogue systems has been attracting increasing attention over the past few years.
In this paper, we propose StalemateBreaker, a conversation system that can proactively introduce new content when appropriate.
Transfer learning aims to make use of valuable knowledge in a source domain to help model performance in a target domain.
However, existing neural networks for relation classification are usually of shallow architectures (e.g., one-layer convolutional neural networks or recurrent networks).
In this paper, we propose the TBCNN-pair model to recognize entailment and contradiction between two sentences.
Given a specific word, we use RNNs to generate the previous words and future words, either simultaneously or asynchronously, resulting in two model variants.
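A minimal sketch of the asynchronous variant, assuming hypothetical `prev_token` and `next_token` interfaces to the backward and forward RNNs (each maps a partial sentence to one more token, or None at a sentence boundary):

```python
def generate_around(word, prev_token, next_token, max_len=20):
    """Grow a sentence outward from a given word: a backward RNN first
    generates the words before it, then a forward RNN generates the rest."""
    sentence = [word]
    while len(sentence) < max_len:
        tok = prev_token(sentence)   # backward LM, reading right-to-left
        if tok is None:
            break
        sentence.insert(0, tok)
    while len(sentence) < max_len:
        tok = next_token(sentence)   # forward LM continues the prefix
        if tok is None:
            break
        sentence.append(tok)
    return " ".join(sentence)
```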
This paper envisions an end-to-end program generation scenario using recurrent neural networks (RNNs): Users can express their intention in natural language; an RNN then automatically generates corresponding code in a character-by-character fashion.
This paper aims to compare different regularization strategies to address a common phenomenon, severe overfitting, in embedding-based neural networks for NLP.
Relation classification is an important research arena in the field of natural language processing (NLP).
Distilling knowledge from a well-trained cumbersome network to a small one has recently become a new research topic, as lightweight, high-performance neural networks are particularly needed in various resource-restricted systems.
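As one concrete instance of such distillation (the classic soft-target formulation; the paper's exact loss may differ), a short PyTorch sketch:

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """KL between temperature-softened teacher and student distributions,
    mixed with the usual hard-label cross-entropy."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)  # rescale so gradients match the hard-label term
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```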
This paper proposes a tree-based convolutional neural network (TBCNN) for discriminative sentence modeling.
Programming language processing (similar to natural language processing) is a hot research topic in the field of software engineering; it has also attracted growing interest in the artificial intelligence community.
In this pioneering paper, we propose the "coding criterion" to build program vector representations, which are a prerequisite for applying deep learning to program analysis.