For complex parsing tasks, state-of-the-art methods are based on autoregressive sequence-to-sequence models that generate the parse directly.
Generating questions from natural language text has attracted increasing attention recently, and several schemes with promising results have been proposed that choose the right question words and copy relevant words from the input into the question.
The Variational Autoencoder (VAE) is widely used as a generative model that approximates the posterior over latent variables by combining amortized variational inference with deep neural networks.
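As a minimal sketch of the two ingredients named above, the snippet below shows amortized inference (one shared set of weights maps any input to its approximate posterior parameters) and the reparameterized sample plus Gaussian KL term that appear in the VAE objective. The linear encoder and all weight names here are illustrative assumptions, not any particular paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, W_mu, W_logvar):
    # Amortized inference: one shared mapping produces, for each input x,
    # the parameters of q(z|x) = N(mu, diag(exp(logvar))).
    return x @ W_mu, x @ W_logvar

def reparameterize(mu, logvar, rng):
    # z = mu + sigma * eps keeps the sample differentiable w.r.t. mu and logvar.
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def kl_to_standard_normal(mu, logvar):
    # KL( N(mu, sigma^2) || N(0, I) ), summed over latent dimensions;
    # this is the regularizer in the VAE's evidence lower bound.
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar, axis=-1)

# Toy batch: 4 inputs of dimension 8, latent dimension 2 (illustrative sizes).
x = rng.standard_normal((4, 8))
W_mu = rng.standard_normal((8, 2)) * 0.1
W_logvar = rng.standard_normal((8, 2)) * 0.1

mu, logvar = encode(x, W_mu, W_logvar)
z = reparameterize(mu, logvar, rng)
kl = kl_to_standard_normal(mu, logvar)
```

The closed-form KL term is what makes the Gaussian choice convenient: it is non-negative by construction and needs no sampling to estimate.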
We revise the attention distribution to focus on local and contextual semantic information by incorporating relative position information between utterances.
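One simple way to realize such a revision, sketched below under assumptions of my own (the distance-decay bias, the `decay` parameter, and the function name are all hypothetical, not the paper's formulation), is to subtract a penalty proportional to the relative distance between utterances from the attention logits before the softmax, so nearby utterances receive more weight.

```python
import numpy as np

def attention_with_relative_position(Q, K, V, positions, decay=0.5):
    # Standard scaled dot-product attention logits.
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    # Hypothetical locality bias: penalize pairs of utterances by their
    # absolute relative position, pushing weight toward local context.
    dist = np.abs(positions[:, None] - positions[None, :])
    scores = scores - decay * dist
    # Numerically stable softmax over each row.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights

rng = np.random.default_rng(0)
Q = rng.standard_normal((5, 8))
K = rng.standard_normal((5, 8))
V = rng.standard_normal((5, 8))
positions = np.arange(5, dtype=float)  # utterance indices in the dialogue
out, weights = attention_with_relative_position(Q, K, V, positions)
```

Because the bias is applied to the logits rather than the probabilities, each row of `weights` still sums to one.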
Taking an answer and its context as input, sequence-to-sequence models have made considerable progress on question generation.
In this paper, we propose GraphBTM, a novel method that represents biterms as graphs and designs a Graph Convolutional Network (GCN) with residual connections to extract transitive features from biterms.
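To make the building block concrete, here is a minimal sketch of one GCN layer with a residual connection over a toy graph. The symmetric normalization with self-loops and the additive residual are standard GCN ingredients; the toy adjacency matrix and weight shapes are assumptions for illustration, not GraphBTM's actual configuration.

```python
import numpy as np

def gcn_layer_with_residual(A, H, W):
    # Symmetrically normalized adjacency with self-loops:
    # D^{-1/2} (A + I) D^{-1/2}, the usual GCN propagation matrix.
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = (A_hat * d_inv_sqrt[:, None]) * d_inv_sqrt[None, :]
    # Propagate neighbor features, transform, apply ReLU,
    # then add the residual connection (requires matching feature dims).
    return np.maximum(A_norm @ H @ W, 0.0) + H

rng = np.random.default_rng(0)
# Toy 3-node path graph standing in for a biterm graph.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
H = rng.standard_normal((3, 4))        # node features
W = rng.standard_normal((4, 4)) * 0.1  # layer weights
H_next = gcn_layer_with_residual(A, H, W)
```

Stacking such layers lets information flow along multi-hop paths, which is what makes transitive features between biterms reachable.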
In this paper, we review recent findings on adversarial examples for deep neural networks, summarize the methods for generating adversarial examples, and propose a taxonomy of these methods.
Existing malware detectors on safety-critical devices struggle with runtime detection because of their performance overhead.
Our text detector achieves an F-measure of 77% on the ICDAR 2015 benchmark, advancing the state-of-the-art results in [18, 28].