In this paper, we propose a novel approach for detecting humor in short texts based on the general linguistic structure of humor.
The training is based on the idea that a translated sentence should be mapped to the same location in the vector space as the original sentence.
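This objective can be sketched as a loss that penalizes distance between the embedding of a sentence and the embedding of its translation. The squared-distance form below is an illustrative assumption, not the paper's exact formulation:

```python
import numpy as np

def alignment_loss(src_emb, trans_emb):
    """Mean squared distance between source-sentence embeddings and the
    embeddings of their translations; minimizing it pulls each translated
    sentence toward the same point in the vector space as the original.
    (Hypothetical loss form for illustration.)"""
    src = np.asarray(src_emb, dtype=float)
    trans = np.asarray(trans_emb, dtype=float)
    # sum squared differences per sentence pair, then average over the batch
    return float(np.mean(np.sum((src - trans) ** 2, axis=-1)))
```

Identical embeddings give a loss of zero, so gradient descent on this quantity drives the two representations together.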
The analysis sheds light on the relative strengths of different sentence embedding methods with respect to these low-level prediction tasks, and on the effect of the encoded vector's dimensionality on the resulting representations.
Despite the fast developmental pace of new sentence embedding methods, it is still challenging to find comprehensive evaluations of these different techniques.
Books are a rich source of both fine-grained information, such as how a character, an object, or a scene looks, and high-level semantics, such as what someone is thinking or feeling and how these states evolve through a story.
Recurrent neural nets (RNNs) and convolutional neural nets (CNNs) are widely used in NLP tasks to capture long-term and local dependencies, respectively.
Here, we generalize the concept of average word embeddings to power mean word embeddings.
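The elementwise power mean generalizes the plain average: p = 1 recovers the mean, while p = +inf and p = -inf give the elementwise maximum and minimum. A minimal sketch, concatenating power means over several p values to form a sentence embedding (the choice of p values here is illustrative):

```python
import numpy as np

def power_mean(vectors, p):
    """Elementwise power mean of a (n_words, dim) array of word embeddings.
    p=1 is the ordinary average; p=+inf/-inf reduce to elementwise max/min."""
    v = np.asarray(vectors, dtype=float)
    if np.isposinf(p):
        return v.max(axis=0)
    if np.isneginf(p):
        return v.min(axis=0)
    return np.mean(v ** p, axis=0) ** (1.0 / p)

def sentence_embedding(vectors, ps=(-np.inf, 1.0, np.inf)):
    # concatenate power means for each chosen p, tripling the dimensionality
    return np.concatenate([power_mean(vectors, p) for p in ps])
```

With the default p values, a sentence over d-dimensional word vectors yields a 3d-dimensional embedding; averaging alone corresponds to the single p = 1 component.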
Ham achieves a state-of-the-art BLEU score of 0.26 on the Chinese poem generation task and an average improvement of nearly 6.5% over existing machine reading comprehension models such as BIDAF and Match-LSTM.