59 papers with code • 0 benchmarks • 6 datasets
In this paper, we propose a novel approach for detecting humor in short texts based on the general linguistic structure of humor.
The training is based on the idea that a translated sentence should be mapped to the same location in the vector space as the original sentence.
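This objective can be sketched as minimizing the distance between the embedding of a sentence and the embedding of its translation. A minimal illustration, assuming a squared-Euclidean objective; the function name and toy vectors are hypothetical stand-ins, not the paper's actual model:

```python
import numpy as np

def alignment_loss(src_vec, tgt_vec):
    """Squared Euclidean distance between a sentence embedding and the
    embedding of its translation; training would minimize this so both
    map to (nearly) the same point in the vector space."""
    return float(np.sum((src_vec - tgt_vec) ** 2))

# Hypothetical embeddings of a sentence and its translation.
en = np.array([0.2, 0.8, -0.1])
fr = np.array([0.25, 0.75, -0.05])
loss = alignment_loss(en, fr)  # small when the two map to nearby points
```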
The analysis sheds light on the relative strengths of different sentence embedding methods with respect to these low-level prediction tasks, and on the effect of the encoded vector's dimensionality on the resulting representations.
Books are a rich source of both fine-grained information, such as what a character, an object or a scene looks like, and high-level semantics, such as what someone is thinking and feeling and how these states evolve through a story.
Recurrent neural networks (RNNs) and convolutional neural networks (CNNs) are widely used in NLP tasks to capture long-term and local dependencies, respectively.
Ranked #65 on Natural Language Inference on SNLI
Here, we generalize the concept of average word embeddings to power mean word embeddings.
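The power mean generalizes the arithmetic mean as ((1/n) Σ xᵢᵖ)^(1/p), applied elementwise over word vectors: p = 1 recovers the plain average, while p = +∞ and p = −∞ give the elementwise max and min. A minimal numpy sketch; the function name and toy vectors are illustrative, not from the paper, and fractional p is only well-defined for non-negative components:

```python
import numpy as np

def power_mean_embedding(word_vecs, p):
    """Elementwise power mean of word vectors: ((1/n) * sum(x**p)) ** (1/p).
    p=1 is the plain average; p=+inf / p=-inf are elementwise max / min."""
    X = np.asarray(word_vecs, dtype=float)
    if p == np.inf:
        return X.max(axis=0)
    if p == -np.inf:
        return X.min(axis=0)
    return np.mean(X ** p, axis=0) ** (1.0 / p)

# Two toy 2-d word vectors forming a "sentence".
vecs = [[1.0, 4.0], [3.0, 2.0]]
avg = power_mean_embedding(vecs, 1)       # elementwise average: [2., 3.]
mx = power_mean_embedding(vecs, np.inf)   # elementwise max: [3., 4.]
# Concatenating several power means yields a richer sentence embedding:
sent = np.concatenate([power_mean_embedding(vecs, p) for p in (1, np.inf, -np.inf)])
```

Concatenating the means for several values of p is what lets this representation strictly generalize plain averaged word embeddings.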