Natural language inference is the task of determining whether a "hypothesis" is true (entailment), false (contradiction), or undetermined (neutral) given a "premise".
| Premise | Label | Hypothesis |
| --- | --- | --- |
| A man inspects the uniform of a figure in some East Asian country. | contradiction | The man is sleeping. |
| An older and younger man smiling. | neutral | Two men are smiling and laughing at the cats playing on the floor. |
| A soccer game with multiple males playing. | entailment | Some men are playing a sport. |
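The task setup above can be sketched with a toy baseline. This is a minimal lexical-overlap heuristic, not a real NLI model: the `classify` function, the `NEGATIONS` word list, and the 0.5 overlap threshold are all illustrative assumptions, shown only to make the three-way label space concrete.

```python
# Toy three-way NLI heuristic (illustration only, not a trained model).
NEGATIONS = {"not", "no", "never", "none", "nobody"}


def tokens(text: str) -> set[str]:
    """Lowercase bag of words with basic punctuation stripped."""
    return {w.strip(".,!?").lower() for w in text.split()}


def classify(premise: str, hypothesis: str) -> str:
    """Return 'entailment', 'contradiction', or 'neutral' via crude lexical cues."""
    p, h = tokens(premise), tokens(hypothesis)
    # A negation word in the hypothesis but not the premise hints at contradiction.
    if (NEGATIONS & h) and not (NEGATIONS & p):
        return "contradiction"
    # High lexical overlap of hypothesis words with the premise hints at entailment.
    overlap = len(p & h) / max(len(h), 1)
    if overlap >= 0.5:
        return "entailment"
    return "neutral"
```

A heuristic like this fails on exactly the cases that make NLI interesting (paraphrase, world knowledge), which is why the methods below use learned representations.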
In this work, we propose an attention mechanism over Tree-LSTMs to learn more meaningful and explainable parse tree structures.
We propose a new diagnostic test suite that allows one to assess whether a dataset constitutes a good testbed for evaluating models' meaning-understanding capabilities.
Reasoning about tabular information presents unique challenges to modern NLP approaches which largely rely on pre-trained contextualized embeddings of text.
Most existing methods generate post-hoc explanations for neural network models by identifying individual feature attributions or detecting interactions between adjacent features.
In this paper, we explore improving pretrained models with label hierarchies for the zero-shot multi-label text classification (ZS-MTC) task.
To guarantee user acceptability, all the text transformations are linguistically based, and we provide a human evaluation for each one.
In our setup, an algorithm is trained on several source domains and then applied to examples from a domain that is unseen at training time.
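The leave-one-domain-out setup described above can be sketched as a simple data split. This is a hypothetical helper, not the paper's code; the `leave_one_domain_out` name and the genre labels in the usage are illustrative assumptions.

```python
# Sketch of a domain-generalization split: train on all source domains,
# evaluate on one held-out domain that the model never sees in training.
def leave_one_domain_out(examples, held_out):
    """examples: list of (domain, input, label) triples.

    Returns (train, test) lists of (input, label) pairs, where test
    contains only the held-out domain and train contains the rest.
    """
    train = [(x, y) for d, x, y in examples if d != held_out]
    test = [(x, y) for d, x, y in examples if d == held_out]
    return train, test


# Illustrative usage with made-up genre-style domains.
data = [
    ("fiction", "premise/hypothesis pair 1", "entailment"),
    ("travel", "premise/hypothesis pair 2", "neutral"),
    ("fiction", "premise/hypothesis pair 3", "contradiction"),
]
train, test = leave_one_domain_out(data, "travel")
```

The point of the split is that no example from the test domain leaks into training, so test accuracy measures generalization across domains rather than within one.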