To learn a semantic parser from denotations, a learning algorithm must search over a combinatorially large space of logical forms for ones consistent with the annotated denotations.
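A minimal sketch of what this search entails, assuming a hypothetical candidate enumerator `enumerate_logical_forms` and executor `execute` (stand-ins for a grammar-driven parser and a logical-form interpreter): keep only the candidates whose execution result matches the annotated denotation.

```python
# Sketch of consistency-based filtering when learning from denotations.
# `enumerate_logical_forms` and `execute` are hypothetical stand-ins for
# a grammar-driven candidate enumerator and a logical-form executor.

def consistent_logical_forms(utterance, context, denotation,
                             enumerate_logical_forms, execute):
    """Return candidate logical forms whose execution matches the denotation."""
    consistent = []
    for lf in enumerate_logical_forms(utterance, context):
        if execute(lf, context) == denotation:
            consistent.append(lf)  # candidate parse agrees with the label
    return consistent
```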
That is because long-form text matching settings usually contain a great deal of noise, and existing semantic text matching methods struggle to capture the key matching signals amid this noisy information.
Transformer-based NLP models contain hundreds of millions or even billions of parameters, which limits their applicability in computationally constrained environments.
Most conventional sentence similarity methods focus only on the similar parts of two input sentences and simply ignore the dissimilar parts, which often provide useful clues about the sentences' semantics.
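A minimal sketch of this decomposition idea, assuming pre-trained, unit-normalized word vectors supplied by a hypothetical `embed` function: split one sentence's tokens into "similar" and "dissimilar" parts by thresholding each token's best cosine match against the other sentence (the threshold value here is purely illustrative).

```python
import numpy as np

def decompose(tokens_a, tokens_b, embed, threshold=0.7):
    """Split tokens_a into parts similar vs. dissimilar to tokens_b.

    `embed` maps a token to a unit-normalized vector; `threshold` is an
    illustrative cutoff on the best cosine match in the other sentence.
    """
    vecs_b = np.stack([embed(t) for t in tokens_b])
    similar, dissimilar = [], []
    for tok in tokens_a:
        best = float(np.max(vecs_b @ embed(tok)))  # best cosine match in b
        (similar if best >= threshold else dissimilar).append(tok)
    return similar, dissimilar
```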
Over the past decade, large-scale supervised learning corpora have enabled machine learning researchers to make substantial advances.
Research on time-series similarity measures has emphasized the need for elastic methods that align the indices of pairs of time series, and a plethora of non-parametric measures have been proposed for the task.
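Dynamic time warping (DTW) is the canonical example of such an elastic measure: it aligns indices via dynamic programming rather than comparing points position by position. A minimal sketch for univariate series:

```python
import numpy as np

def dtw_distance(x, y):
    """Dynamic time warping: elastically align the indices of x and y."""
    n, m = len(x), len(y)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(x[i - 1] - y[j - 1])  # local distance between aligned points
            # extend the cheapest of match, insertion, or deletion
            cost[i, j] = d + min(cost[i - 1, j - 1],
                                 cost[i - 1, j],
                                 cost[i, j - 1])
    return cost[n, m]
```

Unlike the Euclidean distance, which compares `x[i]` only with `y[i]`, DTW lets one index map to several indices of the other series, so it tolerates local shifts and stretches in time.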
Modeling the structure of coherent texts is a key NLP problem.