Sentence Similarity

65 papers with code • 1 benchmark • 1 dataset

Sentence Similarity is the task of measuring how semantically similar two pieces of text are, typically by encoding each sentence into a vector and comparing the resulting embeddings, for example with cosine similarity.
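
A minimal sketch of the core computation, with a toy bag-of-hashed-words encoder standing in for a real sentence embedding model (the `embed` function here is a hypothetical illustration, not any particular system's encoder):

```python
import numpy as np

def embed(sentence: str, dim: int = 64) -> np.ndarray:
    """Toy sentence embedding: hash each word to a fixed random vector,
    then average. A stand-in for a trained sentence encoder. Note that
    Python's hash() is only stable within a single process."""
    vecs = []
    for word in sentence.lower().split():
        rng = np.random.default_rng(abs(hash(word)) % (2**32))
        vecs.append(rng.standard_normal(dim))
    return np.mean(vecs, axis=0)

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine of the angle between two embedding vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

s1 = "A man is playing a guitar."
s2 = "Someone plays an acoustic guitar."
s3 = "The stock market fell sharply."
print(cosine(embed(s1), embed(s2)))  # higher: the sentences share content
print(cosine(embed(s1), embed(s3)))  # lower: unrelated sentences
```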

Most implemented papers

Generating Factoid Questions With Recurrent Neural Networks: The 30M Factoid Question-Answer Corpus

imatge-upc/vqa-2016-cvprw ACL 2016

Over the past decade, large-scale supervised learning corpora have enabled machine learning researchers to make substantial advances.

IISCNLP at SemEval-2016 Task 2: Interpretable STS with ILP based Multiple Chunk Aligner

lavanyats/iMATCH 4 May 2016

Interpretable semantic textual similarity (iSTS) task adds a crucial explanatory layer to pairwise sentence similarity.
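
For intuition, a toy illustration of the alignment idea, using greedy matching over token overlap in place of the paper's ILP formulation (chunk boundaries are assumed to be given):

```python
def overlap(a: str, b: str) -> float:
    """Jaccard overlap between the token sets of two chunks."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb)

def align_chunks(chunks1, chunks2, threshold=0.2):
    """Greedily pair the most-overlapping chunks; a simple stand-in for
    the paper's ILP aligner, which optimizes all pairings jointly."""
    pairs = sorted(((overlap(a, b), a, b) for a in chunks1 for b in chunks2),
                   reverse=True)
    used1, used2, alignment = set(), set(), []
    for score, a, b in pairs:
        if score >= threshold and a not in used1 and b not in used2:
            alignment.append((a, b, round(score, 2)))
            used1.add(a)
            used2.add(b)
    return alignment

print(align_chunks(["the old man", "sat on a bench"],
                   ["an old man", "rested on the bench"]))
```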

Neural Paraphrase Generation with Stacked Residual LSTM Networks

pushpendughosh/Stock-market-forecasting COLING 2016

To the best of our knowledge, this work is the first to explore deep learning models for paraphrase generation.
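
A minimal PyTorch sketch of the architectural idea in the title, a stack of LSTM layers with residual connections between them; layer sizes and depth here are illustrative, not the paper's configuration:

```python
import torch
import torch.nn as nn

class StackedResidualLSTM(nn.Module):
    """Stack of LSTM layers where each layer's input is added to its
    output (a residual connection), easing optimization of deep stacks."""
    def __init__(self, hidden: int = 256, layers: int = 4):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.LSTM(hidden, hidden, batch_first=True) for _ in range(layers)
        )

    def forward(self, x):
        for lstm in self.layers:
            out, _ = lstm(x)
            x = x + out  # residual connection around each LSTM layer
        return x

model = StackedResidualLSTM()
tokens = torch.randn(2, 10, 256)  # (batch, sequence, hidden)
print(model(tokens).shape)        # torch.Size([2, 10, 256])
```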

Learning Neural Word Salience Scores

bollegala/repseval SEMEVAL 2018

Measuring the salience of a word is an essential step in numerous NLP tasks.
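
For intuition, a classic corpus-statistics proxy for word salience is TF-IDF; the paper instead learns salience scores with a neural model, but the sketch below shows the kind of per-word weighting involved:

```python
import math
from collections import Counter

docs = [
    "the cat sat on the mat".split(),
    "the dog chased the cat".split(),
    "the stocks fell on weak earnings".split(),
]

def tfidf_salience(doc, corpus):
    """Weight each word by term frequency times inverse document frequency."""
    tf = Counter(doc)
    n = len(corpus)
    scores = {}
    for word, count in tf.items():
        df = sum(1 for d in corpus if word in d)
        scores[word] = (count / len(doc)) * math.log(n / df)
    return scores

# 'the' scores 0 (it appears in every document); 'sat' and 'mat' score highest.
print(tfidf_salience(docs[0], docs))
```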

Transparent, Efficient, and Robust Word Embedding Access with WOMBAT

nlpAThits/WOMBAT COLING 2018

We present WOMBAT, a Python tool which supports NLP practitioners in accessing word embeddings from code.
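
WOMBAT's own API is not reproduced here; as a generic illustration of what "accessing word embeddings from code" involves, the sketch below loads GloVe-style text vectors into a dictionary (the file path and format are assumptions):

```python
import numpy as np

def load_vectors(path: str) -> dict[str, np.ndarray]:
    """Load a GloVe-style text file: one 'word v1 v2 ...' entry per line.
    (Generic illustration only; WOMBAT wraps this kind of access behind
    a uniform interface rather than ad-hoc file parsing.)"""
    vectors = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            word, *values = line.rstrip().split(" ")
            vectors[word] = np.asarray(values, dtype=np.float32)
    return vectors

# Hypothetical file path; any GloVe-format vector file works.
# vecs = load_vectors("glove.6B.50d.txt")
# print(vecs["similarity"][:5])
```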

Retrieval-Based Neural Code Generation

sweetpeach/ReCode EMNLP 2018

In models that generate program source code from natural language, representing the code as a tree structure has been a common approach.
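
The tree-structured view of source code is easy to see with Python's standard ast module, which parses a program into the kind of syntax tree such models generate node by node:

```python
import ast

# Parse a small program into its abstract syntax tree.
tree = ast.parse("total = price * quantity")
print(ast.dump(tree, indent=2))
# Module -> Assign -> (Name 'total', BinOp(Name 'price', Mult, Name 'quantity'))

# Walk the tree to list its node types.
for node in ast.walk(tree):
    print(type(node).__name__)
```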

Fixing Translation Divergences in Parallel Corpora for Neural MT

jmcrego/similarity EMNLP 2018

Corpus-based approaches to machine translation rely on the availability of clean parallel corpora.
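
The cleaning step reduces to scoring each sentence pair and discarding pairs below a similarity threshold; a sketch, with the scoring callable as a hypothetical stand-in for the paper's learned cross-lingual similarity model:

```python
def clean_corpus(pairs, similarity, threshold=0.3):
    """Keep only source/target pairs the similarity model judges parallel.
    `similarity` is a hypothetical callable: (src, tgt) -> score in [0, 1]."""
    return [(src, tgt) for src, tgt in pairs if similarity(src, tgt) >= threshold]

# Toy stand-in: fraction of shared lowercase tokens. A real system would
# use a learned cross-lingual similarity model over translated pairs.
def token_overlap(src, tgt):
    a, b = set(src.lower().split()), set(tgt.lower().split())
    return len(a & b) / max(len(a | b), 1)

pairs = [("the cat sleeps", "the cat sleeps soundly"),
         ("good morning", "completely unrelated sentence")]
print(clean_corpus(pairs, token_overlap))  # keeps only the first pair
```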

Evaluating Composition Models for Verb Phrase Elliptical Sentence Embeddings

gijswijnholds/compdisteval-ellipsis NAACL 2019

Our results show that non-linear addition and a non-linear tensor-based composition outperform the naive non-compositional baselines and the linear models, and that sentence encoders perform well on sentence similarity, but not on verb disambiguation.
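
A sketch of the two composition families compared, using NumPy: linear composition (plain vector addition) versus non-linear addition (a learned transform applied to the sum); the weight matrix here is random for illustration rather than trained:

```python
import numpy as np

rng = np.random.default_rng(42)
dim = 50
W = rng.standard_normal((dim, dim)) / np.sqrt(dim)  # illustrative weights

def linear_add(u, v):
    """Linear composition: the phrase vector is just the sum."""
    return u + v

def nonlinear_add(u, v):
    """Non-linear addition: squash a learned transform of the sum."""
    return np.tanh(W @ (u + v))

verb, obj = rng.standard_normal(dim), rng.standard_normal(dim)
print(linear_add(verb, obj)[:3])
print(nonlinear_add(verb, obj)[:3])
```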

Natural Language Generation for Effective Knowledge Distillation

castorini/d-bert WS 2019

Knowledge distillation can effectively transfer knowledge from BERT, a deep language representation model, to traditional, shallow word embedding-based neural networks, helping them approach or exceed the quality of other heavyweight language representation models.
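
A minimal PyTorch sketch of a standard distillation objective: the student matches the teacher's temperature-softened output distribution via KL divergence, blended with the usual hard-label loss (temperature and weighting are illustrative):

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    """Blend soft-target KL divergence (teacher -> student) with
    hard-label cross-entropy."""
    T = temperature
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

student = torch.randn(4, 3, requires_grad=True)  # e.g. a shallow student's logits
teacher = torch.randn(4, 3)                      # frozen teacher (BERT) logits
labels = torch.tensor([0, 2, 1, 0])
print(distillation_loss(student, teacher, labels))
```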

A Divide-and-Conquer Approach to the Summarization of Long Documents

AlexGidiotis/DANCER-summ 13 Apr 2020

With this approach, we can decompose long-document summarization into smaller, simpler problems, reducing computational complexity and creating more training examples, which at the same time contain less noise in the target summaries than the standard approach produces.
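
A sketch of the decomposition step: split the document into sections and pair each target summary sentence with its most similar section, yielding one small training example per section (token overlap stands in here for the paper's ROUGE-based matching):

```python
def overlap(a: str, b: str) -> float:
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / max(len(sa | sb), 1)

def decompose(sections, summary_sentences):
    """Assign each summary sentence to its most similar section, producing
    (section, partial-summary) training pairs for a short-input summarizer."""
    targets = {i: [] for i in range(len(sections))}
    for sent in summary_sentences:
        best = max(range(len(sections)), key=lambda i: overlap(sections[i], sent))
        targets[best].append(sent)
    return [(sections[i], " ".join(ts)) for i, ts in targets.items() if ts]

sections = ["we introduce a model for long documents",
            "experiments show strong rouge improvements"]
summary = ["a new long document model is introduced",
           "results improve rouge scores"]
print(decompose(sections, summary))
```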