Text Similarity
59 papers with code • 2 benchmarks • 4 datasets
Benchmarks
These leaderboards are used to track progress in text similarity.
Most implemented papers
Stacked Cross Attention for Image-Text Matching
Prior work either simply aggregates the similarity of all possible pairs of regions and words without attending differentially to more and less important words or regions, or uses a multi-step attentional process to capture a limited number of semantic alignments, which is less interpretable.
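A minimal NumPy sketch of the general idea behind stacked cross attention, not the paper's exact formulation (the temperature value and averaging step are assumptions): each word attends over image regions, and the per-word cosine similarities to the attended region vectors are aggregated into one image-sentence relevance score.

```python
import numpy as np

def l2_normalize(x, axis=-1, eps=1e-8):
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def image_text_relevance(regions, words, temperature=9.0):
    """regions: (k, d) region features; words: (n, d) word features."""
    regions, words = l2_normalize(regions), l2_normalize(words)
    sim = words @ regions.T                     # (n, k) word-region cosine similarities
    attn = np.exp(temperature * sim)
    attn /= attn.sum(axis=1, keepdims=True)     # softmax over regions for each word
    attended = attn @ regions                   # (n, d) attended image vector per word
    per_word = np.sum(l2_normalize(attended) * words, axis=1)  # cosine per word
    return per_word.mean()                      # aggregate to a sentence-image score

# toy usage with random features
rng = np.random.default_rng(0)
print(image_text_relevance(rng.normal(size=(36, 256)), rng.normal(size=(12, 256))))
```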
Query-based Attention CNN for Text Similarity Map
This network is composed of a compare mechanism, a two-stage CNN architecture with an attention mechanism, and a prediction layer.
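A hedged sketch of that overall shape, with layer sizes and names chosen by assumption and global average pooling standing in for the attention mechanism: a word-by-word cosine similarity map between two texts is treated as a single-channel image, passed through a small two-stage CNN, and reduced to a similarity score by a prediction layer.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimilarityMapCNN(nn.Module):
    def __init__(self, channels=16):
        super().__init__()
        self.stage1 = nn.Conv2d(1, channels, kernel_size=3, padding=1)
        self.stage2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.head = nn.Linear(channels, 1)      # prediction layer

    def forward(self, query_emb, doc_emb):
        # query_emb: (n, d), doc_emb: (m, d) word embeddings
        sim_map = F.cosine_similarity(query_emb.unsqueeze(1), doc_emb.unsqueeze(0), dim=-1)
        x = sim_map[None, None]                 # (1, 1, n, m) similarity "image"
        x = F.relu(self.stage1(x))
        x = F.relu(self.stage2(x))
        x = x.mean(dim=(2, 3))                  # pooling in place of attention (assumption)
        return torch.sigmoid(self.head(x))      # similarity score in [0, 1]

# toy usage
model = SimilarityMapCNN()
print(model(torch.randn(7, 300), torch.randn(12, 300)).item())
```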
Matching Images and Text with Multi-modal Tensor Fusion and Re-ranking
We propose a novel framework that achieves remarkable matching performance with acceptable model complexity.
HHH: An Online Medical Chatbot System based on Knowledge Graph and Hierarchical Bi-Directional Attention
This paper proposes a chatbot framework that adopts a hybrid model which consists of a knowledge graph and a text similarity model.
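A toy illustration of the hybrid idea, with invented data, a bag-of-words cosine similarity in place of the paper's model, and an arbitrary threshold: answer from a small knowledge graph when an entity and relation match the question, otherwise fall back to ranking known questions by text similarity.

```python
from collections import Counter
import math

knowledge_graph = {("flu", "symptom"): "Fever, cough, and fatigue are common flu symptoms."}
faq = {"How long does a cold last?": "Most colds resolve within 7 to 10 days."}

def cosine(a, b):
    # bag-of-words cosine similarity as a stand-in for a learned text similarity model
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ca[t] * cb[t] for t in ca)
    norm = math.sqrt(sum(v * v for v in ca.values())) * math.sqrt(sum(v * v for v in cb.values()))
    return dot / norm if norm else 0.0

def answer(question):
    for (entity, relation), fact in knowledge_graph.items():
        if entity in question.lower() and relation in question.lower():
            return fact                          # knowledge-graph hit
    best_q = max(faq, key=lambda q: cosine(q, question))
    if cosine(best_q, question) > 0.3:           # text-similarity fallback (threshold assumed)
        return faq[best_q]
    return "Sorry, I don't know the answer to that."

print(answer("What are the symptom of flu?"))
print(answer("How long will my cold last?"))
```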
Effective Crowd-Annotation of Participants, Interventions, and Outcomes in the Text of Clinical Trial Reports
Obtaining such a corpus from crowdworkers, however, has been shown to be ineffective since (i) workers usually lack the domain-specific expertise to conduct the task with sufficient quality, and (ii) the standard approach of annotating entire abstracts of trial reports as one task instance (i.e., HIT) leads to an uneven distribution of task effort.
ESimCSE: Enhanced Sample Building Method for Contrastive Learning of Unsupervised Sentence Embedding
Unsup-SimCSE takes dropout as a minimal data augmentation method: it passes the same input sentence through a pre-trained Transformer encoder (with dropout turned on) twice to obtain two embeddings that form a positive pair.
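A minimal sketch of this dropout-as-augmentation idea, where a toy MLP stands in for the pre-trained Transformer and the temperature is an assumption: the same batch is encoded twice with dropout active, each input's two outputs form a positive pair, and the diagonal of the similarity matrix is trained with an InfoNCE-style cross-entropy loss.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# toy encoder standing in for a pre-trained Transformer
encoder = nn.Sequential(nn.Linear(768, 768), nn.Dropout(0.1), nn.ReLU(), nn.Linear(768, 256))
encoder.train()                                  # keep dropout active for both passes

def simcse_loss(batch, temperature=0.05):
    z1 = F.normalize(encoder(batch), dim=-1)     # first pass
    z2 = F.normalize(encoder(batch), dim=-1)     # second pass: different dropout mask
    logits = z1 @ z2.T / temperature             # (B, B) cosine similarity matrix
    labels = torch.arange(batch.size(0))         # positives sit on the diagonal
    return F.cross_entropy(logits, labels)

loss = simcse_loss(torch.randn(32, 768))         # toy batch of "sentence" features
loss.backward()
```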
Smoothed Contrastive Learning for Unsupervised Sentence Embedding
Contrastive learning has gradually been applied to learning high-quality unsupervised sentence embeddings.
InfoCSE: Information-aggregated Contrastive Learning of Sentence Embeddings
Contrastive learning has been extensively studied for sentence embedding learning, under the assumption that embeddings of different views of the same sentence should be closer to each other.
CAT-Seg: Cost Aggregation for Open-Vocabulary Semantic Segmentation
However, transferring capabilities learned from image-level supervision to the pixel-level task of segmentation, while handling arbitrary unseen categories at inference, makes this task challenging.