DeCLUTR is an approach for learning universal sentence embeddings that uses a self-supervised objective requiring no labelled training data. The objective trains an encoder to minimize the distance between the embeddings of textual segments randomly sampled from nearby positions in the same document.
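The objective above is contrastive: segments sampled from nearby in the same document form positive pairs, while segments from other documents in the batch act as negatives. A minimal NumPy sketch of an InfoNCE-style loss of this kind is shown below; the function name, temperature value, and toy data are illustrative assumptions, not DeCLUTR's actual implementation.

```python
import numpy as np

def info_nce_loss(anchors, positives, temperature=0.05):
    """InfoNCE-style contrastive loss.

    Each anchor's positive is the embedding of a segment sampled from
    nearby in the same document; the other positives in the batch serve
    as in-batch negatives. (Illustrative sketch, not DeCLUTR's code.)
    """
    # L2-normalize so dot products are cosine similarities
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature                     # (batch, batch) similarities
    logits -= logits.max(axis=1, keepdims=True)        # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # The matching (diagonal) pair is the "correct class" for each anchor
    return -np.mean(np.diag(log_probs))

# Toy usage: embeddings of nearby segments should be close, so the loss
# for matched pairs is lower than for random, mismatched pairs.
rng = np.random.default_rng(0)
batch, dim = 4, 8
anchors = rng.normal(size=(batch, dim))
positives = anchors + 0.1 * rng.normal(size=(batch, dim))
mismatched = rng.normal(size=(batch, dim))
print(info_nce_loss(anchors, positives), info_nce_loss(anchors, mismatched))
```

Minimizing this loss pulls each anchor toward its nearby-segment positive while pushing it away from segments drawn from other documents, which is how the encoder learns embeddings without labels.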
Source: DeCLUTR: Deep Contrastive Learning for Unsupervised Textual Representations
Task | Papers | Share |
---|---|---|
Sentence | 3 | 21.43% |
Clustering | 2 | 14.29% |
Sentence Embeddings | 2 | 14.29% |
Zero-Shot Learning | 1 | 7.14% |
Classification | 1 | 7.14% |
General Classification | 1 | 7.14% |
Sentence Embedding | 1 | 7.14% |
Text Classification | 1 | 7.14% |
Metric Learning | 1 | 7.14% |
Component | Type |
---|---|
🤖 No Components Found | |