COLA is a self-supervised pre-training approach for learning a general-purpose representation of audio. It is based on contrastive learning: it learns a representation which assigns high similarity to audio segments extracted from the same recording while assigning lower similarity to segments from different recordings.
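The segment-level contrastive objective described above can be sketched as follows. This is an illustrative simplification, not the paper's exact implementation: a plain dot product stands in for COLA's learnable bilinear similarity, and the `anchors`/`positives` arrays are assumed to hold embeddings of two segments cut from the same recording, with all cross-pairings in the batch serving as negatives.

```python
import numpy as np

def cola_style_contrastive_loss(anchors, positives):
    """Contrastive loss over a batch: anchors[i] and positives[i] embed two
    segments from the same recording; every other pairing acts as a negative.
    Sketch only: dot-product similarity replaces the paper's bilinear form."""
    # similarity[i, j] = similarity of anchor i with positive candidate j; (B, B)
    similarity = anchors @ positives.T
    # Softmax cross-entropy with the matching segment (the diagonal) as target.
    logits = similarity - similarity.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

rng = np.random.default_rng(0)
B, D = 8, 16
anchors = rng.normal(size=(B, D))
# Toy "same recording" positives: anchors plus small noise.
positives = anchors + 0.1 * rng.normal(size=(B, D))
loss = cola_style_contrastive_loss(anchors, positives)
```

Because each anchor is most similar to its own positive here, the diagonal dominates the softmax and the loss is small; shuffling `positives` across recordings would raise it.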
Source: *Contrastive Learning of General-Purpose Audio Representations*
Task | Papers | Share
---|---|---
Linguistic Acceptability | 3 | 6.67%
Language Modelling | 2 | 4.44%
Sentence | 2 | 4.44%
Autonomous Driving | 2 | 4.44%
Semantic Segmentation | 2 | 4.44%
Self-Supervised Learning | 2 | 4.44%
Topological Data Analysis | 2 | 4.44%
Large Language Model | 1 | 2.22%
Semantic Similarity | 1 | 2.22%