OSCAR is a pre-training method that uses object tags detected in images as anchor points to ease the learning of image-text alignment. The model takes (word, tag, region) triples as input and is pre-trained with two losses: a masked token loss over the words and tags, and a contrastive loss that distinguishes the original tags from randomly swapped ones. OSCAR represents an image-text pair in a shared semantic space, with the object tags serving as anchor points that align image regions with the word embeddings of a pre-trained language model. The model is then fine-tuned for both understanding and generation tasks.
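A minimal sketch of how the two pre-training signals could be set up. This is not the official OSCAR code: the helper names (`build_triple`, `mask_text`, `pollute_tags`), the toy region vectors, and the simplified masking rule (no 10% random-token / 10% keep split) are all assumptions for illustration.

```python
import random

MASK, CLS, SEP = "[MASK]", "[CLS]", "[SEP]"

def build_triple(caption_tokens, object_tags, region_features):
    """Concatenate words, tags, and region features into one input.

    Region features are never masked; only the text side (words + tags)
    participates in the masked token loss.
    """
    text = [CLS] + caption_tokens + [SEP] + object_tags + [SEP]
    return text, region_features

def mask_text(tokens, mask_prob=0.15, rng=random):
    """BERT-style masking (simplified): replace ~mask_prob of the
    maskable tokens with [MASK] and record the original as the label."""
    masked, labels = [], []
    for tok in tokens:
        if tok not in (CLS, SEP) and rng.random() < mask_prob:
            masked.append(MASK)
            labels.append(tok)      # target for the masked token loss
        else:
            masked.append(tok)
            labels.append(None)     # no loss on unmasked positions
    return masked, labels

def pollute_tags(tags, tag_pool, replace_prob=0.5, rng=random):
    """For the contrastive loss: with probability replace_prob, swap the
    tag sequence for tags from another image. The model is trained to
    predict the label (1 = tags match the word/region pair, 0 = polluted)."""
    if rng.random() < replace_prob:
        return rng.choice(tag_pool), 0
    return tags, 1

# Toy example: a caption, its detected tags, and fake region vectors.
text, regions = build_triple(
    ["a", "dog", "on", "a", "couch"],
    ["dog", "couch"],
    [[0.1] * 4, [0.2] * 4],
)
masked, labels = mask_text(text, mask_prob=0.3, rng=random.Random(0))
tags, match_label = pollute_tags(["dog", "couch"], [["cat", "tree"]],
                                 replace_prob=1.0, rng=random.Random(1))
```

During fine-tuning, the same triple input is reused while the pre-training heads are replaced with task-specific ones (e.g. answer classification for VQA, autoregressive decoding for captioning).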
Source: Oscar: Object-Semantics Aligned Pre-training for Vision-Language Tasks
Task | Papers | Share |
---|---|---|
Language Modeling | 5 | 10.64% |
Language Modelling | 5 | 10.64% |
Image Captioning | 3 | 6.38% |
Visual Question Answering (VQA) | 3 | 6.38% |
Benchmarking | 2 | 4.26% |
Question Answering | 2 | 4.26% |
NER | 2 | 4.26% |
Large Language Model | 1 | 2.13% |
Navigate | 1 | 2.13% |