Vision and Language Pre-Trained Models

In this work, we present a conceptually simple and effective method for training a strong bilingual multimodal representation model. Starting from the pretrained multimodal representation model CLIP released by OpenAI, we replaced its text encoder with a pretrained multilingual text encoder, XLM-R, and aligned language and image representations through a two-stage training schema consisting of teacher learning and contrastive learning. We validate our method through evaluations on a wide range of tasks. We set new state-of-the-art performance on several tasks, including ImageNet-CN, Flickr30k-CN, and COCO-CN. Furthermore, we obtain performance very close to CLIP on almost all tasks, suggesting that one can simply alter the text encoder in CLIP for extended capabilities such as multilingual understanding. Our models and code are available at https://github.com/FlagAI-Open/FlagAI.

Source: AltCLIP: Altering the Language Encoder in CLIP for Extended Language Capabilities
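Below is a minimal sketch of the two-stage training schema described in the abstract, assuming Hugging Face `transformers` checkpoints for CLIP and XLM-R. The model names, pooling choice, projection head, loss weighting, and fixed temperature are illustrative assumptions, not the authors' exact configuration; see the FlagAI repository for the official implementation.

```python
# Illustrative sketch of AltCLIP-style training (assumptions noted in comments).
import torch
import torch.nn as nn
import torch.nn.functional as F
from transformers import CLIPModel, CLIPProcessor, XLMRobertaModel, XLMRobertaTokenizer

clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")        # frozen teacher / image tower
clip_proc = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
xlmr = XLMRobertaModel.from_pretrained("xlm-roberta-base")              # new multilingual text tower
xlmr_tok = XLMRobertaTokenizer.from_pretrained("xlm-roberta-base")

for p in clip.parameters():          # CLIP stays frozen in this sketch
    p.requires_grad_(False)

# Project XLM-R's hidden size into CLIP's text embedding space.
text_proj = nn.Linear(xlmr.config.hidden_size, clip.config.projection_dim)
optimizer = torch.optim.AdamW(
    list(xlmr.parameters()) + list(text_proj.parameters()), lr=1e-5
)

def encode_multilingual_text(texts):
    """Encode texts with XLM-R and project into CLIP's embedding space."""
    batch = xlmr_tok(texts, padding=True, truncation=True, return_tensors="pt")
    hidden = xlmr(**batch).last_hidden_state[:, 0]   # first-token pooling (assumption)
    return text_proj(hidden)

# ---- Stage 1: teacher learning on parallel text ----
# The frozen CLIP text encoder (English) is the teacher; XLM-R + projection is the student.
def teacher_learning_step(en_texts, zh_texts):
    with torch.no_grad():
        t = clip_proc(text=en_texts, return_tensors="pt", padding=True, truncation=True)
        teacher = clip.get_text_features(**t)
    student_en = encode_multilingual_text(en_texts)
    student_zh = encode_multilingual_text(zh_texts)
    # Pull both the English and the parallel Chinese student embeddings
    # toward the English teacher embedding.
    loss = F.mse_loss(student_en, teacher) + F.mse_loss(student_zh, teacher)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# ---- Stage 2: contrastive learning on image-text pairs ----
# CLIP-style symmetric InfoNCE between the frozen image encoder and the new text tower.
def contrastive_step(images, texts, temperature=0.07):
    with torch.no_grad():
        pix = clip_proc(images=images, return_tensors="pt")
        img = F.normalize(clip.get_image_features(**pix), dim=-1)
    txt = F.normalize(encode_multilingual_text(texts), dim=-1)
    logits = txt @ img.t() / temperature      # fixed temperature for simplicity
    labels = torch.arange(len(texts))
    loss = (F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels)) / 2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this sketch, only the XLM-R encoder and the projection head receive gradients in both stages, with the CLIP image encoder kept frozen throughout; after training, the multilingual text tower can stand in for CLIP's original text encoder at inference time.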


Tasks


Task                       Papers   Share
Image Classification            2   16.67%
Language Modelling              1    8.33%
Blocking                        1    8.33%
Concept Alignment               1    8.33%
Cross-Modal Retrieval           1    8.33%
Image Retrieval                 1    8.33%
Image-to-Text Retrieval         1    8.33%
Text-to-Image Generation        1    8.33%
XLM-R                           1    8.33%

