Pathology Language and Image Pre-Training (PLIP) is a vision-and-language foundation model created by fine-tuning CLIP on pathology image–text pairs collected from medical Twitter.
Source: Leveraging medical Twitter to build a visual–language foundation model for pathology AI
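As a CLIP-style model, PLIP is typically applied to zero-shot tasks by comparing an image embedding against embeddings of candidate text prompts. The sketch below is a minimal illustration of that pattern, assuming the publicly released checkpoint name `vinid/plip` on the Hugging Face Hub loads with the standard `transformers` CLIP classes; the image path and label prompts are hypothetical.

```python
# Minimal zero-shot classification sketch for a CLIP-style pathology model.
# Assumptions: the "vinid/plip" checkpoint works with CLIPModel/CLIPProcessor,
# and "patch.png" is a local pathology image patch (hypothetical path).
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("vinid/plip")
processor = CLIPProcessor.from_pretrained("vinid/plip")

image = Image.open("patch.png")
labels = ["an H&E image of benign tissue", "an H&E image of tumor tissue"]

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# CLIP-style image–text similarity logits, softmaxed into probabilities over the prompts.
probs = outputs.logits_per_image.softmax(dim=-1)
print({label: float(p) for label, p in zip(labels, probs[0])})
```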
| Task | Papers | Share |
|---|---|---|
| Image Classification | 2 | 15.38% |
| Language Modeling | 1 | 7.69% |
| Language Modelling | 1 | 7.69% |
| Benchmarking | 1 | 7.69% |
| Decision Making | 1 | 7.69% |
| Image Retrieval | 1 | 7.69% |
| Retrieval | 1 | 7.69% |
| Zero-Shot Learning | 1 | 7.69% |
| Pedestrian Attribute Recognition | 1 | 7.69% |