Search Results for author: Haoxuan Ding

Found 3 papers, 1 paper with code

SamLP: A Customized Segment Anything Model for License Plate Detection

1 code implementation • 12 Jan 2024 • Haoxuan Ding, Junyu Gao, Yuan Yuan, Qi Wang

Meanwhile, the proposed SamLP has strong few-shot and zero-shot learning ability, which shows the potential of transferring vision foundation models.

License Plate Detection • Zero-Shot Learning

The CLIP Model is Secretly an Image-to-Prompt Converter

no code implementations • NeurIPS 2023 • Yuxuan Ding, Chunna Tian, Haoxuan Ding, Lingqiao Liu

The Stable Diffusion model is a prominent text-to-image generation model that relies on a text prompt as its input, which is encoded using the Contrastive Language-Image Pre-Training (CLIP) model.

Image-Variation • Text-to-Image Generation

Don't Stop Learning: Towards Continual Learning for the CLIP Model

no code implementations • 19 Jul 2022 • Yuxuan Ding, Lingqiao Liu, Chunna Tian, Jingyuan Yang, Haoxuan Ding

The Contrastive Language-Image Pre-training (CLIP) model is a recently proposed large-scale pre-trained model that has attracted increasing attention in the computer vision community.

Continual Learning • Image-Text Matching • +2
