Search Results for author: Shan Ning

Found 2 papers, 2 papers with code

Mining Fine-Grained Image-Text Alignment for Zero-Shot Captioning via Text-Only Training

1 code implementation · 4 Jan 2024 · Longtian Qiu, Shan Ning, Xuming He

First, we observe that CLIP's visual features of image subregions can achieve closer proximity to the paired caption, owing to the inherent information loss in text descriptions.

Descriptive Image Captioning +1
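The observation above can be illustrated with a minimal sketch: given a caption embedding and several subregion embeddings, pick the subregion whose feature lies closest to the caption. The embeddings here are mock random vectors standing in for real CLIP encoder outputs (512 dimensions, as in ViT-B/32); the function names are illustrative, not from the paper's code.

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity between each row of `a` and vector `b`."""
    a = a / np.linalg.norm(a, axis=-1, keepdims=True)
    b = b / np.linalg.norm(b)
    return a @ b

# Mock embeddings standing in for CLIP outputs.
rng = np.random.default_rng(0)
caption_emb = rng.standard_normal(512)        # text-encoder output for the paired caption
region_embs = rng.standard_normal((5, 512))   # visual features of 5 image subregions

# Select the subregion whose visual feature is nearest to the caption,
# mirroring the paper's observation about subregion-caption proximity.
sims = cosine_sim(region_embs, caption_emb)
best_region = int(np.argmax(sims))
```

In the actual method the comparison would use real CLIP image and text encoders; this sketch only shows the proximity computation itself.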

HOICLIP: Efficient Knowledge Transfer for HOI Detection with Vision-Language Models

1 code implementation · CVPR 2023 · Shan Ning, Longtian Qiu, Yongfei Liu, Xuming He

In detail, we first introduce a novel interaction decoder that extracts informative regions from CLIP's visual feature map via a cross-attention mechanism; these regions are then fused with the detection backbone through a knowledge integration block for more accurate human-object pair detection.

Decoder Human-Object Interaction Detection +3
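The cross-attention step described in the abstract can be sketched as follows: learned interaction queries attend over a flattened CLIP visual feature map. This is a generic single-head cross-attention in NumPy, not the paper's implementation; the projection weights are random placeholders for learned parameters, and all dimensions are assumed for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, feature_map, d_k=64, seed=0):
    """Single-head cross-attention: queries attend over a flattened
    visual feature map (num_patches, d_model) to pool informative regions."""
    rng = np.random.default_rng(seed)
    d_model = queries.shape[-1]
    # Placeholder projections; in a real model these are learned.
    W_q = rng.standard_normal((d_model, d_k)) / np.sqrt(d_model)
    W_k = rng.standard_normal((d_model, d_k)) / np.sqrt(d_model)
    W_v = rng.standard_normal((d_model, d_model)) / np.sqrt(d_model)
    Q = queries @ W_q
    K = feature_map @ W_k
    V = feature_map @ W_v
    attn = softmax(Q @ K.T / np.sqrt(d_k))   # (num_queries, num_patches)
    return attn @ V, attn

# 10 interaction queries attending over a 7x7 feature map (49 patches).
queries = np.random.default_rng(1).standard_normal((10, 256))
fmap = np.random.default_rng(2).standard_normal((49, 256))
out, attn = cross_attention(queries, fmap)
```

Each query produces an attention distribution over the patch grid, so the output rows are patch-weighted pooled features; a subsequent fusion module (the paper's knowledge integration block) would combine these with detection-backbone features.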
