Search Results for author: Xiang An

Found 15 papers, 10 papers with code

ORID: Organ-Regional Information Driven Framework for Radiology Report Generation

no code implementations20 Nov 2024 Tiancheng Gu, Kaicheng Yang, Xiang An, Ziyong Feng, Dongnan Liu, Weidong Cai

To advance these approaches, this paper introduces an Organ-Regional Information Driven (ORID) framework which can effectively integrate multi-modal information and reduce the influence of noise from unrelated organs.

Decoder Graph Neural Network

Croc: Pretraining Large Multimodal Models with Cross-Modal Comprehension

1 code implementation18 Oct 2024 Yin Xie, Kaicheng Yang, Ninghua Yang, Weimo Deng, Xiangzi Dai, Tiancheng Gu, Yumeng Wang, Xiang An, Yongle Zhao, Ziyong Feng, Roy Miles, Ismail Elezi, Jiankang Deng

Then, we conceptualize visual tokens as analogous to a "foreign language" for the LLMs and propose a mixed attention mechanism with bidirectional visual attention and unidirectional textual attention to comprehensively enhance the understanding of visual tokens.

Caption Generation
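To make the mixed attention mechanism in the Croc entry above concrete, the sketch below builds a boolean attention mask in which visual tokens attend to each other bidirectionally while text tokens attend causally to earlier text and to all visual tokens. The token ordering (visual tokens first) and the helper name `mixed_attention_mask` are illustrative assumptions, not the paper's implementation.

```python
import torch

def mixed_attention_mask(num_visual: int, num_text: int) -> torch.Tensor:
    """Build a boolean attention mask where True means 'may attend'.

    Visual tokens (assumed to come first in the sequence) attend to each
    other bidirectionally; text tokens attend to all visual tokens and
    causally to previous text tokens only.
    """
    n = num_visual + num_text
    mask = torch.zeros(n, n, dtype=torch.bool)
    # Visual block: full bidirectional attention among visual tokens.
    mask[:num_visual, :num_visual] = True
    # Text rows: attend to every visual token ...
    mask[num_visual:, :num_visual] = True
    # ... and causally within the text block (lower-triangular).
    mask[num_visual:, num_visual:] = torch.tril(
        torch.ones(num_text, num_text)).bool()
    return mask

# Example: 4 visual tokens followed by 3 text tokens.
print(mixed_attention_mask(4, 3).int())
```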

CLIP-CID: Efficient CLIP Distillation via Cluster-Instance Discrimination

no code implementations18 Aug 2024 Kaicheng Yang, Tiancheng Gu, Xiang An, Haiqiang Jiang, Xiangzi Dai, Ziyong Feng, Weidong Cai, Jiankang Deng

In this paper, we introduce CLIP-CID, a novel distillation mechanism that effectively transfers knowledge from a large vision-language foundation model to a smaller model.

Knowledge Distillation Transfer Learning +1
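As a rough picture of transferring knowledge from a large vision-language model to a smaller one, the sketch below aligns L2-normalized student features with frozen teacher features via a cosine loss. This is only a generic feature-distillation stand-in under assumed matching embedding sizes; it is not CLIP-CID's cluster-instance discrimination objective.

```python
import torch
import torch.nn.functional as F

def feature_distillation_loss(student_emb: torch.Tensor,
                              teacher_emb: torch.Tensor) -> torch.Tensor:
    """Pull L2-normalized student embeddings toward the frozen teacher's.

    A generic objective for compressing a large vision-language encoder
    into a smaller one; a full method would add further structure.
    """
    s = F.normalize(student_emb, dim=-1)
    t = F.normalize(teacher_emb.detach(), dim=-1)
    return (1.0 - (s * t).sum(dim=-1)).mean()

# Example with random features: batch of 8, both embeddings 256-d.
loss = feature_distillation_loss(torch.randn(8, 256), torch.randn(8, 256))
print(loss.item())
```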

VAR-CLIP: Text-to-Image Generator with Visual Auto-Regressive Modeling

1 code implementation2 Aug 2024 Qian Zhang, Xiangzi Dai, Ninghua Yang, Xiang An, Ziyong Feng, Xingyu Ren

However, the original VAR model is constrained to class-conditioned synthesis, relying solely on class labels rather than free-form textual captions for guidance.

Image Generation

Multi-label Cluster Discrimination for Visual Representation Learning

1 code implementation24 Jul 2024 Xiang An, Kaicheng Yang, Xiangzi Dai, Ziyong Feng, Jiankang Deng

In this paper, we propose a novel Multi-Label Cluster Discrimination method named MLCD to enhance representation learning.

Ranked #1 on Referring Expression Segmentation on RefCOCOg-val (using extra training data)

Contrastive Learning Image-text Retrieval +9
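One way to read "multi-label cluster discrimination" is to score each embedding against class-wise cluster prototypes and supervise it with multi-hot cluster targets, so a single image can belong to several clusters. The sketch below uses a plain multi-label binary cross-entropy as a stand-in; the prototype matrix, scale factor, and loss form are assumptions rather than MLCD's exact formulation.

```python
import torch
import torch.nn.functional as F

def multi_label_cluster_loss(embeddings: torch.Tensor,
                             prototypes: torch.Tensor,
                             multi_hot_labels: torch.Tensor,
                             scale: float = 20.0) -> torch.Tensor:
    """Score L2-normalized embeddings against cluster prototypes and apply
    a multi-label objective, allowing multiple positive clusters per image."""
    emb = F.normalize(embeddings, dim=-1)
    proto = F.normalize(prototypes, dim=-1)
    logits = scale * emb @ proto.t()               # (batch, num_clusters)
    return F.binary_cross_entropy_with_logits(logits, multi_hot_labels.float())

# Example: 4 images, 128-d embeddings, 10 clusters, up to 3 positives each.
emb = torch.randn(4, 128)
proto = torch.randn(10, 128)
labels = torch.zeros(4, 10)
labels[torch.arange(4).repeat_interleave(3), torch.randint(0, 10, (12,))] = 1.0
print(multi_label_cluster_loss(emb, proto, labels).item())
```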

High-Fidelity Facial Albedo Estimation via Texture Quantization

no code implementations19 Jun 2024 Zimin Ran, Xingyu Ren, Xiang An, Kaicheng Yang, Xiangzi Dai, Ziyong Feng, Jia Guo, Linchao Zhu, Jiankang Deng

In this paper, we present a novel facial albedo reconstruction model, HiFiAlbedo, which recovers the albedo map directly from a single image without the need for captured albedo data.

3D Face Reconstruction Quantization

RWKV-CLIP: A Robust Vision-Language Representation Learner

2 code implementations11 Jun 2024 Tiancheng Gu, Kaicheng Yang, Xiang An, Ziyong Feng, Dongnan Liu, Weidong Cai, Jiankang Deng

Contrastive Language-Image Pre-training (CLIP) has significantly improved performance in various vision-language tasks by expanding the dataset with image-text pairs obtained from websites.

Image-text Retrieval Representation Learning +2
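For readers unfamiliar with the contrastive objective referred to in the RWKV-CLIP entry above, the sketch below shows the standard symmetric image-text InfoNCE loss used in CLIP-style pre-training. It illustrates the generic formulation (with an assumed temperature of 0.07), not anything specific to the RWKV-based model.

```python
import torch
import torch.nn.functional as F

def clip_contrastive_loss(image_emb: torch.Tensor,
                          text_emb: torch.Tensor,
                          temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE: matched image-text pairs are positives,
    all other pairings in the batch serve as negatives."""
    img = F.normalize(image_emb, dim=-1)
    txt = F.normalize(text_emb, dim=-1)
    logits = img @ txt.t() / temperature
    targets = torch.arange(logits.size(0), device=logits.device)
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

# Example: batch of 8 image and text embeddings, 512-d each.
print(clip_contrastive_loss(torch.randn(8, 512), torch.randn(8, 512)).item())
```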

Plug-and-Play Grounding of Reasoning in Multimodal Large Language Models

no code implementations28 Mar 2024 Jiaxing Chen, Yuxuan Liu, Dehu Li, Xiang An, Weimo Deng, Ziyong Feng, Yongle Zhao, Yin Xie

P2G utilizes the tool-usage potential of MLLMs to employ expert agents for on-the-fly grounding of reasoning into critical visual and textual elements in images, thereby enabling deliberate reasoning through multimodal prompting.

Instruction Following Visual Reasoning

IDAdapter: Learning Mixed Features for Tuning-Free Personalization of Text-to-Image Models

no code implementations20 Mar 2024 Siying Cui, Jia Guo, Xiang An, Jiankang Deng, Yongle Zhao, Xinyu Wei, Ziyong Feng

Leveraging Stable Diffusion for the generation of personalized portraits has emerged as a powerful and noteworthy tool, enabling users to create high-fidelity, custom character avatars based on their specific prompts.

Diversity Image Generation +1

ALIP: Adaptive Language-Image Pre-training with Synthetic Caption

1 code implementation ICCV 2023 Kaicheng Yang, Jiankang Deng, Xiang An, Jiawei Li, Ziyong Feng, Jia Guo, Jing Yang, Tongliang Liu

However, the presence of intrinsic noise and unmatched image-text pairs in web data can potentially affect the performance of representation learning.

Image-text Retrieval Representation Learning +1

Unicom: Universal and Compact Representation Learning for Image Retrieval

3 code implementations12 Apr 2023 Xiang An, Jiankang Deng, Kaicheng Yang, Jiawei Li, Ziyong Feng, Jia Guo, Jing Yang, Tongliang Liu

To further enhance the low-dimensional feature representation, we randomly select partial feature dimensions when calculating the similarities between embeddings and class-wise prototypes.

Image Retrieval Metric Learning +4
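The random dimension selection described in the Unicom entry above can be pictured as keeping a random subset of feature coordinates before computing cosine similarities between embeddings and class-wise prototypes. The sketch below is an illustrative reading of that sentence, with the keep ratio and normalization details assumed.

```python
import torch
import torch.nn.functional as F

def partial_dim_similarity(embeddings: torch.Tensor,
                           prototypes: torch.Tensor,
                           keep_ratio: float = 0.5) -> torch.Tensor:
    """Randomly keep a subset of feature dimensions, then compute cosine
    similarities between embeddings and class-wise prototypes."""
    dim = embeddings.size(-1)
    keep = torch.randperm(dim)[: int(dim * keep_ratio)]
    emb = F.normalize(embeddings[:, keep], dim=-1)
    proto = F.normalize(prototypes[:, keep], dim=-1)
    return emb @ proto.t()   # (batch, num_classes) similarities

# Example: 4 embeddings, 20 class prototypes, 256-d features, keep half the dims.
sims = partial_dim_similarity(torch.randn(4, 256), torch.randn(20, 256))
print(sims.shape)
```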

Killing Two Birds with One Stone: Efficient and Robust Training of Face Recognition CNNs by Partial FC

6 code implementations28 Mar 2022 Xiang An, Jiankang Deng, Jia Guo, Ziyong Feng, Xuhan Zhu, Jing Yang, Tongliang Liu

In each iteration, positive class centers and a random subset of negative class centers are selected to compute the margin-based softmax loss.

Face Recognition Face Verification
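The class-center sampling step described in the Partial FC entry above can be sketched as follows: collect the positive class centers present in the batch and pad them with a random subset of negative centers up to a target ratio. The function name and the 10% ratio below are illustrative defaults; the margin-based softmax computed over the selected centers is omitted.

```python
import torch

def sample_partial_classes(labels: torch.Tensor,
                           num_classes: int,
                           sample_ratio: float = 0.1) -> torch.Tensor:
    """Return class indices for a Partial-FC-style step: all positive class
    centers appearing in the batch plus a random subset of the remaining
    (negative) class centers."""
    positives = labels.unique()
    num_sample = max(int(num_classes * sample_ratio), positives.numel())
    mask = torch.ones(num_classes, dtype=torch.bool)
    mask[positives] = False
    negatives = torch.nonzero(mask, as_tuple=False).squeeze(1)
    perm = negatives[torch.randperm(negatives.numel())]
    sampled_neg = perm[: num_sample - positives.numel()]
    return torch.cat([positives, sampled_neg])

# Example: batch labels over 1,000 identities, sampling 10% of class centers.
labels = torch.randint(0, 1000, (32,))
selected = sample_partial_classes(labels, num_classes=1000)
print(selected.shape)  # roughly 100 selected class centers
```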

Killing Two Birds With One Stone: Efficient and Robust Training of Face Recognition CNNs by Partial FC

1 code implementation CVPR 2022 Xiang An, Jiankang Deng, Jia Guo, Ziyong Feng, Xuhan Zhu, Jing Yang, Tongliang Liu

In each iteration, positive class centers and a random subset of negative class centers are selected to compute the margin-based softmax loss.

Face Recognition

Masked Face Recognition Challenge: The InsightFace Track Report

1 code implementation18 Aug 2021 Jiankang Deng, Jia Guo, Xiang An, Zheng Zhu, Stefanos Zafeiriou

In this workshop, we organize the Masked Face Recognition (MFR) challenge and focus on benchmarking deep face recognition methods in the presence of facial masks.

Face Recognition

Partial FC: Training 10 Million Identities on a Single Machine

7 code implementations11 Oct 2020 Xiang An, Xuhan Zhu, Yang Xiao, Lan Wu, Ming Zhang, Yuan Gao, Bin Qin, Debing Zhang, Ying Fu

The experiment demonstrates no loss of accuracy when training with only 10% randomly sampled classes for the softmax-based loss functions, compared with training with full classes using state-of-the-art models on mainstream benchmarks.

Face Identification Face Recognition +2
