Search Results for author: Xiaodan Wang

Found 4 papers, 1 paper with code

AspectMMKG: A Multi-modal Knowledge Graph with Aspect-aware Entities

1 code implementation • 9 Aug 2023 • Jingdan Zhang, Jiaan Wang, Xiaodan Wang, Zhixu Li, Yanghua Xiao

Multi-modal knowledge graphs (MMKGs) combine different modal data (e.g., text and image) for a comprehensive understanding of entities.

Tasks: Extract Aspect, Image Retrieval, +2
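The abstract above pairs textual facts with aspect-grouped images. As a rough illustration only, the sketch below models that idea as a tiny in-memory structure; the names (`MMKG`, `Entity`, `add_image`) and the example data are hypothetical and not taken from AspectMMKG.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of an aspect-aware multi-modal knowledge graph:
# entities carry textual attributes plus image references grouped by
# aspect. Illustrative only; not the AspectMMKG schema.

@dataclass
class Entity:
    name: str
    attributes: dict = field(default_factory=dict)  # e.g. {"occupation": "actor"}
    images: dict = field(default_factory=dict)      # aspect -> [image URLs]

@dataclass
class MMKG:
    entities: dict = field(default_factory=dict)    # name -> Entity
    triples: list = field(default_factory=list)     # (head, relation, tail)

    def add_entity(self, name, **attributes):
        self.entities.setdefault(name, Entity(name, attributes))

    def add_image(self, name, aspect, url):
        # Images are stored per aspect, so queries can target one
        # visual facet of an entity instead of all its images.
        self.entities[name].images.setdefault(aspect, []).append(url)

kg = MMKG()
kg.add_entity("Tom Hanks", occupation="actor")
kg.triples.append(("Tom Hanks", "starred_in", "Forrest Gump"))
kg.add_image("Tom Hanks", "red carpet", "https://example.com/hanks.jpg")
```

Grouping images by aspect is what makes the entities aspect-aware: a query about one facet of an entity can skip images attached to unrelated facets.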

ConaCLIP: Exploring Distillation of Fully-Connected Knowledge Interaction Graph for Lightweight Text-Image Retrieval

no code implementations • 28 May 2023 • Jiapeng Wang, Chengyu Wang, Xiaodan Wang, Jun Huang, Lianwen Jin

Large-scale pre-trained text-image models with dual-encoder architectures (such as CLIP) are typically adopted for various vision-language applications, including text-image retrieval.

Tasks: Image Retrieval, Knowledge Distillation, +2
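Since the abstract centers on dual-encoder text-image models, below is a minimal retrieval sketch using the stock openai/clip-vit-base-patch32 checkpoint from Hugging Face transformers. It illustrates the general dual-encoder setup only, not ConaCLIP's distilled student model or its fully-connected knowledge interaction graph.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

captions = ["a photo of a cat", "a photo of a dog"]
image = Image.new("RGB", (224, 224))  # placeholder image for the demo

with torch.no_grad():
    # The two encoders run independently: image embeddings can be
    # precomputed and indexed offline, which is what makes dual
    # encoders practical for large-scale retrieval.
    text_emb = model.get_text_features(
        **processor(text=captions, return_tensors="pt", padding=True))
    image_emb = model.get_image_features(
        **processor(images=image, return_tensors="pt"))

# Cosine similarity between the embeddings ranks captions per image.
text_emb = text_emb / text_emb.norm(dim=-1, keepdim=True)
image_emb = image_emb / image_emb.norm(dim=-1, keepdim=True)
scores = image_emb @ text_emb.T
print(scores)  # shape (1, 2): one row of caption scores for the image
```

A lightweight student, as in the paper, would keep this exact interface while shrinking the two encoders.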

Multilayer Fisher extreme learning machine for classification

no code implementations • Complex & Intelligent Systems 2022 • Jie Lai, Xiaodan Wang, Qian Xiang, Jian Wang, Lei Lei

To address this problem, a novel Fisher extreme learning machine autoencoder (FELM-AE) is proposed and used as the building block of the multilayer Fisher extreme learning machine (ML-FELM).

Tasks: Classification, Denoising, +1
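For context on the base technique, here is a minimal NumPy sketch of a plain extreme learning machine autoencoder (random, untrained hidden layer; closed-form output weights). The Fisher criterion that distinguishes the paper's FELM-AE is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_autoencoder(X, n_hidden):
    """Plain ELM autoencoder: reconstruct X from a random hidden layer."""
    n_features = X.shape[1]
    # Hidden-layer weights and biases are drawn at random and never
    # trained; only the output weights are fitted.
    W = rng.standard_normal((n_features, n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)          # hidden activations
    # Output weights solve min ||H @ beta - X||_2 in closed form.
    beta = np.linalg.pinv(H) @ X
    return W, b, beta

X = rng.standard_normal((100, 20))
W, b, beta = elm_autoencoder(X, n_hidden=50)
X_rec = np.tanh(X @ W + b) @ beta
print("reconstruction error:", np.mean((X - X_rec) ** 2))
```

Stacking such autoencoders layer by layer gives the multilayer variant; the paper's contribution is the Fisher regularization added to this objective.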

Multi-Modal Knowledge Graph Construction and Application: A Survey

no code implementations • 11 Feb 2022 • Xiangru Zhu, Zhixu Li, Xiaodan Wang, Xueyao Jiang, Penglei Sun, Xuwu Wang, Yanghua Xiao, Nicholas Jing Yuan

In this survey of MMKGs constructed from texts and images, we first give definitions of MMKGs, followed by preliminaries on multi-modal tasks and techniques.

Tasks: Graph Construction, Knowledge Graphs, +1
