Multi-modal Knowledge Graph
14 papers with code • 0 benchmarks • 2 datasets
See A Survey for Multi-modal Knowledge Graphs. This collection gathers papers integrating Knowledge Graphs (KGs) and Multi-Modal Learning, focusing on two principal aspects: KG-driven Multi-Modal (KG4MM) learning, where KGs support multi-modal tasks, and Multi-Modal Knowledge Graph (MM4KG) research, which extends KG studies into the MMKG realm.
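For orientation, here is a minimal sketch of an MMKG as a data structure: relational triples plus per-entity modality data. All entity, relation, and file names below are hypothetical, chosen only to illustrate the shape of the data.

```python
# A minimal, illustrative multi-modal knowledge graph (MMKG):
# relational triples plus per-entity modality data.
from dataclasses import dataclass, field

@dataclass
class Entity:
    name: str
    description: str = ""                            # textual modality
    image_paths: list = field(default_factory=list)  # visual modality

entities = {
    "Q_paris": Entity("Paris", "Capital of France", ["images/paris.jpg"]),
    "Q_france": Entity("France", "Country in Western Europe"),
}

# (head, relation, tail) triples carry the structural modality.
triples = [("Q_paris", "capital_of", "Q_france")]
```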
Benchmarks
These leaderboards are used to track progress in Multi-modal Knowledge Graph research.
Most implemented papers
Knowledge Graphs Meet Multi-Modal Learning: A Comprehensive Survey
In this survey, we carefully review over 300 articles, focusing on KG-aware research in two principal aspects: KG-driven Multi-Modal (KG4MM) learning, where KGs support multi-modal tasks, and Multi-Modal Knowledge Graph (MM4KG), which extends KG studies into the MMKG realm.
MyGO: Discrete Modality Information as Fine-Grained Tokens for Multi-modal Knowledge Graph Completion
To overcome the inherent incompleteness of MMKGs, multi-modal knowledge graph completion (MMKGC) aims to discover unobserved knowledge from given MMKGs, leveraging both the structural information of triples and the multi-modal information of entities.
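To make the MMKGC setup concrete, here is a minimal PyTorch sketch (an illustrative baseline, not MyGO's tokenized architecture): entity representations are fused from structural, visual, and textual features and a triple is scored with a TransE-style distance. The dimensions and the embedding-table stand-ins for image/text encoders are assumptions.

```python
import torch
import torch.nn as nn

class SimpleMMKGC(nn.Module):
    """Toy MMKGC scorer: fuse structural/visual/textual entity features,
    then score triples with a TransE-style distance. Dimensions assumed."""
    def __init__(self, n_ent, n_rel, dim=128):
        super().__init__()
        self.ent_struct = nn.Embedding(n_ent, dim)
        self.ent_visual = nn.Embedding(n_ent, dim)  # stand-in for an image encoder
        self.ent_text = nn.Embedding(n_ent, dim)    # stand-in for a text encoder
        self.rel = nn.Embedding(n_rel, dim)
        self.fuse = nn.Linear(3 * dim, dim)

    def entity(self, idx):
        feats = torch.cat([self.ent_struct(idx),
                           self.ent_visual(idx),
                           self.ent_text(idx)], dim=-1)
        return self.fuse(feats)

    def score(self, h, r, t):
        # TransE: a smaller ||h + r - t|| means a more plausible triple.
        return -(self.entity(h) + self.rel(r) - self.entity(t)).norm(p=2, dim=-1)

model = SimpleMMKGC(n_ent=1000, n_rel=50)
s = model.score(torch.tensor([0]), torch.tensor([3]), torch.tensor([7]))
```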
LingYi: Medical Conversational Question Answering System based on Multi-modal Knowledge Graphs
A medical conversational system can relieve the burden on doctors and improve the efficiency of healthcare, especially during the pandemic.
Multi-modal Siamese Network for Entity Alignment
To address this problem, we propose a novel Multi-modal Siamese Network for Entity Alignment (MSNEA) that aligns entities across different MMKGs, comprehensively leveraging multi-modal knowledge by exploiting inter-modal effects.
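A minimal sketch of the general siamese pattern (not MSNEA's exact architecture): a shared-weight encoder maps entity features from both MMKGs into one space, and alignment is scored by cosine similarity. Input dimensions are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SiameseAligner(nn.Module):
    """Shared-weight encoder applied to entity features from two KGs;
    alignment is scored by cosine similarity. A generic sketch."""
    def __init__(self, in_dim=256, out_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, out_dim), nn.ReLU(),
                                     nn.Linear(out_dim, out_dim))

    def forward(self, feat_kg1, feat_kg2):
        z1 = F.normalize(self.encoder(feat_kg1), dim=-1)
        z2 = F.normalize(self.encoder(feat_kg2), dim=-1)
        return (z1 * z2).sum(-1)  # cosine similarity per entity pair

aligner = SiameseAligner()
sim = aligner(torch.randn(4, 256), torch.randn(4, 256))
```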
Modality-Aware Negative Sampling for Multi-modal Knowledge Graph Embedding
Negative sampling (NS), which generates negative triples to provide a positive-negative contrast during training, is widely used in knowledge graph embedding (KGE).
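For reference, the plain uniform-NS baseline that modality-aware sampling improves on corrupts the head or tail of a positive triple with a random entity; a minimal sketch (entity IDs are illustrative):

```python
import random

def uniform_negative_sample(triple, n_entities, known_triples):
    """Corrupt the head or tail with a random entity, avoiding known
    positives. A plain uniform-NS baseline, not the paper's sampler."""
    h, r, t = triple
    while True:
        if random.random() < 0.5:
            cand = (random.randrange(n_entities), r, t)   # corrupt head
        else:
            cand = (h, r, random.randrange(n_entities))   # corrupt tail
        if cand not in known_triples:
            return cand

positives = {(0, 1, 2), (2, 1, 3)}
neg = uniform_negative_sample((0, 1, 2), n_entities=100, known_triples=positives)
```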
AspectMMKG: A Multi-modal Knowledge Graph with Aspect-aware Entities
Multi-modal knowledge graphs (MMKGs) combine data from different modalities (e.g., text and images) for a comprehensive understanding of entities.
MACO: A Modality Adversarial and Contrastive Framework for Modality-missing Multi-modal Knowledge Graph Completion
Nevertheless, existing methods emphasize the design of elegant KGC models to facilitate modality interaction, neglecting the real-life problem of missing modalities in KGs.
Multi-Modal Knowledge Graph Transformer Framework for Multi-Modal Entity Alignment
To address these challenges, we propose a novel MMEA transformer, called MoAlign, that hierarchically introduces neighbor features, multi-modal attributes, and entity types to enhance the alignment task.
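A sketch of the underlying idea (not MoAlign itself): neighbor features, multi-modal attribute features, and an entity-type token are serialized into one sequence for a standard transformer encoder. The feature tensors here stand in for assumed precomputed encodings.

```python
import torch
import torch.nn as nn

# Serialize an entity's neighbor, multi-modal attribute, and type features
# as a token sequence for a standard transformer encoder (a generic sketch).
dim = 64
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True),
    num_layers=2,
)

neighbors = torch.randn(1, 5, dim)   # 5 neighbor tokens (assumed precomputed)
attributes = torch.randn(1, 3, dim)  # 3 attribute tokens (e.g., image/text features)
ent_type = torch.randn(1, 1, dim)    # 1 entity-type token

tokens = torch.cat([ent_type, neighbors, attributes], dim=1)
entity_repr = encoder(tokens).mean(dim=1)  # pooled entity representation
```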
Towards Semantic Consistency: Dirichlet Energy Driven Robust Multi-Modal Entity Alignment
This study introduces a novel approach, DESAlign, which addresses these issues by applying a theoretical framework based on Dirichlet energy to ensure semantic consistency.
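For intuition, the standard (unweighted) graph Dirichlet energy measures how smoothly node embeddings vary over edges; low energy means neighboring entities have consistent representations. A minimal sketch follows; the paper's exact weighting may differ.

```python
import torch

def dirichlet_energy(x, edges):
    """Graph Dirichlet energy: 0.5 * sum over edges of ||x_i - x_j||^2.
    Low energy = embeddings vary smoothly over the graph (standard
    unweighted definition; DESAlign's variant may differ)."""
    i, j = edges
    return 0.5 * ((x[i] - x[j]) ** 2).sum()

x = torch.randn(6, 8)  # 6 node embeddings of dimension 8
edges = (torch.tensor([0, 1, 2]), torch.tensor([1, 2, 3]))
energy = dirichlet_energy(x, edges)
```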
Unleashing the Power of Imbalanced Modality Information for Multi-modal Knowledge Graph Completion
To address these problems, we propose Adaptive Multi-modal Fusion and Modality Adversarial Training (AdaMF-MAT) to unleash the power of imbalanced modality information for MMKGC.
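As an illustration of adaptive fusion (a generic sketch, not the AdaMF-MAT implementation), per-modality features can be combined with learned, input-dependent weights so that more informative modalities contribute more:

```python
import torch
import torch.nn as nn

class AdaptiveFusion(nn.Module):
    """Weight modality features by learned, input-dependent scores.
    A generic adaptive-fusion sketch; dimensions are assumptions."""
    def __init__(self, dim=128):
        super().__init__()
        self.scorer = nn.Linear(dim, 1)

    def forward(self, feats):                         # (batch, n_modalities, dim)
        w = torch.softmax(self.scorer(feats), dim=1)  # (batch, n_modalities, 1)
        return (w * feats).sum(dim=1)                 # weighted sum over modalities

fusion = AdaptiveFusion()
fused = fusion(torch.randn(4, 3, 128))  # structural / visual / textual features
```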