Knowledge Fusion via Embeddings from Text, Knowledge Graphs, and Images

We present a baseline approach for cross-modal knowledge fusion. Several basic fusion methods are evaluated on existing embedding approaches to show the potential of combining knowledge about individual concepts across modalities into a fused concept representation.
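As a rough illustration of what such basic fusion methods over per-modality embeddings can look like, the sketch below shows two common baselines, concatenation and averaging after projection to a shared dimension. This is a minimal Python example under assumed embedding sources and dimensions; the variable names, dimensions, and projection matrices are hypothetical and not taken from the paper.

```python
import numpy as np

# Hypothetical per-modality embeddings for a single concept (e.g. "dog").
# Dimensions and sources are illustrative only.
text_emb = np.random.rand(300)   # e.g. a word-embedding vector for the concept label
kg_emb   = np.random.rand(200)   # e.g. a knowledge-graph embedding of the entity
img_emb  = np.random.rand(512)   # e.g. a visual feature vector for images of the concept

def l2_normalize(v):
    """Scale a vector to unit length so modalities contribute comparably."""
    norm = np.linalg.norm(v)
    return v / norm if norm > 0 else v

# Basic fusion method 1: concatenation of the normalized embeddings.
fused_concat = np.concatenate([l2_normalize(text_emb),
                               l2_normalize(kg_emb),
                               l2_normalize(img_emb)])

# Basic fusion method 2: averaging after projecting each modality
# to a common dimension (random projections stand in for learned mappings).
common_dim = 200
rng = np.random.default_rng(0)
proj_text = rng.standard_normal((300, common_dim))
proj_kg   = rng.standard_normal((200, common_dim))
proj_img  = rng.standard_normal((512, common_dim))

fused_avg = np.mean([l2_normalize(text_emb) @ proj_text,
                     l2_normalize(kg_emb)   @ proj_kg,
                     l2_normalize(img_emb)  @ proj_img], axis=0)

print(fused_concat.shape, fused_avg.shape)  # (1012,) (200,)
```

Either fused vector can then serve as the concept representation in downstream evaluation tasks, with the choice between concatenation and averaging trading off dimensionality against a shared embedding space.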
