In addition to various spatial fusion-based methods, an affinity fusion-based network is also proposed, in which the self-expressive layers corresponding to different modalities are enforced to be the same.
Ranked #1 on Image Clustering on Extended Yale-B
MULTI-MODAL SUBSPACE CLUSTERING, MULTIVIEW LEARNING, MULTI-VIEW SUBSPACE CLUSTERING
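The shared self-expressive idea above can be sketched in a few lines. This is a minimal illustration, not the paper's network: two synthetic modalities, a single coefficient matrix C shared across both (the "enforced to be the same" constraint), solved in closed form with a ridge penalty instead of a learned layer.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d1, d2 = 20, 5, 7
# Two modalities (views) of the same n samples -- synthetic stand-ins.
X1 = rng.standard_normal((d1, n))
X2 = rng.standard_normal((d2, n))

lam = 1.0  # ridge penalty keeping the system well-conditioned

# One self-expressive coefficient matrix C shared by both modalities:
#   min_C  sum_v ||X_v - X_v C||_F^2 + lam ||C||_F^2
# which has the closed-form ridge solution below.
G = X1.T @ X1 + X2.T @ X2
C = np.linalg.solve(G + lam * np.eye(n), G)

# Symmetrized affinity matrix, the usual input to spectral clustering.
A = np.abs(C) + np.abs(C.T)
```

In practice the self-expression would exclude the trivial diagonal solution and the affinity A would be fed to spectral clustering; both are omitted here for brevity.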
In many modern applications from, for example, bioinformatics and computer vision, samples have multiple feature representations coming from different data sources.
To address this problem, and inspired by recent work in adversarial learning, we propose a multiple kernel clustering method with a min-max framework that aims to be robust to such adversarial perturbations.
Experiments on three publicly available datasets show the efficiency of the proposed approach relative to state-of-the-art models.
DOCUMENT CLASSIFICATION, MULTILINGUAL TEXT CLASSIFICATION, MULTIVIEW LEARNING
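The multiple-kernel setting above can be illustrated with a toy pipeline. This sketch combines two base kernels with fixed uniform weights and clusters on the combined kernel; the paper's contribution, the min-max learning of those weights for adversarial robustness, is not reproduced here.

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.cluster import SpectralClustering

# Synthetic two-cluster data standing in for real multi-source samples.
X, y = make_blobs(n_samples=60, centers=2, random_state=0)

# Two base kernels with different bandwidths; the uniform 0.5/0.5
# combination is a placeholder for the learned kernel weights.
K1 = rbf_kernel(X, gamma=0.1)
K2 = rbf_kernel(X, gamma=1.0)
K = 0.5 * K1 + 0.5 * K2

# Cluster directly on the combined precomputed kernel.
labels = SpectralClustering(n_clusters=2, affinity="precomputed",
                            random_state=0).fit_predict(K)
```

Multiple kernel clustering methods differ mainly in how the combination weights are chosen; the min-max framework picks them against a worst-case perturbation rather than fixing them a priori.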
We tackle the issue of combining classifiers when observations have multiple views.
DOCUMENT CLASSIFICATION, MULTILINGUAL TEXT CLASSIFICATION, MULTIVIEW LEARNING
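A common baseline for the classifier-combination problem above is late fusion: train one classifier per view and average their predicted probabilities. The sketch below uses synthetic views and logistic regression purely for illustration; it is not the combination rule proposed in the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200
y = rng.integers(0, 2, n)
# Two synthetic views: each is a noisy representation of the same label.
view1 = y[:, None] + rng.normal(scale=1.0, size=(n, 3))
view2 = y[:, None] + rng.normal(scale=2.0, size=(n, 5))

# Late fusion: one classifier per view, probabilities averaged.
views = (view1, view2)
clfs = [LogisticRegression().fit(v, y) for v in views]
proba = np.mean([c.predict_proba(v) for c, v in zip(clfs, views)],
                axis=0)
pred = proba.argmax(axis=1)
acc = (pred == y).mean()
```

Averaging probabilities lets a low-noise view dominate gracefully; weighted or learned combinations are the natural next step when views differ widely in reliability.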
We consider representations based on the context distribution of the entity (i.e., its embedding), on the entity's name (i.e., its surface form), and on its description in Wikipedia.
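The three entity views above can be sketched as concatenated feature blocks. Everything here is hypothetical: the entities are made up, and the "embedding" is a random stand-in for a learned context vector, with character n-grams for the surface form and TF-IDF for the description.

```python
import numpy as np
from sklearn.feature_extraction.text import HashingVectorizer, TfidfVectorizer

# Hypothetical entities: (surface form, short description).
entities = [
    ("Paris", "Capital city of France, on the Seine."),
    ("Python", "A high-level programming language."),
]
rng = np.random.default_rng(0)
# Context view: random placeholder for a learned entity embedding.
embeddings = rng.standard_normal((len(entities), 8))

# Name view: character n-grams of the surface form.
name_vec = HashingVectorizer(analyzer="char", ngram_range=(2, 3),
                             n_features=32, norm=None)
names = name_vec.transform([name for name, _ in entities]).toarray()

# Description view: TF-IDF over the description text.
desc_vec = TfidfVectorizer()
descs = desc_vec.fit_transform([d for _, d in entities]).toarray()

# Concatenated multi-view entity representation.
rep = np.hstack([embeddings, names, descs])
```

Concatenation is the simplest fusion; the views could equally be kept separate and combined by a downstream multi-view model.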