Multi-View Learning is a machine learning framework where data are represented by multiple distinct feature groups, and each feature group is referred to as a particular view.
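As a concrete illustration of this setup (all names here are hypothetical), the same set of samples can be described by two distinct feature groups, each forming one view:

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples = 5

# Two distinct feature groups (views) describing the SAME five samples.
view_image = rng.normal(size=(n_samples, 8))  # e.g. visual features
view_text = rng.normal(size=(n_samples, 3))   # e.g. textual features

# A multi-view dataset is the row-aligned collection of its views.
multi_view_data = {"image": view_image, "text": view_text}
```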
In the user encoder, we learn user representations from the users' browsed news and apply an attention mechanism to select the informative news articles for user representation learning.
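A minimal sketch of such attention-based pooling over browsed-news vectors (an additive-attention formulation chosen for illustration, not necessarily the paper's exact scoring function):

```python
import numpy as np

def attention_pool(news_vecs, W, b, q):
    """Additive attention over a user's browsed-news vectors:
    score_i = q . tanh(W h_i + b); the user vector is the
    softmax-weighted sum of the news vectors."""
    scores = np.tanh(news_vecs @ W + b) @ q           # (n_news,)
    weights = np.exp(scores - scores.max())
    weights = weights / weights.sum()                 # softmax
    user_vec = weights @ news_vecs                    # (dim,)
    return user_vec, weights

rng = np.random.default_rng(0)
dim, att_dim, n_news = 6, 4, 5
news_vecs = rng.normal(size=(n_news, dim))            # browsed-news embeddings
W = rng.normal(size=(dim, att_dim))
b = np.zeros(att_dim)
q = rng.normal(size=att_dim)

user_vec, weights = attention_pool(news_vecs, W, b, q)
```

News articles with higher attention weights contribute more to the final user vector, which is how informative news is "selected".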
In this paper, we leverage a key insight: retrieving sentences that express a relation is the dual task of predicting the relation label of a given sentence. The two tasks are complementary and can be optimized jointly for mutual enhancement.
A new algorithmic framework is proposed for learning autoencoders of data distributions.
We present deep variational canonical correlation analysis (VCCA), a deep multi-view learning model that extends the latent variable model interpretation of linear CCA to nonlinear observation models parameterized by deep neural networks.
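The latent variable interpretation of linear CCA that VCCA generalizes can be sketched as follows: a single shared latent variable generates both views through observation models, which are linear maps here but deep networks in VCCA. All dimensions and names below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
latent_dim, d1, d2, n = 3, 10, 7, 500

# Shared latent variable z ~ N(0, I).
z = rng.normal(size=(n, latent_dim))

# Linear observation models for the two views; VCCA replaces these
# matrices with deep neural networks parameterizing the likelihoods.
W1 = rng.normal(size=(latent_dim, d1))
W2 = rng.normal(size=(latent_dim, d2))
view1 = z @ W1 + 0.1 * rng.normal(size=(n, d1))
view2 = z @ W2 + 0.1 * rng.normal(size=(n, d2))
```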
In this paper, we study two challenging problems in incomplete multi-view clustering analysis, namely, i) how to learn an informative and consistent representation among different views without the help of labels and ii) how to recover the missing views from data.
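As a toy sketch of problem ii) (a deliberately simple stand-in, not the paper's method), one can fit a least-squares mapping from an observed view to a missing one on the complete samples and use it to impute the missing views:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d1, d2 = 200, 6, 4

# Synthetic paired views generated from a shared latent factor;
# in practice these would come from the dataset.
z = rng.normal(size=(n, 3))
view1 = z @ rng.normal(size=(3, d1))
view2 = z @ rng.normal(size=(3, d2))

# Suppose the last 50 samples are missing view2.
complete = slice(0, 150)
missing = slice(150, 200)

# Least-squares mapping view1 -> view2, fit on the complete samples,
# then applied to the incomplete ones.
M, *_ = np.linalg.lstsq(view1[complete], view2[complete], rcond=None)
view2_recovered = view1[missing] @ M
```

Because the toy views are noiselessly generated from the same latent factor, the imputation here is exact; real incomplete multi-view data require the more sophisticated learned recovery the paper studies.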
Recent experiments have shown that when the discriminator is provided with domain information in both domains and label information in the source domain, it is able to preserve complex multimodal structure and high-level semantic information in both domains.
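A minimal sketch (an assumed, generic formulation) of conditioning the discriminator on label information is to feed it the feature vector joined with the classifier's predicted class distribution, for instance via a flattened outer product (multilinear conditioning):

```python
import numpy as np

def discriminator_input(features, label_probs):
    """Multilinear conditioning: flatten the outer product of each
    feature vector with its predicted class distribution, so the
    discriminator sees the features jointly with (soft) label
    information."""
    batch = features.shape[0]
    return np.einsum('bi,bj->bij', features, label_probs).reshape(batch, -1)

rng = np.random.default_rng(0)
batch, feat_dim, n_classes = 4, 8, 3
features = rng.normal(size=(batch, feat_dim))

# Softmax over classifier logits gives the predicted label distribution.
logits = rng.normal(size=(batch, n_classes))
label_probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

d_in = discriminator_input(features, label_probs)  # (batch, feat_dim * n_classes)
```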