MULTI-VIEW LEARNING

46 papers with code • 0 benchmarks • 1 dataset

Multi-View Learning is a machine learning framework in which data are represented by multiple distinct feature groups, each of which is referred to as a particular view.

Source: Dissimilarity-based representation for radiomics applications
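To make the definition concrete, here is a minimal sketch (the feature groups and sizes are illustrative, not from any specific paper): the same samples are described by two distinct views, and the simplest baseline fuses them by concatenation.

```python
import numpy as np

# Hypothetical example: 5 samples described by two distinct feature
# groups (views), e.g. 4 image-derived and 3 text-derived features.
rng = np.random.default_rng(0)
view_image = rng.normal(size=(5, 4))  # view 1: image features
view_text = rng.normal(size=(5, 3))   # view 2: text features

# Simplest multi-view baseline: early fusion by concatenation.
# Dedicated multi-view methods instead model each view separately
# and learn how the views relate.
fused = np.concatenate([view_image, view_text], axis=1)  # shape (5, 7)
```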


Most implemented papers

Neural News Recommendation with Attentive Multi-View Learning

microsoft/recommenders 12 Jul 2019

In the user encoder, we learn representations of users from their browsed news and apply an attention mechanism to select informative news for user representation learning.
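The attentive pooling step described in the excerpt can be sketched as follows (function and variable names are ours, not the paper's API): the user vector is a weighted sum of browsed-news vectors, with weights produced by a softmax over learned attention scores.

```python
import numpy as np

# Hedged sketch of attention-based user encoding: score each browsed
# news vector against a learned attention query, softmax the scores,
# and pool the news vectors with the resulting weights.
def attentive_user_encoder(news_vecs, query):
    scores = news_vecs @ query                # relevance of each news item
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                  # softmax attention weights
    return weights @ news_vecs                # user representation

news_vecs = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
query = np.array([0.5, 0.5])
user_vec = attentive_user_encoder(news_vecs, query)  # shape (2,)
```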

Trusted Multi-View Classification

hanmenghan/TMC ICLR 2021

To this end, we propose a novel multi-view classification method, termed trusted multi-view classification, which provides a new paradigm for multi-view learning by dynamically integrating different views at the evidence level.
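A rough sketch of what "integrating views at the evidence level" means, in the spirit of this line of work (this is a simplified illustration, not the paper's implementation): each view outputs per-class belief masses plus an uncertainty mass, and the views are fused with a reduced Dempster-style combination rule so that an uncertain view contributes little.

```python
import numpy as np

# Each view yields per-class belief masses b and an uncertainty mass u,
# with b.sum() + u == 1. Combining discounts conflicting evidence.
def combine_evidence(b1, u1, b2, u2):
    # conflict: mass the two views assign to *different* classes
    conflict = b1.sum() * b2.sum() - (b1 * b2).sum()
    scale = 1.0 - conflict
    b = (b1 * b2 + b1 * u2 + b2 * u1) / scale
    u = (u1 * u2) / scale
    return b, u

b1, u1 = np.array([0.6, 0.2]), 0.2  # view 1: fairly confident in class 0
b2, u2 = np.array([0.1, 0.1]), 0.8  # view 2: mostly uncertain
b, u = combine_evidence(b1, u1, b2, u2)
# the fused masses still sum to 1, and the confident view dominates
```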

Tensor Canonical Correlation Analysis for Multi-view Dimension Reduction

jameschapman19/cca_zoo 9 Feb 2015

As a consequence, the high-order correlation information contained in the different views is explored, and a more reliable common subspace shared by all features can be obtained.
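For intuition, here is a minimal classical (two-view) CCA sketch; tensor CCA generalizes this idea to correlations across three or more views at once. The function and the tiny demo are illustrative, not the paper's or cca_zoo's API.

```python
import numpy as np

# Classical CCA: whiten each view with a Cholesky factor of its
# covariance, then SVD the whitened cross-covariance. Singular values
# are the canonical correlations.
def cca(X, Y, k=1, reg=1e-6):
    X = X - X.mean(0)
    Y = Y - Y.mean(0)
    n = X.shape[0]
    Cxx = X.T @ X / n + reg * np.eye(X.shape[1])
    Cyy = Y.T @ Y / n + reg * np.eye(Y.shape[1])
    Cxy = X.T @ Y / n
    Lx, Ly = np.linalg.cholesky(Cxx), np.linalg.cholesky(Cyy)
    K = np.linalg.solve(Ly, np.linalg.solve(Lx, Cxy).T).T
    U, s, Vt = np.linalg.svd(K)
    A = np.linalg.solve(Lx.T, U[:, :k])     # projection for view X
    B = np.linalg.solve(Ly.T, Vt.T[:, :k])  # projection for view Y
    return A, B, s[:k]

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
Y = X @ rng.normal(size=(3, 2))  # second view depends linearly on the first
A, B, corrs = cca(X, Y)          # top canonical correlation is close to 1
```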

Variational Distillation for Multi-View Learning

FutabaSakuraXD/Farewell-to-Mutual-Information-Variational-Distiilation-for-Cross-Modal-Person-Re-identification 20 Jun 2022

Information Bottleneck (IB) based multi-view learning provides an information theoretic principle for seeking shared information contained in heterogeneous data descriptions.

Learning Autoencoders with Relational Regularization

HongtengXu/Relational-AutoEncoders ICML 2020

A new algorithmic framework is proposed for learning autoencoders of data distributions.

Deep brain state classification of MEG data

SMehrkanoon/Deep-brain-state-classification-of-MEG-data 2 Jul 2020

The experimental results of cross subject multi-class classification on the studied MEG dataset show that the inclusion of attention improves the generalization of the models across subjects.

Farewell to Mutual Information: Variational Distillation for Cross-Modal Person Re-Identification

FutabaSakuraXD/Farewell-to-Mutual-Information-Variational-Distiilation-for-Cross-Modal-Person-Re-identification CVPR 2021

The Information Bottleneck (IB) provides an information theoretic principle for representation learning: retain all information relevant for predicting the label while minimizing the redundancy.
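The IB trade-off the excerpt describes is usually trained through a variational bound; the following sketch (our notation, assuming a Gaussian encoder) shows the two competing terms: a prediction term that keeps label-relevant information and a KL compression term that discards the rest, weighted by beta.

```python
import numpy as np

# Hedged sketch of a variational IB objective: cross-entropy keeps
# information useful for the label, KL(N(mu, sigma^2) || N(0, I))
# compresses the representation, beta trades the two off.
def vib_objective(logits, label, mu, log_var, beta=1e-3):
    # task term: negative log-likelihood of the correct label
    log_probs = logits - np.log(np.exp(logits).sum())
    nll = -log_probs[label]
    # compression term: KL divergence to a standard normal prior
    kl = 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)
    return nll + beta * kl

# With mu = 0 and log_var = 0 the KL term vanishes and only the
# prediction term remains.
loss = vib_objective(np.array([2.0, 0.5]), 0, np.zeros(4), np.zeros(4))
```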

Conditional Random Field Autoencoders for Unsupervised Structured Prediction

ldmt-muri/alignment-with-openfst NeurIPS 2014

We introduce a framework for unsupervised learning of structured predictors with overlapping, global features.

Patterns for Learning with Side Information

tu-rbo/concarne 19 Nov 2015

Supervised, semi-supervised, and unsupervised learning estimate a function given input/output samples.

A Survey on Multi-Task Learning

markWJJ/Multitask-Learning 25 Jul 2017

Multi-Task Learning (MTL) is a learning paradigm in machine learning whose aim is to leverage useful information contained in multiple related tasks to improve the generalization performance of all of them.
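The most common MTL pattern the survey covers is hard parameter sharing; a minimal sketch (all names and shapes are illustrative): one shared encoder feeds several task-specific heads, and the joint objective is the sum of per-task losses.

```python
import numpy as np

# Hard parameter sharing: W_shared is used by every task, each head
# holds task-specific parameters, and training would minimize the sum
# of the per-task losses so the tasks regularize each other.
rng = np.random.default_rng(0)
X = rng.normal(size=(8, 5))
W_shared = rng.normal(size=(5, 4))           # parameters shared by all tasks
heads = {"task_a": rng.normal(size=(4, 1)),  # task-specific parameters
         "task_b": rng.normal(size=(4, 1))}
targets = {"task_a": rng.normal(size=(8, 1)),
           "task_b": rng.normal(size=(8, 1))}

h = np.tanh(X @ W_shared)                    # shared representation
losses = {t: np.mean((h @ W - targets[t]) ** 2) for t, W in heads.items()}
total_loss = sum(losses.values())            # joint objective over all tasks
```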