MULTI-VIEW LEARNING
49 papers with code • 0 benchmarks • 1 dataset
Multi-View Learning is a machine learning framework where data are represented by multiple distinct feature groups, and each feature group is referred to as a particular view.
Source: Dissimilarity-based representation for radiomics applications
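To make the definition concrete, here is a minimal sketch (a hypothetical toy example, not drawn from any of the papers below) of multi-view data where each sample is described by two distinct feature groups, together with early fusion, the simplest multi-view baseline of concatenating the views:

```python
# Two views of the same two samples (toy values):
# view 1 could hold image-derived features, view 2 text-derived features.
view_image = [
    [0.2, 0.7, 0.1],   # sample 0
    [0.9, 0.3, 0.5],   # sample 1
]
view_text = [
    [1.0, 0.0],        # sample 0
    [0.0, 1.0],        # sample 1
]

def early_fusion(views):
    """Concatenate each sample's feature vectors across all views."""
    n_samples = len(views[0])
    return [sum((view[i] for view in views), []) for i in range(n_samples)]

fused = early_fusion([view_image, view_text])
# fused[0] is the 5-dimensional vector [0.2, 0.7, 0.1, 1.0, 0.0]
```

More sophisticated multi-view methods, including several surveyed below, instead learn a shared representation that models interactions between the views rather than simply concatenating them.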
Benchmarks
These leaderboards are used to track progress in Multi-View Learning.
Libraries
Use these libraries to find Multi-View Learning models and implementations.

Most implemented papers
Patterns for Learning with Side Information
Supervised, semi-supervised, and unsupervised learning estimate a function given input/output samples.
A Survey on Multi-Task Learning
Multi-Task Learning (MTL) is a learning paradigm whose aim is to leverage useful information contained in multiple related tasks to improve the generalization performance of all of them.
Dual Memory Neural Computer for Asynchronous Two-view Sequential Learning
One of the core tasks in multi-view learning is to capture relations among views.
Integrative Multi-View Reduced-Rank Regression: Bridging Group-Sparse and Low-Rank Models
Multi-view data have been routinely collected in various fields of science and engineering.
Multi-Multi-View Learning: Multilingual and Multi-Representation Entity Typing
For representation, we consider representations based on the context distribution of the entity (i.e., on its embedding), on the entity's name (i.e., on its surface form) and on its description in Wikipedia.
Robust Visual Tracking using Multi-Frame Multi-Feature Joint Modeling
It remains a major challenge to design effective and efficient trackers under complex scenarios, including occlusions, illumination changes, and pose variations.
Ensemble of Multi-View Learning Classifiers for Cross-Domain Iris Presentation Attack Detection
The adoption of large-scale iris recognition systems around the world has brought to light the importance of detecting presentation attack images (textured contact lenses and printouts).
Deep Collective Matrix Factorization for Augmented Multi-View Learning
In this paper, we develop the first deep-learning based method, called dCMF, for unsupervised learning of multiple shared representations, that can model such non-linear interactions, from an arbitrary collection of matrices.
Deep Multimodality Model for Multi-task Multi-view Learning
However, there is no existing deep learning algorithm that jointly models task and view dual heterogeneity, particularly for a data set with multiple modalities (text and image mixed data set or text and video mixed data set, etc.).
Learning Dual Retrieval Module for Semi-supervised Relation Extraction
In this paper, we leverage a key insight that retrieving sentences expressing a relation is a dual task of predicting the relation label for a given sentence: the two tasks are complementary to each other and can be optimized jointly for mutual enhancement.