Robust Multi-view Representation Learning

1 Jan 2021 · Sibi Venkatesan, Kyle Miller, Artur Dubrawski

Multi-view data has become ubiquitous, especially with multi-sensor systems like self-driving cars or medical patient-side monitors. We look at modeling multi-view data through robust representation learning, with the goal of leveraging relationships between views and building resilience to missing information. We propose a new flavor of multi-view AutoEncoders, the Robust Multi-view AutoEncoder, which explicitly encourages robustness to missing views. The principle we use is straightforward: we apply the idea of dropout at the level of views. During training, we leave out views as input to our model while forcing it to reconstruct all of them. We also consider a flow-based generative modeling extension of our approach for the case where all views are available. We conduct experiments in different scenarios: directly using the learned representations for reconstruction, as well as a two-step process where the learned representation is subsequently used as features for a downstream application. Our synthetic and real-world experiments show promising results for the application of these models to robust representation learning.
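As a rough illustration of the view-level dropout idea described in the abstract, the sketch below shows a multi-view autoencoder with per-view encoders and decoders and a shared latent code: during training, entire views are randomly dropped from the input, but the model is still asked to reconstruct all of them. The class name `RobustMultiViewAE`, the layer sizes, the mean-fusion of view codes, and the training loop are illustrative assumptions, not the exact architecture or loss from the paper.

```python
import torch
import torch.nn as nn

class RobustMultiViewAE(nn.Module):
    """Minimal sketch of a multi-view autoencoder with view-level dropout.

    Each view gets its own encoder and decoder; the codes of the views
    that survive dropout are averaged into a shared latent, from which
    every view (including the dropped ones) is reconstructed.
    """

    def __init__(self, view_dims, latent_dim=32, view_dropout=0.3):
        super().__init__()
        self.view_dropout = view_dropout
        self.encoders = nn.ModuleList(
            nn.Sequential(nn.Linear(d, 64), nn.ReLU(), nn.Linear(64, latent_dim))
            for d in view_dims
        )
        self.decoders = nn.ModuleList(
            nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, d))
            for d in view_dims
        )

    def forward(self, views):
        # views: list of tensors, one per view, each of shape (batch, dim_v)
        codes = []
        for enc, x in zip(self.encoders, views):
            # During training, randomly leave out whole views from the input.
            if self.training and torch.rand(1).item() < self.view_dropout:
                continue
            codes.append(enc(x))
        if not codes:
            # Ensure at least one view remains as input.
            codes.append(self.encoders[0](views[0]))
        z = torch.stack(codes).mean(dim=0)  # fuse the available view codes
        # Reconstruct every view, including the ones that were dropped.
        return [dec(z) for dec in self.decoders]

def train_step(model, views, optimizer):
    # One reconstruction step: all views are targets, even if dropped as input.
    optimizer.zero_grad()
    recons = model(views)
    loss = sum(nn.functional.mse_loss(r, x) for r, x in zip(recons, views))
    loss.backward()
    optimizer.step()
    return loss.item()
```

At test time (`model.eval()`), no views are dropped, and missing views can instead be imputed from whichever views are actually observed.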

