Residual Contrastive Learning: Unsupervised Representation Learning from Residuals

In the era of deep learning, supervised residual learning (ResL) has led to many breakthroughs in low-level vision, such as image restoration and enhancement tasks. However, the question of how to formalize and exploit unsupervised ResL remains open. In this paper we consider visual signals with additive noise and propose to build a connection between ResL and self-supervised learning (SSL) via contrastive learning. We present residual contrastive learning (RCL), an unsupervised representation learning framework for downstream low-level vision tasks with noisy inputs. While supervised image reconstruction tasks aim to minimize the residual terms directly, RCL formulates an instance-wise discrimination pretext task that uses the residuals as the discriminative feature. Empirical results on low-level vision tasks show that RCL learns more robust and transferable representations than other SSL frameworks when ingesting noisy images, while incurring significantly lower annotation costs than fully supervised alternatives.
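To make the pretext task concrete: instance-wise discrimination is typically trained with an InfoNCE-style contrastive loss, where two views of the same instance form a positive pair and all other instances in the batch serve as negatives. The sketch below applies this generic recipe to residual features, as the abstract describes. It is a minimal illustration, not the paper's implementation; the function names, feature shapes, and the way positive residual views are generated are all assumptions for the example.

```python
import numpy as np

def info_nce(anchors, positives, temperature=0.1):
    """Instance-discrimination (InfoNCE) loss: row i of `positives` is the
    positive for row i of `anchors`; all other rows act as negatives."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature                 # (N, N) cosine similarities
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -float(np.mean(np.diag(log_prob)))      # cross-entropy on the diagonal

# Hypothetical setup: embeddings of residuals (noisy input minus a
# reconstruction) from two augmented views of each of 8 images.
rng = np.random.default_rng(0)
residual_feats = rng.normal(size=(8, 16))                      # view 1
positive_feats = residual_feats + 0.05 * rng.normal(size=(8, 16))  # view 2
loss = info_nce(residual_feats, positive_feats)
```

Because each positive is a slightly perturbed copy of its anchor, the loss is well below the chance level of log(N), which is what instance discrimination optimizes toward.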
