Learning Representation in Colour Conversion

1 Jan 2021 · Arash Akbarinia, Raquel Gil-Rodriguez, Alban Flachot, Matteo Toscani

Colours can be represented in an infinite number of spaces, each highlighting distinct features. In this work, we study the structure of colour representation in variational autoencoders (VAEs) and investigate whether a specific organisation of colours yields higher encoding efficiency. To this end, we propose a novel unsupervised task: colour space conversion (ColourConvNets). We trained several instances of VAEs whose input and output are in different colour spaces, e.g. from RGB to CIE L*a*b* (in total, five colour spaces were examined). This allows us to systematically study the influence of input-output colour spaces on the representation learnt by VAEs. We thoroughly analysed the finite embedding space of vector quantised VAEs with three different methods (single feature, hue shift and linear transformation). The interpretations reached with these techniques are in agreement, suggesting that (i) luminance and chromatic information are encoded in separate embedding vectors, and (ii) the structure of the network's embedding space is determined by the output colour space. Evaluation of a large number of networks demonstrates that ColourConvNets with decorrelated output colour spaces produce higher-quality images with a lower pixel-wise colour difference (1-2 DeltaE). We further assess the capacity of ColourConvNets to reconstruct the global content of an image in two downstream tasks: image classification (ImageNet) and scene segmentation (COCO). Our results show that, compared to the baseline network (whose input and output are both RGB), decorrelating ColourConvNets obtain 5-10% higher classification accuracy.
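As a rough illustration of the colour-conversion objective described above, the sketch below sets up an RGB input and a CIE L*a*b* target for a generic PyTorch autoencoder. The `model`, the optimiser choice, and the plain MSE loss are assumptions standing in for the authors' VQ-VAE pipeline, and the colour difference uses the CIE76 formula (Euclidean distance in L*a*b*), which may differ from the exact DeltaE variant used in the paper.

```python
# Minimal sketch of the colour-conversion task (RGB input -> CIE L*a*b* target).
# Assumptions: a generic image-to-image autoencoder stands in for the authors'
# VQ-VAE, and DeltaE is computed with CIE76; both are illustrative only.
import numpy as np
import torch
import torch.nn as nn
from skimage import color


def rgb_to_lab_target(rgb_batch: torch.Tensor) -> torch.Tensor:
    """Convert a batch of RGB images (B, 3, H, W) in [0, 1] to L*a*b* targets."""
    rgb = rgb_batch.permute(0, 2, 3, 1).cpu().numpy()            # (B, H, W, 3)
    lab = np.stack([color.rgb2lab(img) for img in rgb])          # L in [0, 100], a/b roughly [-128, 127]
    return torch.from_numpy(lab).permute(0, 3, 1, 2).float()


def delta_e_cie76(lab_pred: torch.Tensor, lab_true: torch.Tensor) -> torch.Tensor:
    """Mean pixel-wise colour difference: Euclidean distance in L*a*b*."""
    return torch.sqrt(((lab_pred - lab_true) ** 2).sum(dim=1)).mean()


def training_step(model: nn.Module, rgb_batch: torch.Tensor,
                  optimiser: torch.optim.Optimizer) -> float:
    """One hypothetical training step: the network maps RGB input to a L*a*b* output."""
    lab_target = rgb_to_lab_target(rgb_batch).to(rgb_batch.device)
    lab_pred = model(rgb_batch)
    loss = nn.functional.mse_loss(lab_pred, lab_target)
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()
    return loss.item()
```

The sketch only captures the input-output colour-space mismatch that defines the task; in the actual ColourConvNets, the latent representation is additionally quantised against the finite set of embedding vectors analysed in the paper.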
