Training on synthetic data is becoming popular in vision because accurate
pixel-level labels can be acquired conveniently. However, the domain gap
between synthetic and real images significantly degrades the performance of
models trained on synthetic data.
We propose a color space adaptation method to bridge the gap. A
set of closed-form operations is adopted to make color space adjustments while
preserving the labels. We embed these operations into a two-stage learning
approach, and demonstrate the efficacy of the adaptation on the semantic
segmentation of cirrus clouds.
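
To make the label-preserving property concrete, below is a minimal sketch of
closed-form color adjustments. The specific operations (per-channel affine
gain/bias and gamma) and the function name `adjust_colors` are illustrative
assumptions rather than the paper's confirmed operation set; the point is that
pointwise color transforms never move pixels, so the segmentation labels carry
over unchanged.

```python
# A minimal sketch of label-preserving color space adjustments.
# The operations here (per-channel affine and gamma) are illustrative
# assumptions, not necessarily the paper's exact set; the key property
# is that they act pointwise on colors, so the label map is unchanged.
import numpy as np

def adjust_colors(image, gain, bias, gamma):
    """Apply closed-form per-channel color adjustments.

    image : float array in [0, 1], shape (H, W, 3)
    gain, bias, gamma : length-3 arrays of per-channel parameters
    """
    out = np.clip(image * gain + bias, 0.0, 1.0)  # affine: contrast/brightness
    out = out ** gamma                            # gamma: nonlinear tone curve
    return out

# Usage: the synthetic image is recolored, but its pixel-level label map
# needs no modification because pixel positions are untouched.
synthetic = np.random.rand(256, 256, 3).astype(np.float32)
labels = np.random.randint(0, 5, size=(256, 256))  # per-pixel class ids
adapted = adjust_colors(synthetic,
                        gain=np.array([1.1, 1.0, 0.9]),
                        bias=np.array([0.02, 0.0, -0.02]),
                        gamma=np.array([0.9, 1.0, 1.1]))
# (adapted, labels) forms a training pair just like (synthetic, labels).
```

Because each operation is closed-form and differentiable in its parameters,
such adjustments could in principle be tuned directly during training, which
is consistent with embedding them into a learning approach.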