Although previous approaches pre-define the type of dataset bias to prevent the network from learning it, identifying the bias type in real-world datasets is often prohibitively difficult.
Deep neural networks for automatic image colorization often suffer from the color-bleeding artifact, in which colors spread erroneously across the boundaries between adjacent objects.
To this end, our method learns a disentangled representation of (1) intrinsic attributes (i.e., those inherently defining a certain class) and (2) bias attributes (i.e., peripheral attributes causing the bias) from a large number of bias-aligned samples, whose bias attributes are strongly correlated with the target variable.
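A minimal sketch of such a two-branch disentanglement, assuming a PyTorch setup operating on pre-extracted feature vectors; the module names (`intrinsic_enc`, `bias_enc`) and the bias-feature swapping used to synthesize bias-conflicting samples are illustrative assumptions, not the paper's reference implementation:

```python
import torch
import torch.nn as nn

class DebiasedClassifier(nn.Module):
    """Two-branch classifier: one encoder per attribute type (names assumed)."""
    def __init__(self, in_dim=512, hid=128, num_classes=10):
        super().__init__()
        self.intrinsic_enc = nn.Sequential(nn.Linear(in_dim, hid), nn.ReLU())
        self.bias_enc = nn.Sequential(nn.Linear(in_dim, hid), nn.ReLU())
        # The head predicts the label from the concatenated representation.
        self.head = nn.Linear(2 * hid, num_classes)

    def forward(self, x, swap_bias=False):
        z_int = self.intrinsic_enc(x)   # features meant to define the class
        z_bias = self.bias_enc(x)       # peripheral features carrying the bias
        if swap_bias:
            # Permuting bias features within the batch pairs each intrinsic
            # feature with an unrelated bias feature, synthesizing
            # bias-conflicting samples (an assumed augmentation step).
            perm = torch.randperm(z_bias.size(0), device=z_bias.device)
            z_bias = z_bias[perm].detach()
        return self.head(torch.cat([z_int, z_bias], dim=1))
```

Under this setup, a swapped batch would reuse the original labels during training, pushing the head to rely on the intrinsic branch rather than the now-decorrelated bias branch.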
However, it is difficult to prepare a training dataset with a sufficient number of semantically meaningful image pairs together with ground-truth colorizations that reflect a given reference (e.g., coloring a sketch of an originally blue car according to a reference green car).
Disentangling the content and style information of an image has played an important role in the recent success of image-to-image translation.
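To illustrate that decomposition, here is a minimal, hypothetical PyTorch sketch of a content/style split with AdaIN-style recombination; the layer sizes and names are assumptions and do not correspond to any specific translation model from these abstracts:

```python
import torch
import torch.nn as nn

def adain(content, style_mean, style_std, eps=1e-5):
    # Replace the channel-wise statistics of the content feature map with
    # statistics predicted from the style image.
    mean = content.mean(dim=(2, 3), keepdim=True)
    std = content.std(dim=(2, 3), keepdim=True) + eps
    return style_std * (content - mean) / std + style_mean

class Translator(nn.Module):
    def __init__(self, ch=64, style_dim=8):
        super().__init__()
        self.content_enc = nn.Conv2d(3, ch, 3, padding=1)  # keeps spatial layout
        self.style_enc = nn.Sequential(nn.AdaptiveAvgPool2d(1),
                                       nn.Flatten(),
                                       nn.Linear(3, style_dim))  # global style code
        self.to_stats = nn.Linear(style_dim, 2 * ch)  # per-channel mean and std
        self.dec = nn.Conv2d(ch, 3, 3, padding=1)

    def forward(self, content_img, style_img):
        c = self.content_enc(content_img)
        s = self.style_enc(style_img)
        mean, std = self.to_stats(s).chunk(2, dim=1)
        c = adain(c, mean[:, :, None, None], std[:, :, None, None])
        return self.dec(c)
```

The design choice this sketch highlights: content is kept as a spatial feature map while style is compressed to a low-dimensional vector, so swapping style codes changes appearance without disturbing structure.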