To address these issues, we build a task-specific self-supervised pre-training framework from a data-selection perspective, based on a simple hypothesis: pre-training on unlabeled samples whose distribution is similar to that of the target task can bring substantial performance gains.
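One simple way to realize such distribution-aware data selection is to rank unlabeled samples by their feature-space similarity to the target task. The sketch below uses cosine similarity to a mean target feature; the actual selection criterion, feature extractor, and function names here are illustrative assumptions, not the paper's exact method.

```python
import numpy as np

def select_pretraining_samples(unlabeled_feats, target_feats, k):
    """Rank unlabeled samples by cosine similarity to the mean
    target-task feature and keep the top-k for pre-training.
    (Illustrative selection rule; the paper's criterion may differ.)"""
    centroid = target_feats.mean(axis=0)
    centroid /= np.linalg.norm(centroid)
    normed = unlabeled_feats / np.linalg.norm(
        unlabeled_feats, axis=1, keepdims=True)
    scores = normed @ centroid            # cosine similarity per sample
    return np.argsort(-scores)[:k]        # indices of the k closest samples
```

The selected indices can then feed any self-supervised objective; the point is only that the pre-training pool is filtered toward the target distribution.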
The proposed UDG not only enriches the model's semantic knowledge by exploiting unlabeled data in an unsupervised manner, but also distinguishes ID from OOD samples, enhancing ID classification and OOD detection simultaneously.
In this paper, we propose to label only the most representative samples to expand the labeled set.
VSGraph-LC first selects anchors according to the semantic similarity between metadata and the correct label concepts, and then propagates correct labels from these anchors over a visual graph using a graph neural network (GNN).
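The propagation step can be illustrated with a much simpler stand-in: repeated neighborhood averaging on the visual graph with anchor labels clamped. This replaces the learned GNN with plain graph smoothing, so it is a sketch of the idea, not VSGraph-LC itself; all names below are assumed for illustration.

```python
import numpy as np

def propagate_labels(adj, anchor_labels, num_classes, steps=10):
    """Spread one-hot anchor labels over a visual graph by repeated
    neighborhood averaging (a simplified stand-in for the learned GNN).
    `adj` is a symmetric adjacency matrix; `anchor_labels` maps a node
    index to its trusted class, clamped at every step."""
    n = adj.shape[0]
    deg = adj.sum(axis=1, keepdims=True)
    deg[deg == 0] = 1.0
    norm_adj = adj / deg                       # row-normalized adjacency
    y = np.zeros((n, num_classes))
    for i, label in anchor_labels.items():
        y[i, label] = 1.0
    anchor_idx = list(anchor_labels)
    clamp = y.copy()
    for _ in range(steps):
        y = norm_adj @ y                       # average neighbor beliefs
        y[anchor_idx] = clamp[anchor_idx]      # anchors keep their labels
    return y.argmax(axis=1)
```

In the paper the propagation is learned end-to-end; the fixed averaging above only shows how anchor labels can flow along visual-similarity edges.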
We therefore propose a simple yet effective WSL framework.
Second, to alleviate boundary artifacts of the warped clothes and make the results more realistic, we employ a Try-On Module that learns a composition mask to integrate the warped clothes with the rendered image, ensuring smoothness.
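The composition step itself is a per-pixel convex blend: where the mask is close to 1 the warped cloth dominates, and where it is close to 0 the rendered person image shows through. The sketch below takes the mask as an input rather than learning it, so it is only an assumed illustration of the blending equation.

```python
import numpy as np

def try_on_compose(warped_cloth, rendered, mask):
    """Blend the warped clothing image with the rendered person image
    using a composition mask in [0, 1] (learned in the actual module,
    supplied directly here): 1 selects the cloth, 0 the rendered image."""
    return mask * warped_cloth + (1.0 - mask) * rendered
```

Smoothness comes from the mask taking intermediate values near clothing boundaries, so the two images cross-fade instead of meeting at a hard seam.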
We take advantage of the successful fully convolutional network (FCN) architecture from the field of semantic segmentation.