A Strong Baseline for Fashion Retrieval with Person Re-Identification Models

9 Mar 2020  ·  Mikolaj Wieczorek, Andrzej Michalowski, Anna Wroblewska, Jacek Dabrowski ·

Fashion retrieval is the challenging task of finding an exact match for fashion items contained within an image. Difficulties arise from the fine-grained nature of clothing items and their very large intra-class and inter-class variance. Additionally, query and source images for the task usually come from different domains: street photos and catalogue photos, respectively. Due to these differences, a significant gap in quality, lighting, contrast, background clutter and item presentation exists between domains. As a result, fashion retrieval is an active field of research in both academia and industry. Inspired by recent advancements in Person Re-Identification research, we adapt leading ReID models for use in fashion retrieval tasks. We introduce a simple baseline model for fashion retrieval that significantly outperforms previous state-of-the-art results despite a much simpler architecture. We conduct in-depth experiments on the Street2Shop and DeepFashion datasets and validate our results. Finally, we propose a cross-domain (cross-dataset) evaluation method to test the robustness of fashion retrieval models.
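At inference time, ReID-style retrieval models reduce to embedding every image with a CNN backbone and ranking the catalogue (gallery) by similarity to the query. The sketch below illustrates that ranking step only, using random vectors as stand-in embeddings; in the paper's setting, the features would come from a backbone such as ResNet50-IBN-A, which is not reproduced here.

```python
import numpy as np

def retrieve(query_emb, gallery_embs, k=5):
    """Rank gallery items by cosine similarity to the query embedding."""
    q = query_emb / np.linalg.norm(query_emb)
    g = gallery_embs / np.linalg.norm(gallery_embs, axis=1, keepdims=True)
    sims = g @ q                     # cosine similarity to each gallery item
    return np.argsort(-sims)[:k]    # indices of the top-k most similar items

# Stand-in embeddings; a real system would extract these with a CNN backbone.
rng = np.random.default_rng(0)
gallery = rng.normal(size=(1000, 512))             # 1000 catalogue items, 512-dim
query = gallery[42] + 0.05 * rng.normal(size=512)  # a noisy "street" view of item 42
print(retrieve(query, gallery, k=5))               # item 42 should rank first
```

Cross-domain evaluation in this framing simply means the query embeddings come from one dataset's street photos while the gallery comes from another dataset's catalogue photos.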


Results from the Paper

| Task | Dataset | Model | Metric | Value | Global Rank |
|------|---------|-------|--------|-------|-------------|
| Image Retrieval | DeepFashion - Consumer-to-shop | RST Model (ResNet50-IBN-A, 320x320) | mAP | 43.0 | # 2 |
| | | | Rank-1 | 37.8 | # 1 |
| | | | Rank-10 | 71.1 | # 2 |
| | | | Rank-20 | 77.2 | # 2 |
| | | | Rank-50 | 84.1 | # 2 |
| Image Retrieval | Exact Street2Shop | RST Model (ResNet50-IBN-A, 320x320) | mAP | 46.8 | # 3 |
| | | | Rank-1 | 53.7 | # 1 |
| | | | Rank-10 | 69.8 | # 2 |
| | | | Rank-20 | 73.6 | # 2 |
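The metrics in the table are the standard retrieval measures: Rank-k is 1 when a correct match appears among the top k retrieved items, and mAP averages per-query average precision over all queries. A minimal sketch of both, on a toy ranking rather than the paper's data:

```python
def rank_k_accuracy(ranked_ids, true_ids, k):
    """1 if any correct match appears in the top-k retrieved items, else 0."""
    return int(any(item in true_ids for item in ranked_ids[:k]))

def average_precision(ranked_ids, true_ids):
    """AP for one query: mean of precision at each rank where a match occurs."""
    hits, precisions = 0, []
    for rank, item in enumerate(ranked_ids, start=1):
        if item in true_ids:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / len(precisions) if precisions else 0.0

# Toy example with two queries; mAP is the mean of the per-query APs.
ap1 = average_precision(['a', 'b', 'c'], {'a', 'c'})  # matches at ranks 1 and 3
ap2 = average_precision(['x', 'y', 'z'], {'y'})       # match at rank 2
print((ap1 + ap2) / 2)
```

Reported values such as "mAP 43.0" are this quantity expressed as a percentage over the full query set.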

