CurlingNet: Compositional Learning between Images and Text for Fashion IQ Data

27 Mar 2020 · Youngjae Yu, Seunghwan Lee, Yuncheol Choi, Gunhee Kim

We present CurlingNet, an approach that measures the semantic distance between the composition of an image-text query and candidate images in a joint embedding space. To learn an effective image-text composition for data in the fashion domain, our model introduces two key components. First, the Delivery component applies a transition to the source image's embedding in the embedding space. Second, the Sweeping component emphasizes query-related components of fashion image embeddings, implemented with a channel-wise gating mechanism. Our single model outperforms previous state-of-the-art image-text composition models, including TIRG and FiLM. We participated in the first Fashion-IQ challenge at ICCV 2019, where an ensemble of our models achieved one of the best performances.
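As a rough illustration of how such a two-stage composition with channel-wise gating might look, the sketch below is a minimal, hypothetical PyTorch implementation. The module and layer names (CurlingComposition, delivery, sweeping) and the specific layer sizes are assumptions for illustration, not the authors' published code.

```python
import torch
import torch.nn as nn

class CurlingComposition(nn.Module):
    """Hypothetical sketch of a Delivery + Sweeping composition.

    Delivery: shifts the source-image embedding by a text-conditioned
    residual (a "transition" in the embedding space).
    Sweeping: a channel-wise sigmoid gate, computed from the text,
    that re-weights query-related channels of the image embedding.
    """

    def __init__(self, dim: int):
        super().__init__()
        self.delivery = nn.Sequential(
            nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, dim)
        )
        self.sweeping = nn.Sequential(nn.Linear(dim, dim), nn.Sigmoid())

    def forward(self, img: torch.Tensor, txt: torch.Tensor) -> torch.Tensor:
        # Delivery: text-conditioned translation of the source image embedding.
        shifted = img + self.delivery(torch.cat([img, txt], dim=-1))
        # Sweeping: channel-wise gate emphasizing query-related channels.
        gate = self.sweeping(txt)
        return gate * shifted

if __name__ == "__main__":
    comp = CurlingComposition(dim=512)
    img, txt = torch.randn(4, 512), torch.randn(4, 512)
    composed = comp(img, txt)  # (4, 512) composed query embedding
    print(composed.shape)
```

Candidate target images would then be embedded with the same image encoder and ranked by their similarity to this composed query embedding.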


Results from the Paper


Task:          Image Retrieval
Dataset:       Fashion IQ
Model:         CurlingNet
Metric:        (Recall@10 + Recall@50) / 2
Metric Value:  38.45
Global Rank:   #16
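For reference, a minimal sketch of how this challenge metric could be computed from retrieval results, assuming ranks holds the 0-indexed position of each query's ground-truth target; the helper names recall_at_k and fashion_iq_score are hypothetical:

```python
import numpy as np

def recall_at_k(ranks: np.ndarray, k: int) -> float:
    """Fraction of queries whose ground-truth target is ranked in the top k."""
    return float(np.mean(ranks < k))

def fashion_iq_score(ranks: np.ndarray) -> float:
    """Challenge metric: average of Recall@10 and Recall@50, as a percentage."""
    return 100.0 * (recall_at_k(ranks, 10) + recall_at_k(ranks, 50)) / 2.0
```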
