Neural Fashion Image Captioning: Accounting for Data Diversity

23 Jun 2021 · Gilles Hacheme, Noureini Sayouti

Image captioning has an increasingly broad range of applications, and fashion is no exception. Automatic item descriptions are of great interest to fashion web platforms, which sometimes host hundreds of thousands of images. This paper is one of the first to tackle image captioning for fashion images. To address dataset diversity issues, we introduce the InFashAIv1 dataset, containing almost 16,000 African fashion item images with their titles, prices, and general descriptions. We also use the well-known DeepFashion dataset alongside InFashAIv1. Captions are generated with the Show and Tell model, composed of a CNN encoder and an RNN decoder. We show that jointly training the model on both datasets improves caption quality for African-style fashion images, suggesting transfer learning from Western-style data. The InFashAIv1 dataset is released on GitHub to encourage work with greater diversity inclusion.
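The architecture named in the abstract follows the Show and Tell recipe: a CNN encodes the image into a feature vector that seeds an RNN language model, which then emits the caption token by token. The sketch below illustrates that encoder-decoder wiring in PyTorch. It is a minimal illustration, not the paper's released code; the ResNet-18 backbone, the LSTM decoder, and every hyperparameter (vocab_size, embed_dim, hidden_dim) are stand-in assumptions.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class ShowAndTellCaptioner(nn.Module):
    """CNN encoder + RNN decoder in the spirit of Show and Tell
    (Vinyals et al., 2015). Backbone and sizes are illustrative."""

    def __init__(self, vocab_size, embed_dim=256, hidden_dim=512):
        super().__init__()
        # Encoder: a CNN whose final classifier is replaced by a
        # linear projection into the word-embedding space.
        cnn = models.resnet18(weights=None)  # pretrained weights omitted here
        cnn.fc = nn.Linear(cnn.fc.in_features, embed_dim)
        self.encoder = cnn
        # Decoder: word embeddings fed to an LSTM, then a vocab projection.
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, images, captions):
        # The image feature acts as the first "word" of the sequence.
        feats = self.encoder(images).unsqueeze(1)   # (B, 1, E)
        words = self.embed(captions[:, :-1])        # teacher forcing: shift right
        seq = torch.cat([feats, words], dim=1)      # (B, T, E)
        hidden, _ = self.lstm(seq)
        return self.out(hidden)                     # logits over vocab, (B, T, V)

# Usage with dummy data:
model = ShowAndTellCaptioner(vocab_size=5000)
images = torch.randn(2, 3, 224, 224)
captions = torch.randint(0, 5000, (2, 12))
logits = model(images, captions)
print(logits.shape)  # torch.Size([2, 12, 5000])
```

Joint training on both datasets, as the paper proposes, would amount to drawing mini-batches from the union of DeepFashion and InFashAIv1 while optimizing the usual cross-entropy loss over the predicted tokens.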


Datasets


Introduced in the Paper:

InFashAI

Used in the Paper:

DeepFashion


Methods


Show and Tell (CNN encoder, RNN decoder)