Neural Fashion Image Captioning: Accounting for Data Diversity

Image captioning has increasingly broad domains of application, and fashion is no exception. Automatic item descriptions are of great interest to fashion web platforms, which sometimes host hundreds of thousands of images. This paper is one of the first to tackle image captioning for fashion images. To address dataset diversity issues, we introduced the InFashAIv1 dataset, containing almost 16,000 African fashion item images together with their titles, prices, and general descriptions. We also used the well-known DeepFashion dataset in addition to InFashAIv1. Captions are generated with the Show and Tell model, which consists of a CNN encoder and an RNN decoder. We showed that jointly training the model on both datasets improves caption quality for African-style fashion images, suggesting transfer learning from Western-style data. The InFashAIv1 dataset is released on GitHub to encourage work with greater diversity inclusion.
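To make the architecture concrete, below is a minimal sketch of a Show-and-Tell style captioner: a CNN encoder that maps an image to a fixed-size embedding, and an LSTM decoder that receives this embedding as its first input and then predicts caption tokens. The toy convolutional encoder, the hyperparameters, and the training helper are illustrative assumptions, not the authors' exact configuration (the original Show and Tell model uses a pretrained CNN such as Inception as the encoder).

```python
# Sketch of a Show-and-Tell style captioner (CNN encoder + RNN decoder).
# Dimensions, the toy encoder, and the training helper are assumptions for
# illustration, not the paper's exact implementation.
import torch
import torch.nn as nn


class EncoderCNN(nn.Module):
    """Toy convolutional encoder producing a fixed-size image embedding."""
    def __init__(self, embed_dim=256):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, embed_dim)

    def forward(self, images):                   # images: (B, 3, H, W)
        x = self.features(images).flatten(1)     # (B, 64)
        return self.fc(x)                        # (B, embed_dim)


class DecoderRNN(nn.Module):
    """LSTM decoder conditioned on the image embedding as its first input."""
    def __init__(self, vocab_size, embed_dim=256, hidden_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, image_emb, captions):      # captions: (B, T) token ids
        tokens = self.embed(captions)                            # (B, T, E)
        inputs = torch.cat([image_emb.unsqueeze(1), tokens], 1)  # prepend image
        hidden, _ = self.lstm(inputs)
        return self.out(hidden)                                  # (B, T+1, vocab)


def train_step(encoder, decoder, images, captions, optimizer, pad_idx=0):
    """One teacher-forced step: predict each token from the image and the prefix."""
    logits = decoder(encoder(images), captions[:, :-1])
    loss = nn.functional.cross_entropy(
        logits.reshape(-1, logits.size(-1)),
        captions.reshape(-1),
        ignore_index=pad_idx,
    )
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Under this reading, joint training on DeepFashion and InFashAIv1 simply means mixing (image, caption) pairs from both datasets into the same batches fed to `train_step`, so the decoder shares its language model across Western-style and African-style items.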
