Recipe recognition with large multimodal food dataset

This paper addresses automatic recipe recognition from images. To this end, we compare and evaluate leading vision-based and text-based techniques on a new, very large multimodal dataset (UPMC Food-101) containing about 100,000 recipes across 101 food categories. Each item in the dataset is represented by one image plus textual information. We report extensive recipe-recognition experiments on our dataset using visual information, textual information, and their fusion. We also present experiments with text-based embeddings that represent any food-related word in a continuous semantic space. In addition, we compare our dataset with its twin provided by ETHZ: we revisit the data collection protocols and carry out transfer learning experiments to highlight similarities and differences between the two datasets. Finally, we propose a practical application that lets everyday users identify recipes: a web search engine that allows any mobile device to send a query image and retrieve the most relevant recipes from our dataset.
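
The abstract describes recognizing recipes from visual information, textual information, and their fusion. The short Python sketch below illustrates one standard way to combine the two modalities, namely late fusion of per-class probability scores. The class count (101) comes from the abstract; the equal fusion weight, function names, and random stand-in scores are illustrative assumptions, not the paper's actual pipeline.

# Minimal late-fusion sketch: combine class probabilities from a visual model
# and a textual model by a weighted average. All names and the fusion weight
# are illustrative; the paper's actual classifiers and weights may differ.
import numpy as np

NUM_CLASSES = 101  # UPMC Food-101 covers 101 food categories


def late_fusion(p_visual: np.ndarray, p_text: np.ndarray, alpha: float = 0.5) -> np.ndarray:
    """Weighted average of two per-class probability vectors (each sums to 1)."""
    assert p_visual.shape == p_text.shape == (NUM_CLASSES,)
    return alpha * p_visual + (1.0 - alpha) * p_text


# Toy example with random scores standing in for real classifier outputs.
rng = np.random.default_rng(0)
p_img = rng.dirichlet(np.ones(NUM_CLASSES))  # e.g. softmax output of an image classifier
p_txt = rng.dirichlet(np.ones(NUM_CLASSES))  # e.g. classifier over text features (TF-IDF, embeddings, ...)
fused = late_fusion(p_img, p_txt, alpha=0.5)
print("predicted category index:", int(np.argmax(fused)))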

Datasets

UPMC Food-101, Food-101 (ETHZ)