Classifying cooking object's state using a tuned VGG convolutional neural network

23 May 2018 · Rahul Paul

In robotics, knowing object states and recognizing desired states are very important. Objects in different states require different grasping, and reaching a desired state requires different manipulations as well as different grasps. To analyze objects in different states, a dataset of cooking objects was created. Cooking involves various cutting techniques needed for different dishes (e.g., diced, julienne), and identifying each of these states can sometimes be difficult even for humans. In this paper, we analyze seven different cooking object states by fine-tuning a convolutional neural network (CNN). For this task, images were downloaded and annotated by students and divided into a training set and a completely separate test set. By fine-tuning the VGG-16 CNN, 77% accuracy was obtained. The work presented in this paper focuses on classification among various object states rather than task recognition or recipe prediction. This framework can easily be adapted to any other object state classification task.
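The sketch below illustrates the general approach the abstract describes: taking an ImageNet-pretrained VGG-16, replacing its classifier head with one for seven state classes, and fine-tuning it on the annotated images. The framework (tf.keras), layer sizes, learning rate, epoch count, and the `train/`/`test/` directory layout are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of fine-tuning VGG-16 for 7-class cooking-state classification.
# Hyperparameters and data layout are assumptions for illustration only.
import tensorflow as tf
from tensorflow.keras import layers

NUM_STATES = 7  # e.g. diced, julienne, sliced, ...

# VGG-16 pretrained on ImageNet, without its original fully connected head.
base = tf.keras.applications.VGG16(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # freeze convolutional features for the first training phase

# New classification head for the cooking-state labels.
inputs = tf.keras.Input(shape=(224, 224, 3))
x = tf.keras.applications.vgg16.preprocess_input(inputs)
x = base(x, training=False)
x = layers.Flatten()(x)
x = layers.Dense(256, activation="relu")(x)
x = layers.Dropout(0.5)(x)
outputs = layers.Dense(NUM_STATES, activation="softmax")(x)
model = tf.keras.Model(inputs, outputs)

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
    loss="categorical_crossentropy",
    metrics=["accuracy"],
)

# Assumed layout: one sub-folder per state under train/ and test/.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "train", image_size=(224, 224), batch_size=32, label_mode="categorical")
test_ds = tf.keras.utils.image_dataset_from_directory(
    "test", image_size=(224, 224), batch_size=32, label_mode="categorical")

model.fit(train_ds, validation_data=test_ds, epochs=10)
model.evaluate(test_ds)
```

After this frozen-backbone phase, one would typically unfreeze the top convolutional blocks and continue training at a lower learning rate; whether the authors did so is not stated in the abstract.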
