One-Shot Concept Learning by Simulating Evolutionary Instinct Development

27 Aug 2017 · Abrar Ahmed, Anish Bikmal

Object recognition has recently become a central task in machine learning and computer vision. The dominant approach uses deep convolutional neural networks (CNNs), which learn the pixel patterns of objects implicitly through backpropagation. However, CNNs require thousands of examples to generalize successfully and often demand heavy computing resources for training, which is sluggish compared to the human ability to learn a new category from a single example. Moreover, the representations CNNs learn are difficult to modify programmatically or interpret intuitively. We propose a computational model that can learn an object category from as few as one example and whose learning style can be tailored explicitly to a scenario. Our model decomposes each image into two attributes: shape and color distribution. It then uses a Bayesian criterion to compute the conditional probability that the object belongs to each learned category, weighting each attribute by its importance. Because the only requirement our model imposes is the ability to retrieve and construct individual attributes such as shape and color, it applies to essentially any class of visual objects, and it extends beyond visual scenarios to a broader scope of situations, such as Natural Language Processing, wherever individual attributes can be retrieved and constructed.
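The attribute-weighted Bayesian scoring the abstract describes can be sketched as follows. This is a minimal illustrative toy, not the paper's implementation: the match/mismatch likelihoods, the specific importance weights, and the prototype dictionary are all assumptions made for the example.

```python
import math

def classify(attributes, categories, likelihoods, weights, priors):
    """Return the category with the highest importance-weighted log-posterior."""
    def score(cat):
        # Start from the category prior.
        log_p = math.log(priors[cat])
        # Each attribute (e.g. shape, color) contributes a likelihood term
        # scaled by its importance weight.
        for name, value in attributes.items():
            log_p += weights[name] * math.log(likelihoods[name](value, cat))
        return log_p
    return max(categories, key=score)

# Toy one-shot setup: a single stored example per category defines its likelihoods.
prototypes = {
    "apple":  {"shape": "round",  "color": "red"},
    "banana": {"shape": "curved", "color": "yellow"},
}

def attribute_likelihood(name):
    # A crude match/mismatch likelihood; a real system would compare
    # shape descriptors and color distributions instead of string equality.
    def f(value, cat):
        return 0.9 if prototypes[cat][name] == value else 0.1
    return f

likelihoods = {"shape": attribute_likelihood("shape"),
               "color": attribute_likelihood("color")}
weights = {"shape": 0.7, "color": 0.3}   # shape treated as more important
priors  = {"apple": 0.5, "banana": 0.5}

print(classify({"shape": "round", "color": "red"},
               ["apple", "banana"], likelihoods, weights, priors))  # apple
```

Because each attribute enters the score independently with its own weight, the model's "learning style" can be tuned explicitly for a scenario, for instance by raising the shape weight when color is unreliable.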
