Robots require knowledge about objects in order to perform various household
tasks efficiently. Existing knowledge bases for robots acquire symbolic
knowledge about objects from manually coded external common-sense knowledge
bases such as ConceptNet and WordNet.
The problem with
such approaches is the discrepancy between human-centric symbolic knowledge and
robot-centric object perception, which stems from the robot's limited perception
capabilities. As a result, a significant portion of the knowledge in the
knowledge base remains ungrounded in the robot's perception. To overcome this
discrepancy, we propose an
approach to enable robots to generate robot-centric symbolic knowledge about
objects from their own sensory data, thus allowing them to build their own
conceptual understanding of objects. With this goal in mind, this paper
elaborates on our work in progress on the proposed approach and presents
preliminary results.