Learning Human Activities and Object Affordances from RGB-D Videos

4 Oct 2012  ·  Hema Swetha Koppula, Rudhir Gupta, Ashutosh Saxena ·

Understanding human activities and inferring object affordances are two important skills, especially for personal robots operating in human environments. In this work, we consider the problem of extracting a descriptive labeling of the sequence of sub-activities being performed by a human and, more importantly, of the human's interactions with objects in the form of associated affordances. Given an RGB-D video, we jointly model the human activities and object affordances as a Markov random field, where the nodes represent objects and sub-activities, and the edges represent the relationships between object affordances, their relations with sub-activities, and their evolution over time. We formulate the learning problem using a structural support vector machine (SSVM) approach, where labelings over various alternate temporal segmentations are considered as latent variables. We tested our method on a challenging dataset comprising 120 activity videos collected from 4 subjects, and obtained an accuracy of 79.4% for affordance, 63.4% for sub-activity, and 75.0% for high-level activity labeling. We then demonstrate the use of such descriptive labeling in performing assistive tasks with a PR2 robot.
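To make the joint-labeling idea concrete, below is a minimal toy sketch of the kind of MRF scoring the abstract describes: object nodes take affordance labels, a segment node takes a sub-activity label, and pairwise terms couple affordances to the sub-activity. All label sets, features, and weights here are illustrative placeholders, not the paper's actual features or learned SSVM parameters, and inference is done by brute-force enumeration rather than the paper's method.

```python
import itertools

# Toy label sets (illustrative; the paper uses richer affordance and
# sub-activity vocabularies learned from the CAD-120 dataset).
AFFORDANCES = ["reachable", "movable", "stationary"]
SUB_ACTIVITIES = ["reaching", "moving", "null"]

def segment_score(obj_feats, act_feat, obj_labels, act_label,
                  w_obj, w_act, w_pair):
    """Score one joint labeling of a temporal segment:
    sub-activity node term + per-object node terms
    + object-affordance/sub-activity edge terms."""
    score = w_act.get((act_label, act_feat), 0.0)
    for feat, lab in zip(obj_feats, obj_labels):
        score += w_obj.get((lab, feat), 0.0)       # object node potential
        score += w_pair.get((lab, act_label), 0.0) # affordance-activity edge
    return score

def map_inference(obj_feats, act_feat, w_obj, w_act, w_pair):
    """Exhaustive argmax over joint labelings (fine at toy sizes only)."""
    best, best_score = None, float("-inf")
    for act in SUB_ACTIVITIES:
        for labs in itertools.product(AFFORDANCES, repeat=len(obj_feats)):
            s = segment_score(obj_feats, act_feat, labs, act,
                              w_obj, w_act, w_pair)
            if s > best_score:
                best, best_score = (labs, act), s
    return best, best_score
```

With hand-set weights that reward "movable" objects during a "moving" sub-activity, inference jointly picks the compatible pair of labels, which is the point of modeling affordances and sub-activities together rather than independently.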



Benchmark result on the dataset introduced in the paper:

Task: Skeleton Based Action Recognition
Dataset: CAD-120
Model: KGS
Metric: Accuracy = 86.0%
Global Rank: #3

