AROS: Affordance Recognition with One-Shot Human Stances

21 Oct 2022 · Abel Pacheco-Ortega, Walterio Mayol-Cuevas

We present AROS, a one-shot learning approach that uses an explicit representation of interactions between highly articulated human poses and 3D scenes. The approach is one-shot in that no re-training is required to add new affordance instances; only one or a small handful of examples of the target pose are needed to describe the interaction. Given a 3D mesh of a previously unseen scene, we can predict affordance locations that support the interactions and generate corresponding articulated 3D human bodies around them. We evaluate on three public datasets of scans of real environments with varying degrees of noise. Via rigorous statistical analysis of crowdsourced evaluations, results show that our one-shot approach outperforms data-intensive baselines by up to 80%.
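The abstract does not detail the interaction representation, but the core idea of placing an example human pose into an unseen scene mesh can be illustrated with a minimal sketch: slide the pose over the scene and keep placements where body vertices are in near-contact with geometry but do not interpenetrate it. This is not the paper's implementation; the file names (`scene.ply`, `pose_example.ply`), thresholds, and the coarse grid search are illustrative assumptions.

```python
# Minimal sketch (not AROS itself) of pose-conditioned affordance placement:
# keep candidate placements of an example human body mesh that are supported
# by, but do not interpenetrate, a previously unseen scene mesh.
import numpy as np
import trimesh

scene = trimesh.load("scene.ply")         # assumed: 3D scan of the scene
pose = trimesh.load("pose_example.ply")   # assumed: one example of the target stance

def placement_score(translation, penetration_tol=0.02, contact_tol=0.05):
    """Score one candidate placement of the pose in the scene.

    Returns -inf if any body vertex penetrates the scene geometry; otherwise
    the number of body vertices within `contact_tol` of the scene surface.
    """
    verts = pose.vertices + translation
    # Signed distance to the scene surface: positive values lie inside the scene.
    d = trimesh.proximity.signed_distance(scene, verts)
    if np.any(d > penetration_tol):       # body intersects the scene -> reject
        return -np.inf
    return int(np.sum(np.abs(d) < contact_tol))

# Coarse, illustrative search: translate the pose over a 2D grid at floor height,
# assuming the example pose was authored with its feet at z = 0.
lo, hi = scene.bounds
candidates = [np.array([x, y, lo[2]])
              for x in np.linspace(lo[0], hi[0], 20)
              for y in np.linspace(lo[1], hi[1], 20)]
best = max(candidates, key=placement_score)
print("best placement:", best, "score:", placement_score(best))
```

A real system would search over orientation as well and use a far richer compatibility measure; the sketch only conveys the contact-versus-penetration trade-off that makes a placement plausible.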
