We propose a new 2D pose refinement network that learns to predict the bias in the estimated 2D human pose.
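As a rough illustration of the idea, a bias-predicting refiner can be a small residual network on the 2D keypoints. The PyTorch sketch below assumes a 17-joint skeleton and illustrative layer sizes; it is not the paper's architecture:

```python
# Minimal sketch of a bias-predicting 2D pose refiner (assumed design,
# not the paper's architecture). The network predicts a per-joint bias
# that is subtracted from the initial 2D estimate.
import torch
import torch.nn as nn

class PoseRefiner(nn.Module):
    def __init__(self, num_joints: int = 17, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_joints * 2, hidden),
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, num_joints * 2),  # predicted per-joint bias
        )

    def forward(self, pose_2d: torch.Tensor) -> torch.Tensor:
        # pose_2d: (batch, num_joints, 2) estimated 2D keypoints
        b, j, _ = pose_2d.shape
        bias = self.net(pose_2d.reshape(b, -1)).reshape(b, j, 2)
        return pose_2d - bias  # subtract the predicted bias to refine

refined = PoseRefiner()(torch.randn(4, 17, 2))
```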
To this end, we created datasets of first-person grasping-hand images labeled with grasp types and object names, and tested a recognition pipeline leveraging object affordance.
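A toy sketch of how object affordance can constrain grasp-type recognition; the object-to-grasp table, grasp names, and scores below are hypothetical, not the dataset's labels:

```python
# Hypothetical affordance table: which grasp types an object plausibly affords.
AFFORDANCE = {
    "mug":   {"cylindrical", "hook"},
    "knife": {"lateral", "power"},
    "coin":  {"precision"},
}

def recognize_grasp(scores: dict, object_name: str) -> str:
    """Keep only grasp types afforded by the detected object, then take the argmax."""
    allowed = AFFORDANCE.get(object_name, set(scores))
    feasible = {g: s for g, s in scores.items() if g in allowed} or scores
    return max(feasible, key=feasible.get)

# The affordance prior overrides the raw top score for a mug:
print(recognize_grasp({"cylindrical": 0.4, "precision": 0.5}, "mug"))  # cylindrical
```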
We propose a Learning-from-Observation framework that segments and interprets a video of a human demonstration, together with its verbal instructions, to extract accurate action sequences.
In the context of one-shot robot teaching, the contributions of the paper are a framework that 1) covers various tasks in the grasp-manipulation-release class of household operations and 2) mimics human postures during the operations.
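To make the output concrete, below is a minimal sketch of how an extracted grasp-manipulation-release sequence could be represented; the skill names, fields, and timings are hypothetical placeholders, not the paper's actual task model:

```python
# Hypothetical representation of an action sequence extracted from one
# demonstration video; field names and skills are illustrative only.
from dataclasses import dataclass

@dataclass
class Action:
    skill: str        # e.g. "grasp", "pick", "place", "release"
    target: str       # object referred to in the verbal instruction
    start_s: float    # clip boundaries found by video segmentation
    end_s: float

demo = [
    Action("grasp",   "cup", 0.0, 1.2),
    Action("pick",    "cup", 1.2, 2.0),
    Action("place",   "cup", 2.0, 3.5),
    Action("release", "cup", 3.5, 4.0),
]
```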
For full-body reconstruction with loose clothes, we propose to use lower-dimensional embeddings of texture and deformation, referred to as eigen-texturing and eigen-deformation, to reproduce views of even unobserved surfaces.
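Eigen-texture-style embeddings are typically built with PCA; the NumPy sketch below illustrates the general idea under that assumption (the sample data, dimensionality k, and helper names are illustrative; eigen-deformation would apply the same decomposition to per-vertex displacements):

```python
# PCA-style low-dimensional embedding of flattened textures (a sketch of
# the eigen-texture idea under stated assumptions, not the paper's code).
import numpy as np

def fit_eigenbasis(samples: np.ndarray, k: int):
    """samples: (n_frames, dim) flattened textures. Returns mean + top-k basis."""
    mean = samples.mean(axis=0)
    _, _, vt = np.linalg.svd(samples - mean, full_matrices=False)
    return mean, vt[:k]                              # (dim,), (k, dim)

def embed(x, mean, basis):       return (x - mean) @ basis.T  # (k,) coefficients
def reconstruct(c, mean, basis): return mean + c @ basis      # back to texture space

textures = np.random.rand(100, 64 * 64 * 3)   # toy stand-in for observed textures
mean, basis = fit_eigenbasis(textures, k=8)
coeffs = embed(textures[0], mean, basis)
approx = reconstruct(coeffs, mean, basis)
```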
In the experiments, we verify the parameters obtained by the two types of offline calibration, chosen according to the robot's degrees of freedom, and validate the effectiveness of the online correction method by plotting the localization error during intense robot motion.
Digitally unwrapping images of paper sheets is crucial for accurate document scanning and text recognition.
First, we narrow down the candidates for the zenith angle using the degree of polarization, and then determine it from the intensity of the thin film, which increases monotonically with the zenith angle.
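A schematic sketch of this two-step lookup, assuming precomputed model curves for the degree of polarization and the thin-film intensity; both curves below are placeholders standing in for the physical models:

```python
# Two-step zenith-angle estimation (schematic; placeholder model curves).
import numpy as np

thetas = np.linspace(0.0, np.pi / 2, 2000)
dop_model = np.sin(2 * thetas)              # non-monotonic -> ambiguous on its own
intensity_model = thetas / (np.pi / 2)      # monotonic in the zenith angle

def estimate_zenith(dop_obs: float, intensity_obs: float) -> float:
    # Step 1: the degree of polarization narrows theta to a candidate set.
    mask = np.abs(dop_model - dop_obs) < 1e-2
    cand_theta, cand_int = thetas[mask], intensity_model[mask]
    # Step 2: the monotonic thin-film intensity selects a unique candidate.
    return float(cand_theta[np.argmin(np.abs(cand_int - intensity_obs))])

print(estimate_zenith(dop_obs=0.8, intensity_obs=0.7))  # ~1.11 rad, not ~0.46
```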
Our feature description consists of two steps: 1) we normalize the detected local regions to canonical shapes for robust matching; 2) we encode each keypoint with multiple vectors at different Morse function values.
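As a loose illustration of step 2 only (the scalar field, level values, and statistics below are stand-ins, not the paper's descriptor), one vector can be sampled per function level and the results concatenated:

```python
# Toy multi-level descriptor: one statistics vector per level of a scalar
# field on a canonical patch (illustrative stand-in for the paper's choices).
import numpy as np

def describe(patch: np.ndarray, levels=(0.25, 0.5, 0.75)) -> np.ndarray:
    """patch: canonical (H, W) scalar field (a stand-in Morse function)."""
    vecs = []
    for t in levels:
        sub = patch <= t                      # sublevel set at value t
        ys, xs = np.nonzero(sub)
        if len(xs) == 0:
            vecs.append(np.zeros(4))
            continue
        # Simple shape statistics of the sublevel set: area, centroid, spread.
        vecs.append(np.array([sub.mean(), xs.mean(), ys.mean(), xs.std() + ys.std()]))
    return np.concatenate(vecs)               # one vector per level, stacked

desc = describe(np.random.rand(32, 32))
```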
In this paper, we present an approach to scene understanding that reasons about the physical stability of objects from point clouds.
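One common stability test such reasoning can build on is checking whether the center of mass projects inside the support polygon; the sketch below implements that test under simplifying assumptions (uniform density, a known ground plane, an illustrative contact threshold) and is not the paper's method:

```python
# Static-stability test: is the COM projection inside the support polygon?
import numpy as np
from scipy.spatial import Delaunay

def is_stable(points: np.ndarray, ground_z: float = 0.0, eps: float = 5e-3) -> bool:
    """points: (N, 3) cloud of an object resting on the plane z = ground_z."""
    com_xy = points.mean(axis=0)[:2]                      # uniform-density COM proxy
    contacts = points[points[:, 2] < ground_z + eps, :2]  # near-ground points
    if len(contacts) < 3:
        return False                                      # degenerate support
    support = Delaunay(contacts)                          # triangulates the support hull
    return bool(support.find_simplex(com_xy) >= 0)        # COM over the support polygon

box = np.random.rand(2000, 3) * np.array([0.1, 0.1, 0.2])  # toy box-shaped cloud
print(is_stable(box))                                       # True: COM over its base
```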