Multi-Person Extreme Motion Prediction

Human motion prediction aims to forecast future poses given a sequence of past 3D skeletons. While this problem has recently received increasing attention, it has mostly been tackled for single humans in isolation. In this paper, we explore this problem for humans performing collaborative tasks: we seek to predict the future motion of two interacting persons given sequences of their past skeletons. We propose a novel cross-interaction attention mechanism that exploits the historical information of both persons and learns to predict cross dependencies between the two pose sequences. Since no dataset suited to training on such interactive situations is available, we collected ExPI (Extreme Pose Interaction), a new lab-based person-interaction dataset of professional dancers performing Lindy Hop dancing actions, which contains 115 sequences with 30K frames annotated with 3D body poses and shapes. We thoroughly evaluate our cross-interaction network on ExPI and show that it consistently outperforms state-of-the-art single-person motion prediction methods in both short- and long-term predictions.
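The cross-interaction idea described above can be illustrated with a minimal single-head cross-attention sketch: features derived from person A's past poses act as queries, while person B's features provide the keys and values, so the attention weights model cross dependencies between the two sequences. This is an illustrative simplification, not the paper's architecture; the random projection matrices stand in for learned weights.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(query_seq, key_seq, d_model, rng):
    """Single-head cross-attention: person A's features attend to person B's.

    query_seq: (T, d_model) features of person A's past pose sequence
    key_seq:   (T, d_model) features of person B's past pose sequence
    Returns (T, d_model) features for A conditioned on B's history.
    """
    # Hypothetical random projections standing in for learned weight matrices.
    Wq = rng.standard_normal((d_model, d_model)) / np.sqrt(d_model)
    Wk = rng.standard_normal((d_model, d_model)) / np.sqrt(d_model)
    Wv = rng.standard_normal((d_model, d_model)) / np.sqrt(d_model)

    Q = query_seq @ Wq
    K = key_seq @ Wk
    V = key_seq @ Wv
    scores = Q @ K.T / np.sqrt(d_model)  # (T, T) cross-dependency weights
    return softmax(scores) @ V
```

In a full model this module would sit alongside ordinary self-attention over each person's own history, with the two outputs fused before decoding future poses.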

PDF Abstract CVPR 2022

Datasets


Introduced in the Paper:

ExPI
Task: Multi-Person Pose Forecasting · Dataset: ExPI (common actions split) · Model: XIA
  Average MPJPE (mm) @ 200 ms:  55   (Global Rank #3)
  Average MPJPE (mm) @ 400 ms:  112  (Global Rank #3)
  Average MPJPE (mm) @ 600 ms:  162  (Global Rank #3)
  Average MPJPE (mm) @ 1000 ms: 238  (Global Rank #3)

Task: Multi-Person Pose Forecasting · Dataset: ExPI (unseen actions split) · Model: XIA
  Average MPJPE (mm) @ 400 ms:  121  (Global Rank #2)
  Average MPJPE (mm) @ 600 ms:  174  (Global Rank #2)
  Average MPJPE (mm) @ 800 ms:  218  (Global Rank #2)

Methods


No methods listed for this paper.