Peeking into the Future: Predicting Future Person Activities and Locations in Videos

Deciphering human behavior in videos to predict future paths/trajectories and future activities is important in many applications. Motivated by this idea, this paper studies predicting a pedestrian's future path jointly with future activities. We propose an end-to-end, multi-task learning system utilizing rich visual features about human behavioral information and interaction with their surroundings. To facilitate the training, the network is trained with an auxiliary task of predicting the future location in which the activity will happen. Experimental results demonstrate state-of-the-art performance on two public benchmarks for future trajectory prediction. Moreover, our method is able to produce meaningful future activity predictions in addition to the path. The results provide the first empirical evidence that joint modeling of paths and activities benefits future path prediction.
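The abstract describes a multi-task setup in which trajectory prediction is trained jointly with activity prediction and an auxiliary activity-location task. A common way to combine such objectives is a weighted sum of the per-task losses; the sketch below illustrates that pattern. The function name and the weights `w_act` and `w_loc` are assumptions for illustration, not the paper's exact formulation.

```python
# Minimal sketch of a weighted multi-task objective, assuming three scalar
# per-task losses: trajectory, activity, and auxiliary activity location.
# The weights are hypothetical; the paper does not specify these values here.
def multitask_loss(l_traj: float, l_act: float, l_loc: float,
                   w_act: float = 1.0, w_loc: float = 1.0) -> float:
    # Total loss is the trajectory loss plus weighted auxiliary losses.
    return l_traj + w_act * l_act + w_loc * l_loc
```

In practice the same pattern applies unchanged when the losses are tensors produced by a deep-learning framework, since the combination is a plain weighted sum.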

CVPR 2019


Results from the Paper

Task                    Dataset   Model  Metric     Value   Global Rank
Activity Prediction     ActEV     Next   mAP        0.192   #1
Trajectory Forecasting  ActEV     Next   ADE-8/12   17.99   #2
Trajectory Prediction   ActEV     Next   ADE-8/12   17.99   #3
                                         FDE-8/12   37.24   #4
Trajectory Prediction   ETH/UCY   Next   ADE-8/12   0.46    #16
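The ADE-8/12 and FDE-8/12 metrics in the table follow the standard trajectory-forecasting protocol: observe 8 frames, predict the next 12, then measure Average Displacement Error (mean distance between predicted and ground-truth positions over all predicted frames) and Final Displacement Error (distance at the last predicted frame). A minimal sketch of these two metrics, with the function name chosen here for illustration:

```python
import numpy as np

def ade_fde(pred: np.ndarray, gt: np.ndarray) -> tuple[float, float]:
    """Compute ADE and FDE for one trajectory.

    pred, gt: arrays of shape (T, 2) holding predicted and ground-truth
    (x, y) positions over T predicted frames (T = 12 in the "8/12" setting).
    """
    # Per-frame Euclidean distance between prediction and ground truth.
    dists = np.linalg.norm(pred - gt, axis=1)
    # ADE averages over all frames; FDE takes only the final frame.
    return float(dists.mean()), float(dists[-1])
```

The ActEV numbers are in pixels while ETH/UCY is conventionally evaluated in meters, which is why the values differ by orders of magnitude.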

