HACS: Human Action Clips and Segments Dataset for Recognition and Temporal Localization

ICCV 2019 · Hang Zhao, Antonio Torralba, Lorenzo Torresani, Zhicheng Yan

This paper presents a new large-scale dataset for recognition and temporal localization of human actions collected from Web videos. We refer to it as HACS (Human Action Clips and Segments)...

