AHP (Amodal Human Perception)

Introduced by Zhou et al. in Human De-occlusion: Invisible Perception and Recovery for Humans

The AHP dataset consists of 56,599 images collected from several large-scale instance segmentation and detection datasets, including COCO, VOC (with SBD), LIP, Objects365, and OpenImages. Each image is annotated with a pixel-level segmentation mask of a single integrated human.

The dataset was originally proposed for the task of human de-occlusion.

Data Splits
  • Train: 56,302 images with annotations of integrated humans.
  • Valid: 297 images of synthesized occlusion cases.
  • Test: 56 images of artificial occlusion cases.
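
A minimal PyTorch loading sketch for the image/mask pairs is given below. The directory layout (`root/<split>/images/*.jpg`, `root/<split>/masks/*.png`), the file extensions, and the `AHPDataset` class name are illustrative assumptions, not part of the official release.

```python
import os
from PIL import Image
from torch.utils.data import Dataset


class AHPDataset(Dataset):
    """Sketch of a loader for AHP-style image/mask pairs (directory layout assumed)."""

    def __init__(self, root, split="train", transform=None):
        # Assumed layout: <root>/<split>/images/<name>.jpg and <root>/<split>/masks/<name>.png
        self.image_dir = os.path.join(root, split, "images")
        self.mask_dir = os.path.join(root, split, "masks")
        self.names = sorted(os.path.splitext(f)[0] for f in os.listdir(self.image_dir))
        self.transform = transform

    def __len__(self):
        return len(self.names)

    def __getitem__(self, idx):
        name = self.names[idx]
        image = Image.open(os.path.join(self.image_dir, name + ".jpg")).convert("RGB")
        # Single-channel mask covering the integrated (complete) human
        mask = Image.open(os.path.join(self.mask_dir, name + ".png")).convert("L")
        if self.transform is not None:
            image, mask = self.transform(image, mask)
        return image, mask
```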

License
  • Unknown
