A2D Sentences (Sentences for the Actor-Action Dataset (A2D))

Introduced by Gavrilyuk et al. in Actor and Action Video Segmentation from a Sentence

The Actor-Action Dataset (A2D) by Xu et al. [29] is the largest video dataset for the general actor and action segmentation task. It contains 3,782 YouTube videos with pixel-level labels for actors and their actions. The dataset covers eight different actions, performed by a total of seven actor classes. We follow [29] in splitting the dataset into 3,036 training videos and 746 testing videos.

As we are interested in pixel-level actor and action segmentation from sentences, we augment the videos in A2D with natural language descriptions of what each actor is doing in each video. Following the guidelines set forth in [12], we ask annotators to provide a discriminative referring expression for each actor instance when multiple objects appear in a video. The annotation process resulted in a total of 6,656 sentences, containing 811 different nouns, 225 verbs and 189 adjectives. Our sentences enrich the actor and action pairs from the A2D dataset with finer granularity. For example, the actor adult in A2D may be annotated as man, woman, person or player in our sentences, while the action rolling may also be described as flipping, sliding, moving or running for different actors in different scenarios. Our sentences contain more words on average than those of the ReferIt dataset [12] (7.3 vs. 4.7), even when prepositions, articles and linking verbs are excluded (4.5 vs. 3.6). This makes sense, as our sentences contain a variety of verbs, whereas existing referring expression datasets mostly ignore verbs.
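The average-length statistics above can be reproduced in a few lines. The sketch below is illustrative only: the stopword lists (articles, prepositions, linking verbs) and the example sentence are assumptions, not the authors' exact filtering rules or annotations.

```python
# Sketch: average sentence length, with and without function words.
# The stopword sets below are illustrative assumptions, not the exact
# lists used when computing the 7.3 / 4.5 figures reported above.
ARTICLES = {"a", "an", "the"}
PREPOSITIONS = {"in", "on", "at", "with", "by", "of", "to", "from",
                "over", "under", "near", "behind", "beside", "across"}
LINKING_VERBS = {"is", "are", "was", "were", "be", "being", "been"}
STOPWORDS = ARTICLES | PREPOSITIONS | LINKING_VERBS

def sentence_lengths(sentences):
    """Return (avg words, avg content words) over a list of sentences."""
    total_words = 0
    content_words = 0
    for s in sentences:
        words = s.lower().split()
        total_words += len(words)
        content_words += sum(w not in STOPWORDS for w in words)
    n = len(sentences)
    return total_words / n, content_words / n

# Hypothetical A2D-Sentences-style referring expression:
avg_all, avg_content = sentence_lengths(
    ["a man in a red shirt is rolling on the ground"])
```

Run over the full sentence set, this kind of count yields the dataset-level averages quoted above (7.3 words overall, 4.5 content words).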

License


  • Unknown
