Cross-Attentional Audio-Visual Fusion for Weakly-Supervised Action Localization

Temporally localizing actions in videos is one of the key components of video understanding. Learning from weakly-labelled data is seen as a potential solution for avoiding expensive frame-level annotations. Different from other works, which rely only on the visual modality, we propose to learn richer audio-visual representations for weakly-supervised action localization. First, we propose a multi-stage cross-attention mechanism to collaboratively fuse audio and visual features, which preserves the intra-modal characteristics. Second, to model both foreground and background frames, we construct an open-max classifier that treats the background class as an open set. Third, for precise action localization, we design consistency losses that enforce temporal continuity of the action-class predictions and improve foreground-prediction reliability. Extensive experiments on two publicly available video datasets (AVE and ActivityNet 1.2) show that the proposed method effectively fuses audio and visual modalities and achieves state-of-the-art results for weakly-supervised action localization.
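To make the fusion idea concrete, below is a minimal sketch of a single cross-attention stage between per-snippet audio and visual features. This is not the authors' implementation: the feature dimensions, randomly initialized projections (standing in for learned weights), concatenation-based fusion, and all function names are illustrative assumptions.

```python
# Minimal sketch of one cross-attention fusion stage (illustrative only).
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(query_feats, context_feats, d_k=64, rng=np.random.default_rng(0)):
    """Attend from one modality (query) over the other (context).

    query_feats:   (T, Dq) per-snippet features of the querying modality
    context_feats: (T, Dc) per-snippet features of the other modality
    Returns:       (T, d_k) context summarized for each query snippet.
    """
    Dq, Dc = query_feats.shape[1], context_feats.shape[1]
    # Randomly initialized projections stand in for learned weight matrices.
    W_q = rng.standard_normal((Dq, d_k)) / np.sqrt(Dq)
    W_k = rng.standard_normal((Dc, d_k)) / np.sqrt(Dc)
    W_v = rng.standard_normal((Dc, d_k)) / np.sqrt(Dc)

    Q, K, V = query_feats @ W_q, context_feats @ W_k, context_feats @ W_v
    attn = softmax(Q @ K.T / np.sqrt(d_k), axis=-1)  # (T, T) cross-modal attention weights
    return attn @ V

# Toy usage: T=10 snippets, 128-d visual and 64-d audio features.
visual = np.random.default_rng(1).standard_normal((10, 128))
audio = np.random.default_rng(2).standard_normal((10, 64))

# Each modality attends over the other; concatenating with the original
# features is one plausible way to preserve intra-modal characteristics.
visual_from_audio = cross_attention(visual, audio)
audio_from_visual = cross_attention(audio, visual)
fused_visual = np.concatenate([visual, visual_from_audio], axis=-1)
fused_audio = np.concatenate([audio, audio_from_visual], axis=-1)
print(fused_visual.shape, fused_audio.shape)  # (10, 192) (10, 128)
```

In a multi-stage variant, the fused features of one stage would feed the next stage's queries and contexts; the paper's exact staging and fusion operator may differ from this sketch.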
