ASPnet: Action Segmentation With Shared-Private Representation of Multiple Data Sources

Most state-of-the-art methods for action segmentation rely on a single input modality or on naive fusion of multiple data sources. However, effective fusion of complementary information can strengthen segmentation models, making them more robust to sensor noise and more accurate with smaller training datasets. To improve multimodal representation learning for action segmentation, we propose to disentangle the hidden features of a multi-stream segmentation model into modality-shared components, which contain information common across data sources, and private components, which are specific to each stream; we then use an attention bottleneck to capture long-range temporal dependencies in the data while preserving the disentanglement in consecutive processing layers. Evaluation on the 50 Salads, Breakfast and RARP45 datasets shows that our multimodal approach outperforms different data fusion baselines on both multiview and multimodal data sources, obtaining results competitive with or better than the state of the art. Our model is also more robust to additive sensor noise and can achieve performance on par with strong video baselines even with less training data.
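As a rough illustration of the idea, the PyTorch sketch below shows one way per-modality features could be split into shared and private components and passed through a small attention bottleneck for long-range temporal context. The module names, layer sizes and bottleneck design are assumptions made for illustration only, not the authors' released implementation.

```python
import torch
import torch.nn as nn


class SharedPrivateEncoder(nn.Module):
    """Split one modality's features into a shared and a private component.

    Hypothetical module: ASPnet's actual layer sizes and disentanglement losses
    are not reproduced here.
    """

    def __init__(self, in_dim: int, shared_dim: int, private_dim: int):
        super().__init__()
        self.shared_proj = nn.Linear(in_dim, shared_dim)    # information common to all data sources
        self.private_proj = nn.Linear(in_dim, private_dim)  # modality-specific information

    def forward(self, x: torch.Tensor):
        # x: (batch, time, in_dim) frame-wise features from one stream (e.g. RGB, flow, accelerometer)
        return self.shared_proj(x), self.private_proj(x)


class AttentionBottleneck(nn.Module):
    """A few learned bottleneck tokens attend over the whole sequence and feed a
    global temporal summary back to every frame, leaving per-frame features intact."""

    def __init__(self, dim: int, num_tokens: int = 4, num_heads: int = 4):
        super().__init__()
        self.tokens = nn.Parameter(torch.randn(num_tokens, dim))
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, dim)
        tokens = self.tokens.unsqueeze(0).expand(x.size(0), -1, -1)  # (batch, num_tokens, dim)
        summary, _ = self.attn(tokens, x, x)                         # bottleneck tokens read the sequence
        return x + summary.mean(dim=1, keepdim=True)                 # broadcast global context to all frames


# Example usage on one stream (shapes are illustrative)
rgb = torch.randn(2, 100, 2048)                       # two clips, 100 frames, 2048-d features
shared, private = SharedPrivateEncoder(2048, 256, 256)(rgb)
fused = AttentionBottleneck(256)(shared)              # (2, 100, 256) with global temporal context
```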


Results from the Paper


 Ranked #1 on Action Segmentation on 50 Salads (using extra training data)

Action Segmentation on 50 Salads
Model: Br-Prompt+ASPnet (RGB, flow, accelerometer), uses extra training data
  F1@10%  92.7  (rank #1)
  F1@25%  91.6  (rank #1)
  F1@50%  88.5  (rank #1)
  Edit    87.5  (rank #2)
  Acc     91.4  (rank #1)

Action Segmentation on Breakfast
Model: ASPnet
  F1@10%  78.1  (rank #6)
  F1@25%  72.9  (rank #6)
  F1@50%  60.8  (rank #6)
  Edit    76.3  (rank #9)
  Acc     75.9  (rank #5)

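For reference, the numbers above use the standard action segmentation metrics: frame-wise accuracy (Acc), segmental F1 at IoU thresholds of 10/25/50% (F1@k%), and the segmental edit score. The sketch below is a rough, unofficial reimplementation of frame accuracy and F1@k; the benchmark's evaluation code may differ in tie-breaking details, and the edit score is omitted here.

```python
from itertools import groupby


def to_segments(labels):
    """Collapse a frame-wise label sequence into (label, start, end) segments."""
    segs, start = [], 0
    for lab, grp in groupby(labels):
        length = len(list(grp))
        segs.append((lab, start, start + length))
        start += length
    return segs


def frame_accuracy(pred, gt):
    """Fraction of frames whose predicted label matches the ground truth."""
    return sum(p == g for p, g in zip(pred, gt)) / len(gt)


def f1_at_k(pred, gt, overlap=0.10):
    """Segmental F1 with IoU threshold `overlap` (e.g. 0.10 for F1@10%)."""
    p_segs, g_segs = to_segments(pred), to_segments(gt)
    matched = [False] * len(g_segs)
    tp = 0
    for lab, ps, pe in p_segs:
        best_iou, best_j = 0.0, -1
        for j, (glab, gs, ge) in enumerate(g_segs):
            if glab != lab or matched[j]:
                continue
            inter = max(0, min(pe, ge) - max(ps, gs))
            union = max(pe, ge) - min(ps, gs)
            iou = inter / union
            if iou > best_iou:
                best_iou, best_j = iou, j
        if best_iou >= overlap:          # predicted segment sufficiently overlaps an unmatched GT segment
            tp += 1
            matched[best_j] = True
    fp = len(p_segs) - tp
    fn = len(g_segs) - tp
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0
```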