Search Results for author: Emad Bahrami

Found 5 papers, 3 papers with code

How Much Temporal Long-Term Context is Needed for Action Segmentation?

1 code implementation · ICCV 2023 · Emad Bahrami, Gianpiero Francesca, Juergen Gall

In this work, we try to answer how much long-term temporal context is required for temporal action segmentation by introducing a transformer-based model that leverages sparse attention to capture the full context of a video.

Action Segmentation · Segmentation
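
The sparse-attention idea in the snippet above can be illustrated with a minimal sketch: a hypothetical strided attention layer (not the architecture proposed in the paper) in which each frame attends only to a dilated subset of the video, so long-range context is covered without a full T × T attention matrix.

```python
# Minimal sketch of strided ("sparse") temporal self-attention over per-frame
# features. Hypothetical layer, for illustration only; it is not the paper's
# actual model, just the idea of cheap attention over a long video.
import torch
import torch.nn as nn

class StridedTemporalAttention(nn.Module):
    def __init__(self, dim: int, num_heads: int = 4, dilation: int = 8):
        super().__init__()
        self.dilation = dilation
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, T, dim) per-frame features of an untrimmed video.
        b, t, d = x.shape
        out = torch.zeros_like(x)
        # Each offset group attends only within its dilated subset of frames,
        # so every frame still sees context spread across the whole video
        # while each attention matrix is only ~T/dilation wide.
        for offset in range(self.dilation):
            idx = torch.arange(offset, t, self.dilation, device=x.device)
            sub = x[:, idx]                      # (b, ceil(T/dilation), dim)
            attended, _ = self.attn(sub, sub, sub)
            out[:, idx] = attended
        return out

# Usage: one minute of video at 15 fps -> 900 frames, feature dim 64.
frames = torch.randn(2, 900, 64)
layer = StridedTemporalAttention(dim=64)
print(layer(frames).shape)  # torch.Size([2, 900, 64])
```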

Robust Action Segmentation from Timestamp Supervision

no code implementations · 12 Oct 2022 · Yaser Souri, Yazan Abu Farha, Emad Bahrami, Gianpiero Francesca, Juergen Gall

As obtaining annotations to train an approach for action segmentation in a fully supervised way is expensive, various approaches have been proposed to train action segmentation models using different forms of weak supervision, e.g., action transcripts, action sets, or, more recently, timestamps.

Action Segmentation · Segmentation
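
For context on what "timestamps" means here, a toy example (labels made up for illustration, not from any dataset) contrasting full frame-wise supervision with timestamp supervision:

```python
# Toy illustration of timestamp supervision: instead of a label for every
# frame, only one annotated frame per action segment is given.
full_labels = ["pour", "pour", "pour", "stir", "stir", "stir", "stir",
               "serve", "serve", "serve"]              # fully supervised
timestamps = {2: "pour", 5: "stir", 8: "serve"}        # timestamp supervision
# A model trained from timestamps must infer the segment boundaries itself.
```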

TaylorSwiftNet: Taylor Driven Temporal Modeling for Swift Future Frame Prediction

no code implementations · 27 Oct 2021 · Saber Pourheydari, Emad Bahrami, Mohsen Fayyaz, Gianpiero Francesca, Mehdi Noroozi, Juergen Gall

While recurrent neural networks (RNNs) demonstrate outstanding capabilities for future video frame prediction, they model dynamics in a discrete time space, i.e., they predict the frames sequentially with a fixed temporal step.
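
The continuous-time alternative can be sketched with a truncated Taylor expansion: given the current representation and estimates of its temporal derivatives, a future value can be evaluated at an arbitrary offset Δt rather than at a fixed step. The function below only illustrates the expansion itself; the network that would estimate the derivatives is not shown.

```python
# Minimal sketch of Taylor-series-based continuous-time forecasting, showing
# why a Taylor expansion decouples prediction from a fixed frame step.
from typing import Sequence
import torch

def taylor_forecast(x0: torch.Tensor, derivatives: Sequence[torch.Tensor],
                    delta_t: float) -> torch.Tensor:
    """Evaluate x(t0 + delta_t) from the value x0 and its temporal
    derivatives at t0 via a truncated Taylor series."""
    out = x0.clone()
    factorial = 1.0
    for k, d_k in enumerate(derivatives, start=1):
        factorial *= k
        out = out + d_k * (delta_t ** k) / factorial
    return out

# Usage: a toy signal x(t) = t**2 has dx/dt = 2t and d2x/dt2 = 2.
x0 = torch.full((2, 2), 1.0)   # value at t0 = 1
d1 = torch.full((2, 2), 2.0)   # first derivative at t0
d2 = torch.full((2, 2), 2.0)   # second derivative (constant)
print(taylor_forecast(x0, [d1, d2], delta_t=0.5))  # ~2.25 = 1.5**2
```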

3D CNNs with Adaptive Temporal Feature Resolutions

1 code implementation · CVPR 2021 · Mohsen Fayyaz, Emad Bahrami, Ali Diba, Mehdi Noroozi, Ehsan Adeli, Luc van Gool, Juergen Gall

While the GFLOPs of a 3D CNN can be decreased by reducing the temporal feature resolution within the network, there is no setting that is optimal for all input clips.

Action Recognition
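
A minimal sketch of what input-dependent temporal resolution could look like inside a 3D CNN: a hypothetical gating head decides, per clip, how much to downsample the temporal axis of a feature map. This is only an illustration of adapting the temporal feature resolution per input clip, not the module proposed in the paper.

```python
# Minimal sketch of input-dependent temporal downsampling inside a 3D CNN.
# The gating head is hypothetical and used only to illustrate the idea.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveTemporalPool(nn.Module):
    def __init__(self, channels: int, min_frames: int = 4):
        super().__init__()
        self.min_frames = min_frames
        # Tiny head that scores how much temporal detail the clip needs.
        self.gate = nn.Linear(channels, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, T, H, W) feature map of a 3D CNN stage.
        b, c, t, h, w = x.shape
        clip_desc = x.mean(dim=(2, 3, 4))                   # (b, c) descriptor
        keep = torch.sigmoid(self.gate(clip_desc)).mean()   # fraction of frames to keep
        t_out = max(self.min_frames, int(round(keep.item() * t)))
        # Downsample only the temporal axis; spatial resolution is untouched,
        # so later layers run on fewer frames and cost fewer GFLOPs.
        return F.adaptive_avg_pool3d(x, output_size=(t_out, h, w))

# Usage: a clip-level feature map with 32 temporal steps.
feat = torch.randn(2, 64, 32, 14, 14)
pool = AdaptiveTemporalPool(channels=64)
print(pool(feat).shape)  # temporal dimension <= 32, chosen per clip
```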
