Search Results for author: Alejandro Pardo

Found 9 papers, 5 papers with code

Combating Missing Modalities in Egocentric Videos at Test Time

no code implementations • 23 Apr 2024 • Merey Ramazanova, Alejandro Pardo, Bernard Ghanem, Motasem Alfarra

Understanding videos that contain multiple modalities is crucial, especially in egocentric videos, where combining various sensory inputs significantly improves tasks like action recognition and moment localization.

Action Recognition · Test-time Adaptation

Exploring Missing Modality in Multimodal Egocentric Datasets

no code implementations • 21 Jan 2024 • Merey Ramazanova, Alejandro Pardo, Humam Alwassel, Bernard Ghanem

Multimodal video understanding is crucial for analyzing egocentric videos, where integrating multiple sensory signals significantly enhances action recognition and moment localization.

Action Recognition · Video Understanding

Revisiting Test Time Adaptation under Online Evaluation

1 code implementation • 10 Apr 2023 • Motasem Alfarra, Hani Itani, Alejandro Pardo, Shyma Alhuwaider, Merey Ramazanova, Juan C. Pérez, Zhipeng Cai, Matthias Müller, Bernard Ghanem

To address this issue, we propose a more realistic evaluation protocol for TTA methods, where data is received in an online fashion from a constant-speed data stream, thereby accounting for the method's adaptation speed.

Test-time Adaptation
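
The online protocol described in the excerpt above can be illustrated with a short evaluation loop. The sketch below is an illustration only, not the paper's benchmark code: `method` is a hypothetical TTA object exposing predict(x) and adapt(x), and `adapt_cost` is an assumed parameter for how many stream ticks one adaptation step takes, so slower methods adapt on fewer batches.

# A minimal sketch of a constant-speed online TTA evaluation (assumed
# interface, not the paper's implementation): one batch arrives per tick,
# and batches that arrive while the method is still adapting must be
# predicted with the current model, without an adaptation step.

from typing import Iterable, Tuple

import torch


def online_evaluate(method,
                    stream: Iterable[Tuple[torch.Tensor, torch.Tensor]],
                    adapt_cost: int = 1) -> float:
    """Accuracy of `method` under a constant-speed data stream."""
    correct, total, busy_ticks = 0, 0, 0
    for x, y in stream:
        preds = method.predict(x)                 # always predict on arrival
        correct += (preds.argmax(dim=1) == y).sum().item()
        total += y.numel()

        if busy_ticks > 0:
            busy_ticks -= 1                       # still adapting on an earlier batch
        else:
            method.adapt(x)                       # adapt only when not busy
            busy_ticks = adapt_cost - 1
    return correct / max(total, 1)

With adapt_cost=1 the loop reduces to the usual offline-style evaluation where every batch is adapted on; larger values penalize methods whose adaptation is slower than the stream.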

MovieCuts: A New Dataset and Benchmark for Cut Type Recognition

1 code implementation • 12 Sep 2021 • Alejandro Pardo, Fabian Caba Heilbron, Juan León Alcázar, Ali Thabet, Bernard Ghanem

Advances in automatic cut-type recognition can unleash new experiences in the video editing industry, such as movie analysis for education, video re-editing, virtual cinematography, machine-assisted trailer generation, and machine-assisted video editing.

Video Editing · Vocal Bursts Type Prediction

Learning to Cut by Watching Movies

1 code implementation • ICCV 2021 • Alejandro Pardo, Fabian Caba Heilbron, Juan León Alcázar, Ali Thabet, Bernard Ghanem

Video content creation keeps growing at an incredible pace; yet, creating engaging stories remains challenging and requires non-trivial video editing expertise.

Contrastive Learning · Video Editing

BAOD: Budget-Aware Object Detection

no code implementations • 10 Apr 2019 • Alejandro Pardo, Mengmeng Xu, Ali Thabet, Pablo Arbelaez, Bernard Ghanem

We adopt a hybrid supervised learning framework to train the object detector from both these types of annotation.

Active Learning · Object · +2
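
As a rough illustration of what a hybrid supervised training step could look like, the sketch below assumes the two annotation types are full bounding boxes versus image-level class labels (an assumption about the truncated excerpt above, not stated in it); `detector`, `detection_loss`, and `image_level_loss` are hypothetical placeholders supplied by the caller, not the paper's code.

# A rough sketch of a hybrid-supervision update, under the assumption that
# the two annotation types are box-level vs. image-level labels. All names
# are hypothetical placeholders, not the paper's implementation.

import torch


def hybrid_step(detector, optimizer, batch, detection_loss, image_level_loss):
    """One optimization step over a batch mixing both annotation types.

    detection_loss(outputs, boxes, labels) -> scalar tensor (box supervision)
    image_level_loss(outputs, labels)      -> scalar tensor (weak supervision)
    """
    optimizer.zero_grad()
    losses = []
    for sample in batch:
        outputs = detector(sample["image"])
        if sample["has_boxes"]:
            # strongly annotated image: standard detection loss on the boxes
            losses.append(detection_loss(outputs, sample["boxes"], sample["labels"]))
        else:
            # weakly annotated image: only image-level class labels are available
            losses.append(image_level_loss(outputs, sample["labels"]))
    total = torch.stack(losses).sum()
    total.backward()
    optimizer.step()
    return total.item()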
