Search Results for author: Apoorva Beedu

Found 6 papers, 1 paper with code

On the Efficacy of Text-Based Input Modalities for Action Anticipation

no code implementations · 23 Jan 2024 · Apoorva Beedu, Karan Samel, Irfan Essa

Compared to existing methods, MAT has the advantage of learning additional environmental context from two kinds of text inputs: action descriptions during the pre-training stage, and text inputs for detected objects and actions during modality feature fusion.

Action Anticipation

Multi-Stage Based Feature Fusion of Multi-Modal Data for Human Activity Recognition

no code implementations · 8 Nov 2022 · Hyeongju Choi, Apoorva Beedu, Harish Haresamudram, Irfan Essa

In this work, we propose a multi-modal framework that learns to effectively combine features from RGB video and IMU sensors, and show its robustness on the MMAct and UTD-MHAD datasets.

Human Activity Recognition
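The paper does not include code, but the core idea of combining per-modality features can be illustrated. Below is a minimal, hypothetical numpy sketch of late fusion by concatenation plus a learned linear projection; the dimensions, weights, and function name are illustrative assumptions, not the paper's actual multi-stage architecture.

```python
import numpy as np

def fuse_features(rgb_feat, imu_feat, w):
    """Illustrative late fusion: concatenate RGB and IMU feature
    vectors, then apply a learned linear projection `w`.
    (Hypothetical sketch, not the paper's multi-stage design.)"""
    joint = np.concatenate([rgb_feat, imu_feat])  # (d_rgb + d_imu,)
    return w @ joint                              # (d_out,)

rng = np.random.default_rng(0)
rgb = rng.standard_normal(512)                 # e.g. video backbone features
imu = rng.standard_normal(128)                 # e.g. IMU encoder features
w = rng.standard_normal((256, 512 + 128))      # stand-in for learned weights
fused = fuse_features(rgb, imu, w)
print(fused.shape)  # (256,)
```

In practice the paper fuses across multiple stages rather than once at the end; this sketch only shows the basic concatenate-and-project step that such pipelines build on.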

End-to-End Multimodal Representation Learning for Video Dialog

no code implementations · 26 Oct 2022 · Huda Alamri, Anthony Bilic, Michael Hu, Apoorva Beedu, Irfan Essa

Video-based dialog is a challenging multimodal learning task that has received increasing attention over the past few years, with state-of-the-art models setting new performance records.

Representation Learning · Retrieval

Video based Object 6D Pose Estimation using Transformers

1 code implementation · 24 Oct 2022 · Apoorva Beedu, Huda Alamri, Irfan Essa

We introduce VideoPose, a Transformer-based 6D object pose estimation framework comprising an end-to-end attention-based architecture that attends to previous frames in order to estimate accurate 6D object poses in videos.

6D Pose Estimation · 6D Pose Estimation using RGB · +1
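The "attends to previous frames" step can be sketched with plain scaled dot-product attention: the current frame's feature acts as the query over features of earlier frames. This is a minimal numpy illustration with made-up shapes, not VideoPose's actual implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attend_to_past(current, past, d_k):
    """Scaled dot-product attention: the current frame's feature
    (query) aggregates features from previous frames (keys/values).
    Hypothetical sketch; VideoPose's real architecture differs."""
    scores = current @ past.T / np.sqrt(d_k)   # (1, T_past)
    weights = softmax(scores, axis=-1)         # attention over past frames
    return weights @ past                      # (1, d_k) temporal context

rng = np.random.default_rng(1)
T, d = 5, 64
past = rng.standard_normal((T, d))             # features of 5 earlier frames
current = rng.standard_normal((1, d))          # current frame feature
ctx = attend_to_past(current, past, d)
print(ctx.shape)  # (1, 64)
```

The aggregated temporal context would then feed a pose regression head; here only the attention step is shown.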

VideoPose: Estimating 6D object pose from videos

no code implementations · 20 Nov 2021 · Apoorva Beedu, Zhile Ren, Varun Agrawal, Irfan Essa

We introduce a simple yet effective algorithm that uses convolutional neural networks to directly estimate object poses from videos.

Object Pose Estimation
