Object State Change Classification

6 papers with code • 1 benchmark • 1 dataset

Object State Change Classification (OSCC) is an Ego4D challenge task: given an egocentric video clip, a model predicts whether an object in the clip undergoes a state change.


Most implemented papers

Egocentric Video-Language Pretraining

showlab/egovlp • 3 Jun 2022

Video-Language Pretraining (VLP), which aims to learn transferable representation to advance a wide range of video-text downstream tasks, has recently received increasing attention.

EVEREST: Efficient Masked Video Autoencoder by Removing Redundant Spatiotemporal Tokens

sunilhoho/everest • 19 Nov 2022

Masked Video Autoencoder (MVA) approaches have demonstrated their potential by significantly outperforming previous video representation learning methods.

Egocentric Video-Language Pretraining @ Ego4D Challenge 2022

showlab/egovlp • 4 Jul 2022

In this report, we propose a video-language pretraining (VLP) based solution (EgoVLP) for four Ego4D challenge tasks, including Natural Language Query (NLQ), Moment Query (MQ), Object State Change Classification (OSCC), and PNR Localization (PNR).

Object State Change Classification in Egocentric Videos using the Divided Space-Time Attention Mechanism

md-mohaiminul/objectstatechange • 24 Jul 2022

This report describes our submission called "TarHeels" for the Ego4D: Object State Change Classification Challenge.

Learning State-Aware Visual Representations from Audible Interactions

HimangiM/RepLAI • 27 Sep 2022

Learning state-aware representations from videos, however, can be challenging.

Masked Autoencoders for Egocentric Video Understanding @ Ego4D Challenge 2022

jasonrayshd/egomotion • 18 Nov 2022

In this report, we present our approach and empirical results of applying masked autoencoders in two egocentric video understanding tasks, namely, Object State Change Classification and PNR Temporal Localization, of Ego4D Challenge 2022.
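The papers above treat OSCC as a clip-level binary decision: does the clip contain an object state change or not? As a toy illustration only (not the method of any listed paper; the feature vectors, threshold, and function name are all invented here), one could caricature the task as thresholding the largest frame-to-frame feature change in a clip:

```python
# Toy sketch of OSCC as binary classification over per-frame features.
# Real systems (e.g. the VLP- and MAE-based entries above) learn these
# features; here "frames" is just a list of hand-made float vectors.

def oscc_predict(frames, threshold=0.5):
    """Return True if an object state change is predicted for the clip.

    frames: list of per-frame feature vectors (lists of floats).
    threshold: hypothetical decision boundary on peak frame-to-frame
    Euclidean distance.
    """
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    if len(frames) < 2:
        return False  # a single frame cannot exhibit a change
    # Peak change between consecutive frames across the clip.
    peak = max(dist(a, b) for a, b in zip(frames, frames[1:]))
    return peak > threshold

# A clip whose features jump mid-way (e.g. an object being cut or opened):
changed_clip = [[0.0, 0.0]] * 4 + [[1.0, 1.0]] * 4
# A clip with stable features throughout:
static_clip = [[0.0, 0.0]] * 8
```

The point of the sketch is only the task's input/output contract (clip in, yes/no out); the actual challenge entries replace the hand-crafted distance with learned spatiotemporal representations.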