no code implementations • 7 Jan 2023 • Mihir Jain, Kashish Jain, Sandip Mane
In the mass manufacturing of jewellery, the gross loss is estimated before production to calculate the wax weight of the pattern that will be investment cast to make multiple identical pieces of jewellery.
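A minimal sketch of the kind of calculation involved, assuming a standard specific-gravity conversion from metal weight to wax weight plus a percentage gross-loss allowance; the density values and loss figure below are illustrative assumptions, not figures from the paper.

```python
# Illustrative wax-weight estimate for investment casting.
# All numeric values here are assumptions, not taken from the paper.
def estimate_wax_weight(target_metal_weight_g, metal_density, wax_density=0.95,
                        gross_loss_pct=2.0):
    """Convert a target metal weight into the wax-pattern weight to be cast.

    The metal-to-wax conversion uses the ratio of specific gravities; the
    gross-loss percentage inflates the target to cover material lost during
    casting and finishing.
    """
    metal_weight_incl_loss = target_metal_weight_g * (1 + gross_loss_pct / 100.0)
    return metal_weight_incl_loss * wax_density / metal_density

# Example: a 5 g piece in 14k gold (density ~13 g/cm^3, an assumed figure).
print(round(estimate_wax_weight(5.0, metal_density=13.0), 3), "g of wax")
```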
no code implementations • ICCV 2021 • Kirill Gavrilyuk, Mihir Jain, Ilia Karmanov, Cees G. M. Snoek
With the motion model, we generate pseudo-labels for a large unlabeled video collection, which enables us to transfer knowledge by learning to predict these pseudo-labels with an appearance model.
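A minimal sketch of this pseudo-labelling step, with hypothetical `motion_model`, `appearance_model`, and `unlabeled_loader` stand-ins; it only illustrates the transfer idea, not the paper's exact training recipe.

```python
import torch
import torch.nn.functional as F

def pseudo_label_transfer(motion_model, appearance_model, optimizer,
                          unlabeled_loader, threshold=0.8):
    """Label unlabeled clips with a motion model, then train an appearance
    model to predict those (confidence-filtered) pseudo-labels."""
    motion_model.eval()
    appearance_model.train()
    for flow_clip, rgb_clip in unlabeled_loader:  # hypothetical loader yielding both modalities
        with torch.no_grad():
            probs = F.softmax(motion_model(flow_clip), dim=1)
            conf, pseudo = probs.max(dim=1)
        keep = conf > threshold                   # keep only confident pseudo-labels
        if keep.sum() == 0:
            continue
        logits = appearance_model(rgb_clip[keep])
        loss = F.cross_entropy(logits, pseudo[keep])
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```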
no code implementations • ICCV 2021 • HanUl Kim, Mihir Jain, Jun-Tae Lee, Sungrack Yun, Fatih Porikli
Efficient action recognition has become crucial for extending its success to many real-world applications.
no code implementations • ICLR 2021 • Jun-Tae Lee, Mihir Jain, Hyoungwoo Park, Sungrack Yun
Temporally localizing actions in videos is one of the key components for video understanding.
no code implementations • CVPR 2020 • Mihir Jain, Amir Ghodrati, Cees G. M. Snoek
Different from existing works, which all use annotated untrimmed videos during training, we learn only from short trimmed videos.
no code implementations • 3 Apr 2020 • Noureldien Hussein, Mihir Jain, Babak Ehteshami Bejnordi
When recognizing a long-range activity, exhaustively exploring the entire video is computationally expensive, as it can span up to a few minutes.
no code implementations • 18 Dec 2018 • Mihir Jain, Kyle Brown, Ahmed K. Sadek
Predicting the behavior of surrounding vehicles is a critical problem in automated driving.
2 code implementations • 5 Apr 2018 • Victor Escorcia, Cuong D. Dao, Mihir Jain, Bernard Ghanem, Cees Snoek
Second, we propose an actor-based attention mechanism that enables the localization of the actions from action class labels and actor proposals and is end-to-end trainable.
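A minimal sketch of an actor-based attention head in the spirit described above, with hypothetical shapes: per-proposal features are softly weighted, pooled, and classified from action class labels, so the whole path stays end-to-end trainable. This is an illustration of the mechanism, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class ActorAttentionHead(nn.Module):
    """Attend over actor-proposal features and classify actions from the
    attention-pooled representation; video-level action labels suffice for training."""
    def __init__(self, feat_dim, num_actions):
        super().__init__()
        self.score = nn.Linear(feat_dim, 1)              # attention score per proposal
        self.classifier = nn.Linear(feat_dim, num_actions)

    def forward(self, proposal_feats):                   # (num_proposals, feat_dim)
        attn = torch.softmax(self.score(proposal_feats), dim=0)   # (num_proposals, 1)
        pooled = (attn * proposal_feats).sum(dim=0)      # weighted pooling over proposals
        return self.classifier(pooled), attn.squeeze(-1) # class logits + per-proposal weights

# The attention weights act as localization scores over the actor proposals.
head = ActorAttentionHead(feat_dim=512, num_actions=24)
logits, weights = head(torch.randn(30, 512))
```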
no code implementations • 7 Jul 2016 • Mihir Jain, Jan van Gemert, Hervé Jégou, Patrick Bouthemy, Cees G. M. Snoek
First, inspired by selective search for object proposals, we introduce an approach to generate action proposals from spatiotemporal super-voxels in an unsupervised manner; we call them Tubelets.
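A toy sketch of the grouping idea behind such proposals, not the paper's actual algorithm: super-voxels, given here as (t, y, x) coordinate arrays, are merged greedily by a simple size criterion (the real method uses richer similarity cues), and each intermediate grouping yields per-frame bounding boxes, i.e. a tubelet.

```python
import numpy as np

def tubelet_from_voxels(voxels):
    """Per-frame bounding boxes {t: (x1, y1, x2, y2)} of an (N, 3) array of (t, y, x) voxels."""
    boxes = {}
    for t in np.unique(voxels[:, 0]):
        ys, xs = voxels[voxels[:, 0] == t, 1], voxels[voxels[:, 0] == t, 2]
        boxes[int(t)] = (xs.min(), ys.min(), xs.max(), ys.max())
    return boxes

def greedy_tubelet_proposals(supervoxels):
    """Selective-search-style grouping: repeatedly merge the two smallest regions
    and emit every intermediate grouping as an action proposal (a tubelet).
    `supervoxels` is a list of (N_i, 3) arrays of (t, y, x) voxel coordinates."""
    regions = [np.asarray(sv) for sv in supervoxels]
    proposals = [tubelet_from_voxels(r) for r in regions]
    while len(regions) > 1:
        regions.sort(key=len)                            # a simple size-based merging order
        merged = np.concatenate([regions.pop(0), regions.pop(0)])
        regions.append(merged)
        proposals.append(tubelet_from_voxels(merged))
    return proposals
```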
1 code implementation • 6 Jul 2016 • Zhenyang Li, Efstratios Gavves, Mihir Jain, Cees G. M. Snoek
We present a new architecture for end-to-end sequence learning of actions in video, which we call VideoLSTM.
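A minimal sketch of the convolution-based recurrence at the heart of such a model: a single ConvLSTM cell stepping over per-frame feature maps. The tensor shapes are hypothetical, and the motion-based attention component of VideoLSTM is omitted here.

```python
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    """LSTM cell whose gates are computed with convolutions, so the hidden
    state and memory keep their spatial layout across time."""
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)

    def forward(self, x, state):                         # x: (B, in_ch, H, W)
        h, c = state
        i, f, o, g = self.gates(torch.cat([x, h], dim=1)).chunk(4, dim=1)
        i, f, o, g = i.sigmoid(), f.sigmoid(), o.sigmoid(), g.tanh()
        c = f * c + i * g
        h = o * c.tanh()
        return h, c

# Roll the cell over a sequence of frame feature maps (shapes are illustrative).
cell = ConvLSTMCell(in_ch=256, hid_ch=128)
frames = torch.randn(8, 4, 256, 14, 14)                  # (T, B, C, H, W)
h = torch.zeros(4, 128, 14, 14); c = torch.zeros_like(h)
for x in frames:
    h, c = cell(x, (h, c))
```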
no code implementations • ICCV 2015 • Mihir Jain, Jan C. van Gemert, Thomas Mensink, Cees G. M. Snoek
Our key contribution is objects2action, a semantic word embedding that is spanned by a skip-gram model of thousands of object categories.
Ranked #17 on Zero-Shot Action Recognition on UCF101
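A minimal sketch of the zero-shot scoring idea behind objects2action: actions are scored by combining object-classifier responses with the semantic affinity between object and action word embeddings, so no action examples are needed. The embeddings and object scores below are random stand-ins; the paper builds the embedding with a skip-gram model over thousands of object categories.

```python
import numpy as np

def objects2action_scores(object_probs, object_embs, action_embs):
    """Zero-shot action scores: project per-video object responses onto action
    classes through word-embedding similarity.

    object_probs: (num_objects,) object classifier responses for a video
    object_embs:  (num_objects, d) word embeddings of the object names
    action_embs:  (num_actions, d) word embeddings of the action names
    """
    # Cosine similarity between every action and every object in embedding space.
    o = object_embs / np.linalg.norm(object_embs, axis=1, keepdims=True)
    a = action_embs / np.linalg.norm(action_embs, axis=1, keepdims=True)
    sim = a @ o.T                                        # (num_actions, num_objects)
    return sim @ object_probs                            # weight object evidence by semantic affinity

# Illustrative call with random stand-ins for detector outputs and embeddings.
scores = objects2action_scores(np.random.rand(1000),
                               np.random.randn(1000, 300),
                               np.random.randn(101, 300))
```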
no code implementations • CVPR 2015 • Mihir Jain, Jan C. van Gemert, Cees G. M. Snoek
This paper contributes to automatic classification and localization of human actions in video.
no code implementations • CVPR 2014 • Mihir Jain, Jan van Gemert, Herve Jegou, Patrick Bouthemy, Cees G. M. Snoek
Our approach significantly outperforms the state-of-the-art on both datasets, while restricting the search of actions to a fraction of possible bounding box sequences.
no code implementations • CVPR 2013 • Mihir Jain, Herve Jegou, Patrick Bouthemy
Several recent works on action recognition have attested to the importance of explicitly integrating motion characteristics in the video description.