no code implementations • 2 Oct 2023 • Etienne Meunier, Patrick Bouthemy
Human beings have the ability to continuously analyze a video and immediately extract the motion components.
1 code implementation • CVPR 2023 • Etienne Meunier, Patrick Bouthemy
In this paper, we propose an original unsupervised spatio-temporal framework for motion segmentation from optical flow that fully investigates the temporal dimension of the problem.
1 code implementation • 6 Jan 2022 • Etienne Meunier, Anaïs Badoual, Patrick Bouthemy
The core idea of our work is to leverage the Expectation-Maximization (EM) framework to design, in a well-founded manner, a loss function and training procedure for our motion-segmentation neural network that require neither ground truth nor manual annotation.
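The EM-driven idea can be illustrated with a minimal NumPy sketch (an assumption-laden simplification, not the paper's implementation): predicted segment masks play the role of E-step responsibilities, per-segment affine motion parameters are fitted in closed form as the M-step, and the loss is the mask-weighted flow reconstruction error. The function name and the 6-parameter affine model choice are hypothetical.

```python
import numpy as np

def em_motion_loss(flow, masks, coords):
    """EM-inspired, annotation-free loss (sketch): masks act as E-step
    responsibilities; affine motion parameters per segment are the
    closed-form weighted-least-squares M-step; the loss is the
    mask-weighted flow reconstruction error."""
    # flow: (N, 2) optical-flow vectors, coords: (N, 2) pixel positions,
    # masks: (K, N) soft segment probabilities summing to 1 over K.
    X = np.hstack([coords, np.ones((coords.shape[0], 1))])  # affine basis
    loss = 0.0
    for k in range(masks.shape[0]):
        w = masks[k]                                   # responsibilities
        # Weighted least squares for a 6-parameter affine flow model (M-step)
        WX = X * w[:, None]
        theta, *_ = np.linalg.lstsq(WX.T @ X, WX.T @ flow, rcond=None)
        residual = flow - X @ theta                    # flow model error
        loss += np.sum(w * np.sum(residual ** 2, axis=1))
    return loss / flow.shape[0]
```

Because no ground-truth segmentation appears anywhere in the loss, minimizing it trains the mask-predicting network in a fully unsupervised way.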
Ranked #6 on Unsupervised Object Segmentation on DAVIS 2016
no code implementations • 17 Apr 2018 • Juan-Manuel Perez-Rua, Tomas Crivelli, Patrick Bouthemy, Patrick Perez
We bypass the need for a tailored loss function on the regression parameters by attaching to our model a differentiable hard-wired decoder corresponding to the polynomial operation at hand.
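The hard-wired decoder idea can be sketched as follows (a minimal illustration under stated assumptions, not the authors' code): the decoder simply evaluates the predicted polynomial at sample points, so the training loss is taken in signal space rather than on the raw coefficients. The function name and coefficient ordering are assumptions.

```python
import numpy as np

def polynomial_decoder(coeffs, xs):
    """Hard-wired differentiable decoder (sketch): maps predicted
    polynomial coefficients to curve values so the loss can be computed
    directly on the reconstructed signal instead of on the coefficients."""
    # coeffs: (d+1,) from highest to lowest degree; xs: (N,) sample points
    powers = np.vander(xs, N=len(coeffs))  # Vandermonde: [x^d, ..., x, 1]
    return powers @ coeffs

# Training would then compare decoded values to observed samples, e.g.
# loss = mean((polynomial_decoder(pred_coeffs, xs) - ys) ** 2)
```

Since the decoder is a fixed linear map, gradients flow back to the coefficient-regression network without any learned or tailored loss on the parameters themselves.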
no code implementations • 7 Jul 2016 • Mihir Jain, Jan van Gemert, Hervé Jégou, Patrick Bouthemy, Cees G. M. Snoek
First, inspired by selective search for object proposals, we introduce an approach that generates action proposals from spatio-temporal supervoxels in an unsupervised manner; we call these proposals Tubelets.
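A toy sketch of the supervoxel-to-tubelet idea (an illustrative simplification with a hypothetical function name, not the paper's pipeline): a spatio-temporal supervoxel is converted into a tubelet by taking its bounding box in every frame where it appears.

```python
import numpy as np

def supervoxel_to_tubelet(labels, sv_id):
    """Sketch: turn one spatio-temporal supervoxel into a 'tubelet',
    i.e. a sequence of per-frame bounding boxes forming an action proposal.
    labels: (T, H, W) integer supervoxel ids."""
    boxes = []
    for t in range(labels.shape[0]):
        ys, xs = np.where(labels[t] == sv_id)
        if ys.size == 0:
            boxes.append(None)  # supervoxel absent in this frame
        else:
            boxes.append((xs.min(), ys.min(), xs.max(), ys.max()))
    return boxes
```

In the actual method, supervoxels are additionally merged hierarchically before extracting such box sequences, which is what keeps the proposal generation unsupervised.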
no code implementations • CVPR 2016 • Juan-Manuel Perez-Rua, Tomas Crivelli, Patrick Bouthemy, Patrick Perez
With this in mind, we propose a novel approach to occlusion detection in which the visibility of a point in the next frame is formulated as a visual reconstruction problem.
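A crude sketch of the reconstruction view of occlusion (assuming grayscale frames, nearest-neighbor sampling, and a hypothetical threshold; not the authors' model): a pixel visible in the next frame should be well reconstructed by sampling that frame at its flow-displaced position, so a large reconstruction error flags occlusion.

```python
import numpy as np

def occlusion_by_reconstruction(frame_t, frame_t1, flow, thresh=0.1):
    """Occlusion detection as visual reconstruction (sketch): reconstruct
    frame t by sampling frame t+1 at flow-displaced positions; pixels with
    large reconstruction error are marked occluded."""
    H, W = frame_t.shape
    ys, xs = np.mgrid[0:H, 0:W]
    # Nearest-neighbor warp of frame t+1 back to frame t (clamped at borders)
    xs2 = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, W - 1)
    ys2 = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, H - 1)
    recon = frame_t1[ys2, xs2]
    return np.abs(recon - frame_t) > thresh  # True where occluded
```

The key point is that occlusion is inferred from how well the point can be visually reconstructed, rather than from flow consistency checks alone.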
no code implementations • 22 Jul 2014 • Denis Fortun, Patrick Bouthemy, Charles Kervrann
The idea is to supply local motion candidates at every pixel in a first step, and then to combine them to determine the global optical flow field in a second step.
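The two-step scheme can be sketched as follows (a minimal stand-in with assumed shapes, a hypothetical function name, and a deliberately crude global reference; the actual aggregation in the paper is more sophisticated): step one supplies several motion candidates per pixel, step two selects one per pixel by combining a data cost with a global-coherence term.

```python
import numpy as np

def aggregate_candidates(candidates, data_cost, smooth_weight=1.0):
    """Two-step optical flow (sketch): given per-pixel motion candidates
    from step one, step two picks, per pixel, the candidate minimizing a
    data cost plus deviation from a global reference flow."""
    # candidates: (H, W, C, 2) flow candidates, data_cost: (H, W, C)
    # Crude global reference: median of the best-data-cost candidates
    best = np.take_along_axis(
        candidates, data_cost.argmin(axis=2)[..., None, None], axis=2)[:, :, 0]
    ref = np.median(best.reshape(-1, 2), axis=0)
    coherence = np.sum((candidates - ref) ** 2, axis=3)
    choice = (data_cost + smooth_weight * coherence).argmin(axis=2)
    return np.take_along_axis(candidates, choice[..., None, None], axis=2)[:, :, 0]
```

The design point is the split itself: candidate generation stays purely local, while all global reasoning is deferred to the combination step.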
no code implementations • CVPR 2014 • Mihir Jain, Jan van Gemert, Hervé Jégou, Patrick Bouthemy, Cees G. M. Snoek
Our approach significantly outperforms the state-of-the-art on both datasets, while restricting the search of actions to a fraction of possible bounding box sequences.
no code implementations • CVPR 2013 • Mihir Jain, Hervé Jégou, Patrick Bouthemy
Several recent works on action recognition have attested to the importance of explicitly integrating motion characteristics into the video description.