Search Results for author: Patrick Bouthemy

Found 9 papers, 2 papers with code

Better Exploiting Motion for Better Action Recognition

no code implementations CVPR 2013 Mihir Jain, Herve Jegou, Patrick Bouthemy

Several recent works on action recognition have attested to the importance of explicitly integrating motion characteristics into the video description.

Action Recognition Image Retrieval +3

Action Localization with Tubelets from Motion

no code implementations CVPR 2014 Mihir Jain, Jan van Gemert, Herve Jegou, Patrick Bouthemy, Cees G. M. Snoek

Our approach significantly outperforms the state-of-the-art on both datasets, while restricting the search of actions to a fraction of possible bounding box sequences.

Action Localization

Aggregation of local parametric candidates with exemplar-based occlusion handling for optical flow

no code implementations22 Jul 2014 Denis Fortun, Patrick Bouthemy, Charles Kervrann

The idea is to supply local motion candidates at every pixel in a first step, and then to combine them to determine the global optical flow field in a second step; a sketch of this two-step scheme is given below.

Occlusion Handling Optical Flow Estimation
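
The abstract above outlines a two-step scheme: local parametric motion candidates are produced at every pixel, then combined into a global flow field. The sketch below is only a minimal illustration of that idea, not the authors' method: it scores each candidate with a simple patch-based matching error and keeps the best candidate per pixel; candidate generation and the exemplar-based occlusion handling are not reproduced, and all function names are illustrative.

```python
import numpy as np

def warp_error(I0, I1, flow, patch=3):
    """Per-pixel matching error of a candidate flow field.
    I0, I1: (H, W) grayscale frames; flow: (H, W, 2) displacement (dx, dy)."""
    H, W = I0.shape
    ys, xs = np.mgrid[0:H, 0:W]
    xw = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, W - 1)
    yw = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, H - 1)
    err = (I0 - I1[yw, xw]) ** 2
    # Box-filter the error so the choice is supported by a small patch, not a single pixel.
    pad = np.pad(err, patch // 2, mode="edge")
    out = np.zeros_like(err)
    for dy in range(patch):
        for dx in range(patch):
            out += pad[dy:dy + H, dx:dx + W]
    return out / (patch * patch)

def aggregate_candidates(I0, I1, candidates):
    """Step 2 (simplified): pick, at every pixel, the candidate flow with the lowest
    patch-based matching error. `candidates` is a list of (H, W, 2) flow fields
    produced by step 1 (e.g. local parametric estimates)."""
    costs = np.stack([warp_error(I0, I1, f) for f in candidates], axis=0)  # (C, H, W)
    best = np.argmin(costs, axis=0)                                        # (H, W)
    flows = np.stack(candidates, axis=0)                                   # (C, H, W, 2)
    ys, xs = np.mgrid[0:I0.shape[0], 0:I0.shape[1]]
    return flows[best, ys, xs]                                             # (H, W, 2)
```

A real combination step would solve a global, regularized selection problem rather than an independent per-pixel argmin; the point here is only the candidate-then-combine structure.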

Determining Occlusions From Space and Time Image Reconstructions

no code implementations CVPR 2016 Juan-Manuel Perez-Rua, Tomas Crivelli, Patrick Bouthemy, Patrick Perez

With this in mind, we propose a novel approach to occlusion detection where the visibility of a point in the next frame is formulated in terms of visual reconstruction.
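
As a rough illustration of the reconstruction formulation (not the paper's actual model), the sketch below declares a pixel occluded when it cannot be reconstructed well from the next frame, i.e. when the best matching error over a small displacement range stays high; the search range, threshold, and border handling are all assumptions.

```python
import numpy as np

def occlusion_from_reconstruction(I0, I1, max_disp=4, tau=0.01):
    """Toy reconstruction-based occlusion test.
    A pixel of I0 is declared occluded if no displacement within `max_disp`
    reconstructs it from I1 with squared error below `tau`.
    I0, I1: (H, W) grayscale frames in [0, 1]."""
    best_err = np.full(I0.shape, np.inf)
    for dy in range(-max_disp, max_disp + 1):
        for dx in range(-max_disp, max_disp + 1):
            # np.roll wraps around at the borders; good enough for a sketch.
            shifted = np.roll(np.roll(I1, dy, axis=0), dx, axis=1)
            best_err = np.minimum(best_err, (I0 - shifted) ** 2)
    return best_err > tau  # True where the point seems to have no counterpart in I1
```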

Tubelets: Unsupervised action proposals from spatiotemporal super-voxels

no code implementations7 Jul 2016 Mihir Jain, Jan van Gemert, Hervé Jégou, Patrick Bouthemy, Cees G. M. Snoek

First, inspired by selective search for object proposals, we introduce an approach to generate action proposals from spatiotemporal super-voxels in an unsupervised manner; we call them Tubelets. A sketch of this grouping idea is given below.

Action Localization
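
The grouping idea, in the spirit of selective search but transposed to spatio-temporal super-voxels, can be caricatured as agglomerative merging of the most similar groups, keeping every intermediate group as a proposal. The sketch below is schematic: the super-voxel segmentation, the similarity function, and the absence of any adjacency test are all simplifications, and the names are illustrative.

```python
import numpy as np

def tubelet_proposals(features, similarity, n_merges=50):
    """Schematic agglomerative grouping of spatio-temporal super-voxels.
    features:   dict super-voxel id -> np.ndarray feature vector
                (e.g. motion/appearance statistics from any video segmentation).
    similarity: callable(feat_a, feat_b) -> float, higher means more similar.
    Returns a list of proposals, each a set of super-voxel ids (a candidate tubelet)."""
    groups = {i: {i} for i in features}            # every super-voxel starts as its own group
    proposals = [set(g) for g in groups.values()]  # singletons are proposals too
    for _ in range(n_merges):
        ids = list(groups)
        if len(ids) < 2:
            break
        # Greedily pick the most similar pair of current groups.
        best, best_score = None, -np.inf
        for i in range(len(ids)):
            for j in range(i + 1, len(ids)):
                s = similarity(features[ids[i]], features[ids[j]])
                if s > best_score:
                    best, best_score = (ids[i], ids[j]), s
        a, b = best
        groups[a] = groups[a] | groups[b]
        features[a] = (features[a] + features[b]) / 2.0   # crude feature update
        del groups[b], features[b]
        proposals.append(set(groups[a]))
    return proposals
```

The bounding boxes of the merged super-voxels over time would then give the actual spatio-temporal proposals.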

Learning how to be robust: Deep polynomial regression

no code implementations17 Apr 2018 Juan-Manuel Perez-Rua, Tomas Crivelli, Patrick Bouthemy, Patrick Perez

We bypass the need for a tailored loss function on the regression parameters by attaching to our model a differentiable, hard-wired decoder corresponding to the polynomial operation at hand; this decoder idea is sketched below.

Regression Video Stabilization
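
The "hard-wired decoder" can be read as follows: the network regresses polynomial coefficients, a fixed parameter-free layer evaluates the polynomial at given sample positions, and the loss is computed on the reconstructed values rather than on the coefficients themselves. A minimal PyTorch sketch, assuming a degree-2 polynomial in one variable and a generic `backbone` that outputs three coefficients (names, degree, and the Huber loss are my choices, not the paper's):

```python
import torch

def polynomial_decoder(coeffs, x):
    """Hard-wired, differentiable decoder: evaluate y = a0 + a1*x + a2*x^2.
    coeffs: (B, 3) predicted by the network; x: (B, N) sample abscissas."""
    a0, a1, a2 = coeffs[:, 0:1], coeffs[:, 1:2], coeffs[:, 2:3]
    return a0 + a1 * x + a2 * x ** 2                        # (B, N)

def training_step(backbone, batch_inputs, x, y_observed, optimizer):
    """The loss lives in the signal domain, not on the polynomial parameters."""
    coeffs = backbone(batch_inputs)                         # (B, 3)
    y_hat = polynomial_decoder(coeffs, x)                   # fixed op, still differentiable
    loss = torch.nn.functional.huber_loss(y_hat, y_observed)  # a simple robust data-domain loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```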

EM-driven unsupervised learning for efficient motion segmentation

1 code implementation6 Jan 2022 Etienne Meunier, Anaïs Badoual, Patrick Bouthemy

The core idea of our work is to leverage the Expectation-Maximization (EM) framework to design, in a well-founded manner, a loss function and a training procedure for our motion segmentation neural network that require neither ground truth nor manual annotation; the EM-derived loss is sketched below.

Data Augmentation Motion Segmentation +3
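
The EM idea can be caricatured as follows: the network's soft masks play the role of responsibilities, per-segment parametric (here affine, as an assumption) motion models are fitted in closed form from the input flow weighted by those masks, and the training loss is the resulting mask-weighted flow-reconstruction error, so no ground-truth masks are needed. The sketch below is a simplification under these assumptions, not the paper's exact formulation.

```python
import torch

def em_motion_segmentation_loss(flow, masks, eps=1e-6):
    """Caricature of an EM-derived, annotation-free loss.
    flow:  (B, 2, H, W) input optical flow.
    masks: (B, K, H, W) soft masks from the segmentation network (softmax over K)."""
    B, _, H, W = flow.shape
    K = masks.shape[1]
    ys, xs = torch.meshgrid(torch.arange(H, dtype=flow.dtype, device=flow.device),
                            torch.arange(W, dtype=flow.dtype, device=flow.device),
                            indexing="ij")
    basis = torch.stack([torch.ones_like(xs), xs, ys], dim=-1).reshape(-1, 3)  # (HW, 3)
    f = flow.permute(0, 2, 3, 1).reshape(B, -1, 2)                             # (B, HW, 2)
    m = masks.reshape(B, K, -1)                                                # (B, K, HW)
    loss = 0.0
    for k in range(K):
        sw = m[:, k, :].clamp_min(eps).sqrt().unsqueeze(-1)    # sqrt-responsibilities (B, HW, 1)
        A = basis.unsqueeze(0) * sw                            # weighted design matrix
        # M-step analogue: closed-form weighted least squares, detached so the
        # gradient only flows through the responsibilities in the data term below.
        theta = torch.linalg.lstsq(A.detach(), (f * sw).detach()).solution     # (B, 3, 2)
        recon = basis.unsqueeze(0) @ theta                     # (B, HW, 2) model flow
        loss = loss + (m[:, k, :].unsqueeze(-1) * (f - recon) ** 2).sum(dim=(1, 2)).mean()
    return loss
```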

Unsupervised Space-Time Network for Temporally-Consistent Segmentation of Multiple Motions

1 code implementation CVPR 2023 Etienne Meunier, Patrick Bouthemy

In this paper, we propose an original unsupervised spatio-temporal framework for motion segmentation from optical flow that fully investigates the temporal dimension of the problem.

Motion Segmentation Optical Flow Estimation +1

Unsupervised motion segmentation in one go: Smooth long-term model over a video

no code implementations2 Oct 2023 Etienne Meunier, Patrick Bouthemy

The loss function combines a flow-reconstruction term, involving spatio-temporal parametric motion models that couple, in a novel way, polynomial (quadratic) motion models for the $(x, y)$ spatial dimensions with B-splines for the time dimension of the video sequence, and a regularization term enforcing temporal consistency on the masks; a sketch of this parametric model is given below.

Motion Segmentation Optical Flow Estimation +1
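
To make the sentence above concrete, a plausible form of such a space-time parametric model (my notation and polynomial basis, not necessarily the paper's) writes the model flow of segment $k$ as a B-spline combination over time of quadratic polynomial flows in space, and the loss as a mask-weighted reconstruction error plus a temporal-consistency regularizer:

```latex
% Sketch only: notation and exact terms are assumptions, not taken from the paper.
\begin{align}
  \tilde f_k(x, y, t) &= \sum_{j=1}^{J} B_j(t)\, P_{\theta_{k,j}}(x, y),
  \qquad
  P_{\theta}(x, y) =
  \begin{pmatrix}
    \theta_1 + \theta_2 x + \theta_3 y + \theta_4 x^2 + \theta_5 x y + \theta_6 y^2\\[2pt]
    \theta_7 + \theta_8 x + \theta_9 y + \theta_{10} x^2 + \theta_{11} x y + \theta_{12} y^2
  \end{pmatrix},\\
  \mathcal{L} &= \sum_{t} \sum_{x, y} \sum_{k} m_k(x, y, t)\,
  \big\| f(x, y, t) - \tilde f_k(x, y, t) \big\|
  \;+\; \lambda\, \mathcal{R}(m),
\end{align}
```

where the $B_j$ are B-spline basis functions over time, $P_\theta$ is a quadratic polynomial flow, $m_k$ are the segmentation masks, $f$ is the input flow, and $\mathcal{R}$ stands for the temporal-consistency regularizer.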
