Search Results for author: Pavel Tokmakov

Found 17 papers, 6 papers with code

Object Permanence Emerges in a Random Walk along Memory

1 code implementation 4 Apr 2022 Pavel Tokmakov, Allan Jabri, Jie Li, Adrien Gaidon

This paper proposes a self-supervised objective for learning representations that localize objects under occlusion - a property known as object permanence.

Discovering Objects that Can Move

1 code implementation CVPR 2022 Zhipeng Bao, Pavel Tokmakov, Allan Jabri, Yu-Xiong Wang, Adrien Gaidon, Martial Hebert

Our experiments demonstrate that, despite only capturing a small subset of the objects that move, this signal is enough to generalize to segment both moving and static instances of dynamic objects.

Motion Segmentation · Object Discovery

Learning to Track with Object Permanence

1 code implementation ICCV 2021 Pavel Tokmakov, Jie Li, Wolfram Burgard, Adrien Gaidon

In this work, we introduce an end-to-end trainable approach for joint object detection and tracking that is capable of such reasoning.

Multi-Object Tracking · Object Detection +2

Learning to Track Any Object

no code implementations 25 Oct 2019 Achal Dave, Pavel Tokmakov, Cordelia Schmid, Deva Ramanan

Moreover, at test time the same network can be applied to detection and tracking, resulting in a unified approach for the two tasks.

Instance Segmentation · Object Tracking +4

A Study on Action Detection in the Wild

no code implementations 29 Apr 2019 Yubo Zhang, Pavel Tokmakov, Martial Hebert, Cordelia Schmid

In this work we study the problem of action detection in a highly-imbalanced dataset.

Action Detection

Towards Segmenting Anything That Moves

no code implementations 11 Feb 2019 Achal Dave, Pavel Tokmakov, Deva Ramanan

To address this concern, we propose two new benchmarks for generic, moving object detection, and show that our model matches top-down methods on common categories, while significantly outperforming both top-down and bottom-up methods on never-before-seen categories.

Action Detection · Instance Segmentation +7

Learning Compositional Representations for Few-Shot Recognition

no code implementations ICCV 2019 Pavel Tokmakov, Yu-Xiong Wang, Martial Hebert

One of the key limitations of modern deep learning approaches lies in the amount of data required to train them.

Few-Shot Learning

A Structured Model For Action Detection

no code implementations CVPR 2019 Yubo Zhang, Pavel Tokmakov, Martial Hebert, Cordelia Schmid

A dominant paradigm for learning-based approaches in computer vision is training generic models, such as ResNet for image recognition, or I3D for video understanding, on large datasets and allowing them to discover the optimal representation for the problem at hand.

Action Detection · Computer Vision +1

Learning to Segment Moving Objects

no code implementations 1 Dec 2017 Pavel Tokmakov, Cordelia Schmid, Karteek Alahari

We formulate this as a learning problem and design our framework with three cues: (i) independent object motion between a pair of frames, which complements object recognition, (ii) object appearance, which helps to correct errors in motion estimation, and (iii) temporal consistency, which imposes additional constraints on the segmentation.

Motion Estimation · Motion Segmentation +3

Learning Video Object Segmentation with Visual Memory

no code implementations ICCV 2017 Pavel Tokmakov, Karteek Alahari, Cordelia Schmid

The module to build a "visual memory" in video, i.e., a joint representation of all the video frames, is realized with a convolutional recurrent unit learned from a small number of training video sequences.

Motion Segmentation · Semantic Segmentation +2

Learning Motion Patterns in Videos

no code implementations CVPR 2017 Pavel Tokmakov, Karteek Alahari, Cordelia Schmid

The problem of determining whether an object is in motion, irrespective of camera motion, is far from being solved.

Ranked #22 on Unsupervised Video Object Segmentation on DAVIS 2016 (using extra training data)

Motion Segmentation · Optical Flow Estimation +2

Weakly-Supervised Semantic Segmentation using Motion Cues

no code implementations 23 Mar 2016 Pavel Tokmakov, Karteek Alahari, Cordelia Schmid

We also demonstrate that the performance of M-CNN learned with 150 weak video annotations is on par with state-of-the-art weakly-supervised methods trained with thousands of images.

Weakly-Supervised Semantic Segmentation

Relational Linear Programs

no code implementations 12 Oct 2014 Kristian Kersting, Martin Mladenov, Pavel Tokmakov

A relational linear program (RLP) is a declarative LP template defining the objective and the constraints through the logical concepts of objects, relations, and quantified variables.
