Search Results for author: João Carreira

Found 21 papers, 7 papers with code

TAP-Vid: A Benchmark for Tracking Any Point in a Video

1 code implementation 7 Nov 2022 Carl Doersch, Ankush Gupta, Larisa Markeeva, Adrià Recasens, Lucas Smaira, Yusuf Aytar, João Carreira, Andrew Zisserman, Yi Yang

Generic motion understanding from video involves not only tracking objects, but also perceiving how their surfaces deform and move.

Optical Flow Estimation

Self-supervised video pretraining yields strong image representations

no code implementations 12 Oct 2022 Nikhil Parthasarathy, S. M. Ali Eslami, João Carreira, Olivier J. Hénaff

Videos contain far more information than still images and hold the potential for learning rich representations of the visual world.

Contrastive Learning • Object Detection +4

Input-level Inductive Biases for 3D Reconstruction

no code implementations CVPR 2022 Wang Yifan, Carl Doersch, Relja Arandjelović, João Carreira, Andrew Zisserman

Much of the recent progress in 3D vision has been driven by the development of specialized architectures that incorporate geometrical inductive biases.

3D Reconstruction • Depth Estimation

A Short Note on the Kinetics-700-2020 Human Action Dataset

no code implementations 21 Oct 2020 Lucas Smaira, João Carreira, Eric Noland, Ellen Clancy, Amy Wu, Andrew Zisserman

We describe the 2020 edition of the DeepMind Kinetics human action dataset, which replenishes and extends the Kinetics-700 dataset.

The AVA-Kinetics Localized Human Actions Video Dataset

no code implementations 1 May 2020 Ang Li, Meghana Thotakuri, David A. Ross, João Carreira, Alexander Vostrikov, Andrew Zisserman

The dataset is collected by annotating videos from the Kinetics-700 dataset using the AVA annotation protocol, and extending the original AVA dataset with these new AVA annotated Kinetics clips.

Action Classification

Visual Grounding in Video for Unsupervised Word Translation

1 code implementation CVPR 2020 Gunnar A. Sigurdsson, Jean-Baptiste Alayrac, Aida Nematzadeh, Lucas Smaira, Mateusz Malinowski, João Carreira, Phil Blunsom, Andrew Zisserman

Given this shared embedding we demonstrate that (i) we can map words between the languages, particularly the 'visual' words; (ii) that the shared embedding provides a good initialization for existing unsupervised text-based word translation techniques, forming the basis for our proposed hybrid visual-text mapping algorithm, MUVE; and (iii) our approach achieves superior performance by addressing the shortcomings of text-based methods -- it is more robust, handles datasets with less commonality, and is applicable to low-resource languages.

Translation • Visual Grounding +1

Controllable Attention for Structured Layered Video Decomposition

no code implementations ICCV 2019 Jean-Baptiste Alayrac, João Carreira, Relja Arandjelović, Andrew Zisserman

The objective of this paper is to be able to separate a video into its natural layers, and to control which of the separated layers to attend to.

Action Recognition • Reflection Removal

The Visual Centrifuge: Model-Free Layered Video Representations

1 code implementation CVPR 2019 Jean-Baptiste Alayrac, João Carreira, Andrew Zisserman

True video understanding requires making sense of non-Lambertian scenes, where the color of light arriving at the camera sensor encodes information about not just the last object it collided with, but about multiple mediums -- colored windows, dirty mirrors, smoke or rain.

Color Constancy • Video Understanding

Shape and Symmetry Induction for 3D Objects

no code implementations 24 Nov 2015 Shubham Tulsiani, Abhishek Kar, Qi-Xing Huang, João Carreira, Jitendra Malik

Actions as simple as grasping an object or navigating around it require a rich understanding of that object's 3D shape from a given viewpoint.

General Classification

Amodal Completion and Size Constancy in Natural Scenes

no code implementations ICCV 2015 Abhishek Kar, Shubham Tulsiani, João Carreira, Jitendra Malik

We consider the problem of enriching current object detection systems with veridical object sizes and relative depth estimates from a single image.

Object Detection +2

Pose Induction for Novel Object Categories

1 code implementation ICCV 2015 Shubham Tulsiani, João Carreira, Jitendra Malik

We address the task of predicting pose for objects of unannotated object categories from a small seed set of annotated object classes.

Category-Specific Object Reconstruction from a Single Image

no code implementations CVPR 2015 Abhishek Kar, Shubham Tulsiani, João Carreira, Jitendra Malik

Object reconstruction from a single image -- in the wild -- is a problem where we can make progress and get meaningful results today.

Object Detection +1
