Search Results for author: Pauline Luc

Found 10 papers, 8 papers with code

BootsTAP: Bootstrapped Training for Tracking-Any-Point

2 code implementations · 1 Feb 2024 · Carl Doersch, Yi Yang, Dilara Gokay, Pauline Luc, Skanda Koppula, Ankush Gupta, Joseph Heyward, Ross Goroshin, João Carreira, Andrew Zisserman

To endow models with greater understanding of physics and motion, it is useful to enable them to perceive how solid surfaces move and deform in real scenes.

Zorro: the masked multimodal transformer

1 code implementation · 23 Jan 2023 · Adrià Recasens, Jason Lin, João Carreira, Drew Jaegle, Luyu Wang, Jean-Baptiste Alayrac, Pauline Luc, Antoine Miech, Lucas Smaira, Ross Hemsley, Andrew Zisserman

Attention-based models are appealing for multimodal processing because inputs from multiple modalities can be concatenated and fed to a single backbone network, thus requiring very little fusion engineering.

Audio Tagging · Multimodal Deep Learning · +2

Transformation-based Adversarial Video Prediction on Large-Scale Data

no code implementations · 9 Mar 2020 · Pauline Luc, Aidan Clark, Sander Dieleman, Diego de Las Casas, Yotam Doron, Albin Cassirer, Karen Simonyan

Recent breakthroughs in adversarial generative modeling have led to models capable of producing video samples of high quality, even on large and complex datasets of real-world video.

Video Generation · Video Prediction

Semantic Segmentation using Adversarial Networks

1 code implementation · 25 Nov 2016 · Pauline Luc, Camille Couprie, Soumith Chintala, Jakob Verbeek

Adversarial training has been shown to produce state-of-the-art results for generative image modeling.

Segmentation · Semantic Segmentation
