Search Results for author: Pauline Luc

Found 9 papers, 7 papers with code

Zorro: the masked multimodal transformer

1 code implementation • 23 Jan 2023 • Adrià Recasens, Jason Lin, João Carreira, Drew Jaegle, Luyu Wang, Jean-Baptiste Alayrac, Pauline Luc, Antoine Miech, Lucas Smaira, Ross Hemsley, Andrew Zisserman

Attention-based models are appealing for multimodal processing because inputs from multiple modalities can be concatenated and fed to a single backbone network, thus requiring very little fusion engineering; a rough sketch of this concatenate-and-attend pattern follows this entry.

Audio Tagging · Multimodal Deep Learning · +2
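A minimal, self-contained PyTorch sketch of the concatenate-and-attend pattern described in the snippet above. This is not the authors' Zorro implementation (which, per its title, additionally uses attention masking); the module names, dimensions, and toy inputs are illustrative assumptions.

```python
# Sketch only: project audio and video tokens to a shared width, concatenate
# them along the sequence axis, and feed the result to one transformer
# backbone -- no hand-designed fusion module is needed.
import torch
import torch.nn as nn

class ConcatFusionTransformer(nn.Module):
    def __init__(self, audio_dim=128, video_dim=256, d_model=512,
                 n_layers=4, n_heads=8):
        super().__init__()
        # Per-modality linear projections into a shared token width.
        self.audio_proj = nn.Linear(audio_dim, d_model)
        self.video_proj = nn.Linear(video_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=n_layers)

    def forward(self, audio_tokens, video_tokens):
        # Concatenate token sequences; self-attention mixes both modalities.
        tokens = torch.cat([self.audio_proj(audio_tokens),
                            self.video_proj(video_tokens)], dim=1)
        return self.backbone(tokens)

# Example: 16 audio tokens and 32 video tokens for a batch of 2.
model = ConcatFusionTransformer()
out = model(torch.randn(2, 16, 128), torch.randn(2, 32, 256))
print(out.shape)  # torch.Size([2, 48, 512])
```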

Transformation-based Adversarial Video Prediction on Large-Scale Data

no code implementations • 9 Mar 2020 • Pauline Luc, Aidan Clark, Sander Dieleman, Diego de Las Casas, Yotam Doron, Albin Cassirer, Karen Simonyan

Recent breakthroughs in adversarial generative modeling have led to models capable of producing video samples of high quality, even on large and complex datasets of real-world video.

Test · Video Generation · +1

Semantic Segmentation using Adversarial Networks

1 code implementation • 25 Nov 2016 • Pauline Luc, Camille Couprie, Soumith Chintala, Jakob Verbeek

Adversarial training has been shown to produce state-of-the-art results for generative image modeling; a rough sketch of applying the same idea to segmentation follows this entry.

Segmentation · Semantic Segmentation
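A minimal PyTorch sketch of the general adversarial-training idea for semantic segmentation, not the paper's released code: a discriminator learns to tell ground-truth label maps from predicted ones, and the segmentation network is trained with cross-entropy plus a term for fooling the discriminator. The stand-in architectures, the loss weight `lambda_adv`, and the dummy data are all illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

num_classes, lambda_adv = 21, 0.1
segmenter = nn.Conv2d(3, num_classes, kernel_size=3, padding=1)    # stand-in segmentation net
discriminator = nn.Sequential(                                     # stand-in discriminator
    nn.Conv2d(num_classes, 64, 4, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 1))

opt_s = torch.optim.Adam(segmenter.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

images = torch.randn(4, 3, 64, 64)                  # dummy batch
labels = torch.randint(0, num_classes, (4, 64, 64))
onehot = F.one_hot(labels, num_classes).permute(0, 3, 1, 2).float()

# Discriminator step: real label maps vs. predicted probability maps.
with torch.no_grad():
    fake = F.softmax(segmenter(images), dim=1)
d_loss = bce(discriminator(onehot), torch.ones(4, 1)) + \
         bce(discriminator(fake), torch.zeros(4, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Segmenter step: cross-entropy plus an adversarial term that rewards
# predictions the discriminator mistakes for ground truth.
logits = segmenter(images)
probs = F.softmax(logits, dim=1)
s_loss = F.cross_entropy(logits, labels) + \
         lambda_adv * bce(discriminator(probs), torch.ones(4, 1))
opt_s.zero_grad(); s_loss.backward(); opt_s.step()
```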
