no code implementations • ACL 2021 • Cesar Ilharco, Afsaneh Shirazi, Arjun Gopalan, Arsha Nagrani, Blaz Bratanic, Chris Bregler, Christina Funk, Felipe Ferreira, Gabriel Barcik, Gabriel Ilharco, Georg Osang, Jannis Bulian, Jared Frank, Lucas Smaira, Qin Cao, Ricardo Marino, Roma Patel, Thomas Leung, Vaiva Imbrasaite
How information is created, shared and consumed has changed rapidly in recent decades, in part thanks to new social platforms and technologies on the web.
no code implementations • CVPR 2021 • Avisek Lahiri, Vivek Kwatra, Christian Frueh, John Lewis, Chris Bregler
In this paper, we present a video-based learning framework for animating personalized 3D talking faces from audio.
3 code implementations • 15 Jan 2021 • Shivangi Aneja, Chris Bregler, Matthias Nießner
We propose a self-supervised training strategy where we only need a set of captioned images.
no code implementations • 28 Jan 2019 • Ben Usman, Nick Dufour, Kate Saenko, Chris Bregler
In this work we propose a model that can manipulate individual visual attributes of objects in a real scene using examples of how respective attribute manipulations affect the output of a simulation.
no code implementations • CVPR 2017 • George Papandreou, Tyler Zhu, Nori Kanazawa, Alexander Toshev, Jonathan Tompson, Chris Bregler, Kevin Murphy
Trained on COCO data alone, our final system achieves an average precision of 0.649 on the COCO test-dev set and 0.643 on the test-standard set, outperforming the winner of the 2016 COCO keypoints challenge and other recent state-of-the-art methods.
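For context, COCO keypoint average precision is computed by thresholding Object Keypoint Similarity (OKS), a scale-normalized keypoint-distance score. A minimal sketch of the standard OKS formula follows; the function and variable names here are illustrative, not from the paper:

```python
import numpy as np

def oks(pred, gt, vis, area, k):
    """Object Keypoint Similarity: the per-instance score behind COCO keypoint AP.
    pred, gt: (N, 2) predicted / ground-truth keypoint coordinates.
    vis:      (N,) visibility flags (>0 means the keypoint is labeled).
    area:     object scale (segment area); k: (N,) per-keypoint falloff constants."""
    d2 = np.sum((pred - gt) ** 2, axis=1)               # squared pixel distance per keypoint
    e = d2 / (2.0 * area * k ** 2 + np.finfo(float).eps)  # normalize by object scale and falloff
    labeled = vis > 0
    return float(np.exp(-e)[labeled].mean()) if labeled.any() else 0.0
```

A perfect prediction yields an OKS of 1.0; AP is then averaged over OKS thresholds from 0.5 to 0.95.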
no code implementations • CVPR 2015 • Ahmed Elhayek, Edilson de Aguiar, Arjun Jain, Jonathan Tompson, Leonid Pishchulin, Micha Andriluka, Chris Bregler, Bernt Schiele, Christian Theobalt
Our approach unites a discriminative image-based joint detection method with a model-based generative motion tracking algorithm through a combined pose optimization energy.