Search Results for author: Maneesh Agrawala

Found 13 papers, 3 papers with code

Tree-Structured Shading Decomposition

no code implementations ICCV 2023 Chen Geng, Hong-Xing Yu, Sharon Zhang, Maneesh Agrawala, Jiajun Wu

The shade tree representation enables novice users who are unfamiliar with the physical shading process to edit object shading in an efficient and intuitive manner.
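The core idea is that shading is represented as a tree whose leaves hold base shading components and whose interior nodes combine them, so a novice can re-shade an object by editing a single leaf. A minimal sketch, assuming illustrative node types (`mix`, `multiply`) and toy scalar values rather than the paper's actual shade-tree grammar:

```python
# Hypothetical shade-tree sketch: leaves hold base shading values
# (e.g. albedo, highlight) and interior nodes combine their children.
# Node names and operations are illustrative, not the paper's grammar.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Node:
    op: Callable[[List[float]], float]        # how to combine child values
    children: List["Node"] = field(default_factory=list)
    value: float = 0.0                        # used only by leaves

def leaf(v: float) -> Node:
    return Node(op=lambda _: 0.0, value=v)

def evaluate(n: Node) -> float:
    """Recursively evaluate the tree bottom-up."""
    if not n.children:
        return n.value
    return n.op([evaluate(c) for c in n.children])

# mix(albedo * diffuse, highlight): editing one leaf re-shades the result
tree = Node(
    op=lambda xs: 0.7 * xs[0] + 0.3 * xs[1],  # mix node
    children=[
        Node(op=lambda xs: xs[0] * xs[1],     # multiply node
             children=[leaf(0.8), leaf(0.5)]),  # albedo, diffuse
        leaf(1.0),                            # highlight
    ],
)
print(round(evaluate(tree), 6))  # 0.58
```

Editing the highlight leaf and re-evaluating the tree changes only that shading term, which is what makes the representation intuitive to manipulate.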

Automated Conversion of Music Videos into Lyric Videos

no code implementations 28 Aug 2023 Jiaju Ma, Anyi Rao, Li-Yi Wei, Rubaiat Habib Kazi, Hijung Valentina Shin, Maneesh Agrawala

Musicians and fans often produce lyric videos, a form of music video that showcases the song's lyrics, for their favorite songs.

Adding Conditional Control to Text-to-Image Diffusion Models

3 code implementations ICCV 2023 Lvmin Zhang, Anyi Rao, Maneesh Agrawala

ControlNet locks the production-ready large diffusion models and reuses their deep and robust encoding layers, pretrained with billions of images, as a strong backbone to learn a diverse set of conditional controls.

Image Generation
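The mechanism behind "locking" the pretrained model is a trainable copy of its encoder attached through zero-initialized layers, so that before any training the conditioned output is identical to the frozen model's output. A toy sketch of that idea, using a 1-D feature list instead of real diffusion-model activations (the names `frozen_block`, `trainable_copy`, and `ZeroLayer` are illustrative):

```python
# Minimal sketch of the ControlNet zero-initialization idea on toy data.
def frozen_block(x):
    # stands in for a pretrained, locked encoder layer
    return [2.0 * v + 1.0 for v in x]

def trainable_copy(x, cond):
    # starts as a copy of the frozen block, but also sees the condition
    return [2.0 * v + 1.0 + c for v, c in zip(x, cond)]

class ZeroLayer:
    """1x1 'convolution' whose weight starts at zero."""
    def __init__(self):
        self.w = 0.0  # zero-initialized; grows during training

    def __call__(self, x):
        return [self.w * v for v in x]

def controlled_forward(x, cond, zero_layer):
    base = zero_layer_out = None
    base = frozen_block(x)
    zero_layer_out = zero_layer(trainable_copy(x, cond))
    return [b + c for b, c in zip(base, zero_layer_out)]

zero = ZeroLayer()
x, cond = [0.5, -1.0], [1.0, 1.0]
# Before training, the zero layer cancels the control branch, so the
# output equals the frozen model's output exactly:
assert controlled_forward(x, cond, zero) == frozen_block(x)
```

Because the control branch contributes exactly zero at initialization, training starts from the pretrained model's behavior and cannot corrupt it with random noise, which is why finetuning stays robust even on small conditioning datasets.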

AGQA 2.0: An Updated Benchmark for Compositional Spatio-Temporal Reasoning

no code implementations 12 Apr 2022 Madeleine Grunde-McLaughlin, Ranjay Krishna, Maneesh Agrawala

Prior benchmarks have analyzed models' answers to questions about videos in order to measure visual compositional reasoning.

Question Answering

Disentangled3D: Learning a 3D Generative Model with Disentangled Geometry and Appearance from Monocular Images

no code implementations CVPR 2022 Ayush Tewari, Mallikarjun B R, Xingang Pan, Ohad Fried, Maneesh Agrawala, Christian Theobalt

Our model can disentangle the geometry and appearance variations in the scene, i.e., we can independently sample from the geometry and appearance spaces of the generative model.

Iterative Text-based Editing of Talking-heads Using Neural Retargeting

no code implementations 21 Nov 2020 Xinwei Yao, Ohad Fried, Kayvon Fatahalian, Maneesh Agrawala

We present a text-based tool for editing talking-head video that enables an iterative editing workflow.

State of the Art on Neural Rendering

no code implementations 8 Apr 2020 Ayush Tewari, Ohad Fried, Justus Thies, Vincent Sitzmann, Stephen Lombardi, Kalyan Sunkavalli, Ricardo Martin-Brualla, Tomas Simon, Jason Saragih, Matthias Nießner, Rohit Pandey, Sean Fanello, Gordon Wetzstein, Jun-Yan Zhu, Christian Theobalt, Maneesh Agrawala, Eli Shechtman, Dan B. Goldman, Michael Zollhöfer

Neural rendering is a new and rapidly emerging field that combines generative machine learning techniques with physical knowledge from computer graphics, e.g., by the integration of differentiable rendering into network training.

BIG-bench Machine Learning, Image Generation +2

Rekall: Specifying Video Events using Compositions of Spatiotemporal Labels

1 code implementation 7 Oct 2019 Daniel Y. Fu, Will Crichton, James Hong, Xinwei Yao, Haotian Zhang, Anh Truong, Avanika Narayan, Maneesh Agrawala, Christopher Ré, Kayvon Fatahalian

Many real-world video analysis applications require the ability to identify domain-specific events in video, such as interviews and commercials in TV news broadcasts, or action sequences in film.
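Events like "an interview" can be specified compositionally by combining simpler spatiotemporal labels, e.g. intersecting the time intervals where two faces appear on screen. An illustrative interval-composition in plain Python; Rekall's real API differs (it provides interval sets with richer composition operations than this sketch):

```python
# Illustrative interval intersection: times where both events hold.
from typing import List, Tuple

Interval = Tuple[float, float]  # (start, end) in seconds

def overlap(a: List[Interval], b: List[Interval]) -> List[Interval]:
    """Return the intervals where an event from `a` and one from `b` co-occur."""
    out = []
    for s1, e1 in a:
        for s2, e2 in b:
            s, e = max(s1, s2), min(e1, e2)
            if s < e:  # keep only non-empty overlaps
                out.append((s, e))
    return out

# "interview" ≈ host face AND guest face on screen at the same time
host_face = [(0.0, 30.0), (60.0, 90.0)]
guest_face = [(10.0, 40.0), (70.0, 80.0)]
print(overlap(host_face, guest_face))  # [(10.0, 30.0), (70.0, 80.0)]
```

Composing a handful of such primitives (overlap, containment, sequencing) over per-frame detector outputs is the kind of query the label-composition approach supports.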

Text-based Editing of Talking-head Video

1 code implementation 4 Jun 2019 Ohad Fried, Ayush Tewari, Michael Zollhöfer, Adam Finkelstein, Eli Shechtman, Dan B. Goldman, Kyle Genova, Zeyu Jin, Christian Theobalt, Maneesh Agrawala

To edit a video, the user only has to edit the transcript; an optimization strategy then chooses segments of the input corpus as base material.

Face Model, Talking Head Generation +2
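The segment-selection step can be pictured as covering the edited transcript with stretches of the recorded corpus. A hedged sketch of that idea, using greedy longest word-level matching; the paper actually optimizes over phoneme-level segments with richer costs, and `select_segments` is a hypothetical helper:

```python
# Toy sketch: cover an edited transcript with (start, end) index ranges
# into the recorded corpus, preferring long contiguous matches.
from typing import List, Tuple

def select_segments(corpus: List[str], edit: List[str]) -> List[Tuple[int, int]]:
    """Greedy longest-match cover of `edit` by corpus index ranges."""
    segments, i = [], 0
    while i < len(edit):
        best = None
        for s in range(len(corpus)):
            n = 0
            while (s + n < len(corpus) and i + n < len(edit)
                   and corpus[s + n] == edit[i + n]):
                n += 1
            if n and (best is None or n > best[1]):
                best = (s, n)
        if best is None:
            raise ValueError(f"word {edit[i]!r} not in corpus")
        s, n = best
        segments.append((s, s + n))
        i += n
    return segments

corpus = "i think this video editing tool is great".split()
edit = "i think this tool is great".split()
print(select_segments(corpus, edit))  # [(0, 3), (5, 8)]
```

Fewer, longer segments mean fewer seams to blend in the synthesized video, which is why an optimization over segment choice matters.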

SceneSuggest: Context-driven 3D Scene Design

no code implementations 28 Feb 2017 Manolis Savva, Angel X. Chang, Maneesh Agrawala

We present SceneSuggest: an interactive 3D scene design system providing context-driven suggestions for 3D model retrieval and placement.

Graphics, Human-Computer Interaction

How do people explore virtual environments?

no code implementations 13 Dec 2016 Vincent Sitzmann, Ana Serrano, Amy Pavel, Maneesh Agrawala, Diego Gutierrez, Belen Masia, Gordon Wetzstein

Understanding how people explore immersive virtual environments is crucial for many applications, such as designing virtual reality (VR) content, developing new compression algorithms, or learning computational models of saliency or visual attention.
