no code implementations • 30 May 2024 • Lukas Uzolas, Elmar Eisemann, Petr Kellnhofer
Animation techniques bring digital 3D worlds and characters to life.
1 code implementation • NeurIPS 2023 • Lukas Uzolas, Elmar Eisemann, Petr Kellnhofer
We demonstrate the versatility of our representation on a variety of articulated objects from common datasets and obtain reposable 3D reconstructions without the need for object-specific skeletal templates.
no code implementations • 28 Jun 2022 • Alexander W. Bergman, Petr Kellnhofer, Wang Yifan, Eric R. Chan, David B. Lindell, Gordon Wetzstein
Unsupervised learning of 3D-aware generative adversarial networks (GANs) using only collections of single-view 2D photographs has recently made rapid progress.
no code implementations • NeurIPS 2021 • Alexander W. Bergman, Petr Kellnhofer, Gordon Wetzstein
Inspired by neural variants of image-based rendering, we develop a new neural rendering approach with the goal of quickly learning a high-quality representation that can also be rendered in real time.
1 code implementation • CVPR 2021 • Petr Kellnhofer, Lars Jebe, Andrew Jones, Ryan Spicer, Kari Pulli, Gordon Wetzstein
Novel view synthesis is a challenging and ill-posed inverse rendering problem.
3 code implementations • CVPR 2021 • Eric R. Chan, Marco Monteiro, Petr Kellnhofer, Jiajun Wu, Gordon Wetzstein
We have witnessed rapid progress on 3D-aware image synthesis, leveraging recent advances in generative visual models and neural rendering.
Ranked #3 on Scene Generation on VizDoom
2 code implementations • ICCV 2019 • Petr Kellnhofer, Adria Recasens, Simon Stent, Wojciech Matusik, Antonio Torralba
Finally, we demonstrate an application of our model for estimating customer attention in a supermarket setting.
Ranked #4 on Gaze Estimation on Gaze360
no code implementations • journal 2019 • Subramanian Sundaram, Petr Kellnhofer, Yunzhu Li, Jun-Yan Zhu, Antonio Torralba, Wojciech Matusik
Using a low-cost (about US$10) scalable tactile glove sensor array, we record a large-scale tactile dataset with 135,000 frames, each covering the full hand, while interacting with 26 different objects.
1 code implementation • 7 Feb 2019 • Alexandre Kaspar, Tae-Hyun Oh, Liane Makatura, Petr Kellnhofer, Jacqueline Aslarus, Wojciech Matusik
Motivated by the recent potential of mass customization brought by whole-garment knitting machines, we introduce the new problem of automatic machine instruction generation using a single image of the desired physical product, which we apply to machine knitting.
1 code implementation • ECCV 2018 • Adrià Recasens, Petr Kellnhofer, Simon Stent, Wojciech Matusik, Antonio Torralba
We introduce a saliency-based distortion layer for convolutional neural networks that helps to improve the spatial sampling of input data for a given task.
no code implementations • ECCV 2018 • Yagiz Aksoy, Changil Kim, Petr Kellnhofer, Sylvain Paris, Mohamed Elgharib, Marc Pollefeys, Wojciech Matusik
We present a dataset of thousands of ambient and flash illumination pairs to enable studying flash photography and other applications that can benefit from having separate illuminations.
2 code implementations • CVPR 2016 • Kyle Krafka, Aditya Khosla, Petr Kellnhofer, Harini Kannan, Suchendra Bhandarkar, Wojciech Matusik, Antonio Torralba
We believe that we can put the power of eye tracking in everyone's palm by building eye tracking software that works on commodity hardware such as mobile phones and tablets, without the need for additional sensors or devices.