no code implementations • 17 Oct 2024 • Jingwei Ma, Erika Lu, Roni Paiss, Shiran Zada, Aleksander Holynski, Tali Dekel, Brian Curless, Michael Rubinstein, Forrester Cole
Panoramic image stitching provides a unified, wide-angle view of a scene that extends beyond the camera's field of view.
no code implementations • 30 Sep 2024 • Bowei Chen, Yifan Wang, Brian Curless, Ira Kemelmacher-Shlizerman, Steven M. Seitz
Given an input painting, we reconstruct a time-lapse video of how it may have been painted.
no code implementations • 27 Aug 2024 • Xiaojuan Wang, Boyang Zhou, Brian Curless, Ira Kemelmacher-Shlizerman, Aleksander Holynski, Steven M. Seitz
We adapt a pretrained large-scale image-to-video diffusion model (originally trained to generate videos moving forward in time from a single input image) for key frame interpolation, i.e., to produce a video in between two input frames.
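As a rough illustration of how a forward-only image-to-video generator might be repurposed for in-betweening, the sketch below fuses a sample generated forward from the first key frame with a time-reversed sample generated from the second. `generate_video` is a toy placeholder standing in for the pretrained model, and the crossfade fusion is an assumption for illustration, not the authors' released method.

```python
# Hedged sketch: fuse a forward generation from frame A with a time-reversed
# generation from frame B to obtain an in-between video. `generate_video` is a
# placeholder for a pretrained image-to-video model.
import numpy as np

def generate_video(start_frame: np.ndarray, num_frames: int) -> np.ndarray:
    # Placeholder for a model that hallucinates motion forward in time
    # from a single input frame (here: a trivial synthetic drift).
    drift = np.linspace(0.0, 1.0, num_frames)[:, None, None, None]
    return start_frame[None] * (1.0 - 0.1 * drift)

def interpolate_keyframes(frame_a, frame_b, num_frames=16):
    forward = generate_video(frame_a, num_frames)               # A -> future
    backward = generate_video(frame_b, num_frames)[::-1]        # B -> past, reversed in time
    w = np.linspace(0.0, 1.0, num_frames)[:, None, None, None]  # blend weights over time
    return (1.0 - w) * forward + w * backward                   # endpoints match A and B

if __name__ == "__main__":
    a = np.random.rand(64, 64, 3)
    b = np.random.rand(64, 64, 3)
    print(interpolate_keyframes(a, b).shape)  # (16, 64, 64, 3)
```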
no code implementations • 26 Apr 2024 • Alice Gao, Samyukta Jayakumar, Marcello Maniglia, Brian Curless, Ira Kemelmacher-Shlizerman, Aaron R. Seitz, Steven M. Seitz
We consider the question of how to best achieve the perception of eye contact when a person is captured by camera and then rendered on a 2D display.
no code implementations • CVPR 2024 • Xiaojuan Wang, Janne Kontkanen, Brian Curless, Steve Seitz, Ira Kemelmacher, Ben Mildenhall, Pratul Srinivasan, Dor Verbin, Aleksander Holynski
We present a method that uses a text-to-image model to generate consistent content across multiple image scales, enabling extreme semantic zooms into a scene, e.g., ranging from a wide-angle landscape view of a forest to a macro shot of an insect sitting on one of the tree branches.
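The sketch below illustrates only the cross-scale consistency constraint implied by such a zoom stack: the center crop of each coarser image should agree with a downsampled copy of the next, more zoomed-in image. The generator is replaced by random placeholder images, and `enforce_consistency` is an illustrative projection step under those assumptions, not the paper's sampling procedure.

```python
# Hedged sketch of cross-scale consistency in a zoom stack: level i+1 depicts
# the central quarter of level i at twice the resolution.
import numpy as np

def downsample2(img):
    # 2x box downsampling.
    return 0.25 * (img[0::2, 0::2] + img[1::2, 0::2] + img[0::2, 1::2] + img[1::2, 1::2])

def enforce_consistency(stack):
    size = stack[0].shape[0]
    q = size // 4
    for i in range(len(stack) - 1):
        # Average the coarse center crop with the downsampled finer level.
        shared = 0.5 * (stack[i][q:3 * q, q:3 * q] + downsample2(stack[i + 1]))
        stack[i][q:3 * q, q:3 * q] = shared
        # A full method would also push `shared` back into the finer level.
    return stack

stack = [np.random.rand(64, 64, 3) for _ in range(4)]  # stand-ins for generated images
stack = enforce_consistency(stack)
print(len(stack), stack[0].shape)
```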
no code implementations • 12 Oct 2023 • Mengyi Shan, Brian Curless, Ira Kemelmacher-Shlizerman, Steve Seitz
We present a system that automatically brings street view imagery to life by populating it with naturally behaving, animated pedestrians and vehicles.
no code implementations • CVPR 2024 • Bowei Chen, Brian Curless, Ira Kemelmacher-Shlizerman, Steven M. Seitz
We present a method to generate full-body selfies from photographs originally taken at arm's length.
no code implementations • CVPR 2023 • David Futschik, Kelvin Ritland, James Vecore, Sean Fanello, Sergio Orts-Escolano, Brian Curless, Daniel Sýkora, Rohit Pandey
We introduce light diffusion, a novel method to improve lighting in portraits, softening harsh shadows and specular highlights while preserving overall scene illumination.
no code implementations • CVPR 2023 • Chung-Yi Weng, Pratul P. Srinivasan, Brian Curless, Ira Kemelmacher-Shlizerman
We present PersonNeRF, a method that takes a collection of photos of a subject (e.g., Roger Federer) captured across multiple years with arbitrary body poses and appearances, and enables rendering the subject with arbitrary novel combinations of viewpoint, body pose, and appearance.
no code implementations • 2 Aug 2022 • James Noeckel, Benjamin T. Jones, Karl Willis, Brian Curless, Adriana Schulz
We describe our work on inferring the degrees of freedom between mated parts in mechanical assemblies using deep learning on CAD representations.
no code implementations • CVPR 2022 • Qianqian Wang, Zhengqi Li, David Salesin, Noah Snavely, Brian Curless, Janne Kontkanen
As output, we produce a video that smoothly interpolates the scene motion from the first photo to the second, while also producing camera motion with parallax that gives a heightened sense of 3D.
2 code implementations • 10 Feb 2022 • Fitsum Reda, Janne Kontkanen, Eric Tabellion, Deqing Sun, Caroline Pantofaru, Brian Curless
Recent methods use multiple networks to estimate optical flow or depth and a separate network dedicated to frame synthesis.
Ranked #2 on Video Frame Interpolation on Middlebury (SSIM metric)
1 code implementation • CVPR 2022 • Chung-Yi Weng, Brian Curless, Pratul P. Srinivasan, Jonathan T. Barron, Ira Kemelmacher-Shlizerman
Our method optimizes for a volumetric representation of the person in a canonical T-pose, in concert with a motion field that maps the estimated canonical representation to every frame of the video via backward warps.
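A minimal sketch of the canonical-space query described above, using toy stand-ins: to shade a point observed at frame t, warp it back into the canonical T-pose with a per-frame motion field and evaluate the canonical volume there. `canonical_field` and `backward_warp` below are placeholders, not the paper's networks.

```python
# Hedged sketch: query a canonical volumetric representation through a backward warp.
import numpy as np

def canonical_field(x):
    # Placeholder canonical volume: (rgb, density) for points in T-pose space.
    rgb = 0.5 + 0.5 * np.tanh(x)
    density = np.exp(-np.sum(x ** 2, axis=-1, keepdims=True))
    return rgb, density

def backward_warp(x_observed, frame_motion):
    # Placeholder motion field: maps observed-frame points back to the canonical
    # pose (here just a rigid per-frame translation).
    return x_observed - frame_motion

def query(x_observed, frame_motion):
    x_canonical = backward_warp(x_observed, frame_motion)
    return canonical_field(x_canonical)

pts = np.random.randn(1024, 3)        # sample points along camera rays at frame t
motion = np.array([0.1, -0.2, 0.05])  # toy per-frame motion parameters
rgb, sigma = query(pts, motion)
print(rgb.shape, sigma.shape)
```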
no code implementations • ICCV 2021 • Varun Jampani, Huiwen Chang, Kyle Sargent, Abhishek Kar, Richard Tucker, Michael Krainin, Dominik Kaeser, William T. Freeman, David Salesin, Brian Curless, Ce Liu
We present SLIDE, a modular and unified system for single image 3D photography that uses a simple yet effective soft layering strategy to better preserve appearance details in novel views.
1 code implementation • 21 Jul 2021 • James Noeckel, Haisen Zhao, Brian Curless, Adriana Schulz
We propose a novel method to generate fabrication blueprints from images of carpentered items.
no code implementations • ICCV 2021 • Soumyadip Sengupta, Brian Curless, Ira Kemelmacher-Shlizerman, Steve Seitz
Whereas existing light stages require expensive, room-scale spherical capture gantries and exist in only a few labs in the world, we demonstrate how to acquire useful data from a normal TV or desktop monitor.
no code implementations • 23 Dec 2020 • Chung-Yi Weng, Brian Curless, Ira Kemelmacher-Shlizerman
At the core of our method is a volumetric 3D human representation reconstructed with a deep network trained on input video, enabling novel pose/view synthesis.
no code implementations • CVPR 2021 • Edward Zhang, Ricardo Martin-Brualla, Janne Kontkanen, Brian Curless
Removing objects from images is a challenging problem that is important for many applications, including mixed reality.
2 code implementations • CVPR 2021 • Shanchuan Lin, Andrey Ryabtsev, Soumyadip Sengupta, Brian Curless, Steve Seitz, Ira Kemelmacher-Shlizerman
We introduce a real-time, high-resolution background replacement technique that operates at 30fps at 4K resolution and 60fps at HD on a modern GPU.
no code implementations • CVPR 2021 • Aleksander Holynski, Brian Curless, Steven M. Seitz, Richard Szeliski
In this paper, we demonstrate a fully automatic method for converting a still image into a realistic animated looping video.
2 code implementations • ECCV 2020 • Luyang Zhu, Konstantinos Rematas, Brian Curless, Steve Seitz, Ira Kemelmacher-Shlizerman
Based on these models, we introduce a new method that takes as input a single photo of a clothed player in any basketball pose and outputs a high-resolution mesh and 3D pose for that player.
no code implementations • ECCV 2020 • Yifan Wang, Brian Curless, Steve Seitz
By analyzing the motion of people and other objects in a scene, we demonstrate how to infer depth, occlusion, lighting, and shadow information from video taken from a single camera viewpoint.
1 code implementation • CVPR 2020 • Soumyadip Sengupta, Vivek Jayaram, Brian Curless, Steve Seitz, Ira Kemelmacher-Shlizerman
To bridge the domain gap to real imagery with no labeling, we train another matting network guided by the first network and by a discriminator that judges the quality of composites.
Ranked #1 on Image Matting on Adobe Matting
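A minimal sketch of the self-supervised setup described in the entry above, with tiny placeholder modules: a student matting network is trained on unlabeled real photos using pseudo-labels from the supervised (teacher) network plus an adversarial term from a discriminator that scores the realism of composites onto new backgrounds. All module names and loss weights here are assumptions for illustration.

```python
# Hedged sketch: teacher-guided + discriminator-guided training of a second
# matting network on unlabeled real imagery. All modules are tiny placeholders.
import torch
import torch.nn as nn

class TinyMattingNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Conv2d(6, 4, kernel_size=3, padding=1)  # image+background -> alpha+foreground
    def forward(self, image, background):
        out = self.net(torch.cat([image, background], dim=1))
        return torch.sigmoid(out[:, :1]), torch.sigmoid(out[:, 1:])  # alpha, foreground

class TinyDiscriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Conv2d(3, 1, kernel_size=3, padding=1)
    def forward(self, composite):
        return torch.sigmoid(self.net(composite)).mean(dim=(1, 2, 3))

teacher, student, disc = TinyMattingNet(), TinyMattingNet(), TinyDiscriminator()

image = torch.rand(2, 3, 64, 64)       # unlabeled real photo of the subject
background = torch.rand(2, 3, 64, 64)  # photo of the scene without the subject
new_bg = torch.rand(2, 3, 64, 64)      # novel background to composite onto

with torch.no_grad():
    alpha_t, fg_t = teacher(image, background)   # pseudo-labels from the supervised network
alpha_s, fg_s = student(image, background)

composite = alpha_s * fg_s + (1 - alpha_s) * new_bg
loss_teacher = (alpha_s - alpha_t).abs().mean() + (fg_s - fg_t).abs().mean()
loss_adv = -torch.log(disc(composite) + 1e-6).mean()  # encourage realistic composites
loss = loss_teacher + 0.1 * loss_adv
loss.backward()
print(float(loss))
```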
no code implementations • 8 Jun 2019 • Chris Sweeney, Aleksander Holynski, Brian Curless, Steve M. Seitz
We present a novel Structure from Motion pipeline that is capable of reconstructing accurate camera poses for panorama-style video capture without prior camera intrinsic calibration.
no code implementations • CVPR 2019 • Chung-Yi Weng, Brian Curless, Ira Kemelmacher-Shlizerman
The key contributions of this paper are: 1) an application of viewing and animating humans in single photos in 3D, 2) a novel 2D warping method to deform a posable template body model to fit the person's complex silhouette to create an animatable mesh, and 3) a method for handling partial self occlusions.
no code implementations • CVPR 2018 • Konstantinos Rematas, Ira Kemelmacher-Shlizerman, Brian Curless, Steve Seitz
We present a system that transforms a monocular video of a soccer game into a moving 3D reconstruction, in which the players and field can be rendered interactively with a 3D viewer or through an Augmented Reality device.
no code implementations • CVPR 2018 • Edward Zhang, Michael F. Cohen, Brian Curless
Given the geometry, materials, and illuminated appearance of the scene, the light localization problem is to completely recover the number, positions, and intensities of the lights.
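The light localization problem can be read as an inverse-rendering optimization; the sketch below poses a toy version with a single point light, Lambertian shading, and inverse-square falloff, solved by nonlinear least squares. This is an illustrative formulation under stated assumptions, not the paper's algorithm.

```python
# Hedged sketch: recover a point light's position and intensity from observed
# Lambertian shading on known geometry, via nonlinear least squares.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
points = rng.uniform(-1, 1, size=(200, 3))             # known surface sample positions
normals = np.tile(np.array([0.0, 0.0, 1.0]), (200, 1))  # known surface normals
albedo = 0.8                                             # known material

def shade(light_pos, intensity):
    d = light_pos - points
    r2 = np.sum(d * d, axis=1)
    cos = np.clip(np.sum(normals * d, axis=1) / np.sqrt(r2), 0.0, None)
    return albedo * intensity * cos / r2                 # Lambertian with inverse-square falloff

true_light = np.array([0.3, -0.2, 2.0])
observed = shade(true_light, 5.0)                        # "illuminated appearance" of the scene

def residual(params):
    return shade(params[:3], params[3]) - observed

fit = least_squares(residual, x0=np.array([0.0, 0.0, 1.5, 1.0]))
print("recovered position:", fit.x[:3], "intensity:", fit.x[3])
```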
no code implementations • CVPR 2015 • Juliet Fiss, Brian Curless, Rick Szeliski
In this paper, we use matting to separate foreground layers from light fields captured with a plenoptic camera.
no code implementations • CVPR 2014 • Qi Shan, Brian Curless, Yasutaka Furukawa, Carlos Hernandez, Steven M. Seitz
The proposed approach outperforms state-of-the-art MVS techniques for challenging Internet datasets, yielding dramatic quality improvements both around object contours and in surface detail.