Search Results for author: Aleksander Holynski

Found 19 papers, 5 papers with code

Structure from Motion for Panorama-Style Videos

no code implementations8 Jun 2019 Chris Sweeney, Aleksander Holynski, Brian Curless, Steve M Seitz

We present a novel Structure from Motion pipeline that is capable of reconstructing accurate camera poses for panorama-style video capture without prior camera intrinsic calibration.

Seeing the World in a Bag of Chips

no code implementations CVPR 2020 Jeong Joon Park, Aleksander Holynski, Steve Seitz

We address the dual problems of novel view synthesis and environment reconstruction from hand-held RGBD sensors.

Novel View Synthesis

Reducing Drift in Structure From Motion Using Extended Features

no code implementations27 Aug 2020 Aleksander Holynski, David Geraghty, Jan-Michael Frahm, Chris Sweeney, Richard Szeliski

Low-frequency long-range errors (drift) are an endemic problem in 3D structure from motion, and can often hamper reasonable reconstructions of the scene.

Animating Pictures with Eulerian Motion Fields

no code implementations CVPR 2021 Aleksander Holynski, Brian Curless, Steven M. Seitz, Richard Szeliski

In this paper, we demonstrate a fully automatic method for converting a still image into a realistic animated looping video.

Image-to-Image Translation, Translation

SunStage: Portrait Reconstruction and Relighting using the Sun as a Light Stage

no code implementations CVPR 2023 Yifan Wang, Aleksander Holynski, Xiuming Zhang, Xuaner Zhang

Our method only requires the user to capture a selfie video outdoors, rotating in place, and uses the varying angles between the sun and the face as guidance in joint reconstruction of facial geometry, reflectance, camera pose, and lighting parameters.

Novel View Synthesis

InstructPix2Pix: Learning to Follow Image Editing Instructions

5 code implementations CVPR 2023 Tim Brooks, Aleksander Holynski, Alexei A. Efros

We propose a method for editing images from human instructions: given an input image and a written instruction that tells the model what to do, our model follows these instructions to edit the image.

Language Modelling, Text-based Image Editing
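
Because InstructPix2Pix has released code and community integrations, a minimal usage sketch is possible. The snippet below assumes the Hugging Face diffusers integration and the publicly released timbrooks/instruct-pix2pix checkpoint; the prompt and parameter values are illustrative, not the authors' recommended settings.

    import torch
    from PIL import Image
    from diffusers import StableDiffusionInstructPix2PixPipeline

    # Load the released InstructPix2Pix weights (assumes the diffusers integration).
    pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained(
        "timbrooks/instruct-pix2pix", torch_dtype=torch.float16
    ).to("cuda")

    # Edit an image by following a written instruction.
    image = Image.open("input.jpg").convert("RGB")
    edited = pipe(
        "make it look like a watercolor painting",  # the editing instruction
        image=image,
        num_inference_steps=20,
        image_guidance_scale=1.5,  # how closely to stay faithful to the input image
        guidance_scale=7.5,        # how strongly to follow the instruction
    ).images[0]
    edited.save("edited.jpg")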

Nerfbusters: Removing Ghostly Artifacts from Casually Captured NeRFs

1 code implementation ICCV 2023 Frederik Warburg, Ethan Weber, Matthew Tancik, Aleksander Holynski, Angjoo Kanazawa

Casually captured Neural Radiance Fields (NeRFs) suffer from artifacts such as floaters or flawed geometry when rendered outside the camera trajectory.

Novel View Synthesis

Generative Image Dynamics

no code implementations14 Sep 2023 Zhengqi Li, Richard Tucker, Noah Snavely, Aleksander Holynski

We present an approach to modeling an image-space prior on scene motion.

RealFill: Reference-Driven Generation for Authentic Image Completion

no code implementations28 Sep 2023 Luming Tang, Nataniel Ruiz, Qinghao Chu, Yuanzhen Li, Aleksander Holynski, David E. Jacobs, Bharath Hariharan, Yael Pritch, Neal Wadhwa, Kfir Aberman, Michael Rubinstein

Once personalized, RealFill is able to complete a target image with visually compelling contents that are faithful to the original scene.

State of the Art on Diffusion Models for Visual Computing

no code implementations11 Oct 2023 Ryan Po, Wang Yifan, Vladislav Golyanik, Kfir Aberman, Jonathan T. Barron, Amit H. Bermano, Eric Ryan Chan, Tali Dekel, Aleksander Holynski, Angjoo Kanazawa, C. Karen Liu, Lingjie Liu, Ben Mildenhall, Matthias Nießner, Björn Ommer, Christian Theobalt, Peter Wonka, Gordon Wetzstein

The field of visual computing is rapidly advancing due to the emergence of generative artificial intelligence (AI), which unlocks unprecedented capabilities for the generation, editing, and reconstruction of images, videos, and 3D scenes.

Generative Powers of Ten

no code implementations4 Dec 2023 Xiaojuan Wang, Janne Kontkanen, Brian Curless, Steve Seitz, Ira Kemelmacher, Ben Mildenhall, Pratul Srinivasan, Dor Verbin, Aleksander Holynski

We present a method that uses a text-to-image model to generate consistent content across multiple image scales, enabling extreme semantic zooms into a scene, e.g., ranging from a wide-angle landscape view of a forest to a macro shot of an insect sitting on one of the tree branches.

Image Super-Resolution

Readout Guidance: Learning Control from Diffusion Features

no code implementations4 Dec 2023 Grace Luo, Trevor Darrell, Oliver Wang, Dan B Goldman, Aleksander Holynski

We present Readout Guidance, a method for controlling text-to-image diffusion models with learned signals.
