Search Results for author: Yael Pritch

Found 15 papers, 6 papers with code

ObjectDrop: Bootstrapping Counterfactuals for Photorealistic Object Removal and Insertion

no code implementations · 27 Mar 2024 · Daniel Winter, Matan Cohen, Shlomi Fruchter, Yael Pritch, Alex Rav-Acha, Yedid Hoshen

To tackle this challenge, we propose bootstrap supervision: leveraging our object removal model trained on a small counterfactual dataset, we synthetically expand this dataset considerably.

Counterfactual, Object
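The bootstrap-supervision idea above can be sketched as a simple data-expansion loop. Everything here is a toy stand-in: `remove_object` and `paste_object` are hypothetical callables, not the authors' models or API.

```python
import random

def bootstrap_expand(seed_pairs, remove_object, paste_object, n_synthetic=100, rng=None):
    """Expand a small counterfactual dataset with model-generated pairs.

    seed_pairs:    list of (image_with_object, image_without_object) pairs
                   captured as real counterfactuals
    remove_object: removal model trained on seed_pairs (image -> clean image)
    paste_object:  composites some object into a clean image (toy helper)
    """
    rng = rng or random.Random(0)
    expanded = list(seed_pairs)
    for _ in range(n_synthetic):
        with_obj, _ = rng.choice(seed_pairs)
        clean = remove_object(with_obj)      # model-predicted counterfactual
        synthetic = paste_object(clean)      # re-insert an object synthetically
        expanded.append((synthetic, clean))  # new supervised training pair
    return expanded
```

The key point of the loop is that the small, expensive-to-capture real dataset only seeds the process; each pass through the weak removal model yields a new supervised pair for free.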

RealFill: Reference-Driven Generation for Authentic Image Completion

no code implementations · 28 Sep 2023 · Luming Tang, Nataniel Ruiz, Qinghao Chu, Yuanzhen Li, Aleksander Holynski, David E. Jacobs, Bharath Hariharan, Yael Pritch, Neal Wadhwa, Kfir Aberman, Michael Rubinstein

Once personalized, RealFill is able to complete a target image with visually compelling contents that are faithful to the original scene.

Computational Long Exposure Mobile Photography

no code implementations · 2 Aug 2023 · Eric Tabellion, Nikhil Karnad, Noa Glaser, Ben Weiss, David E. Jacobs, Yael Pritch

Background blur images, also called panning photography, are captured while the camera is tracking a moving subject, to produce an image of a sharp subject over a background blurred by relative motion.

HyperDreamBooth: HyperNetworks for Fast Personalization of Text-to-Image Models

2 code implementations · 13 Jul 2023 · Nataniel Ruiz, Yuanzhen Li, Varun Jampani, Wei Wei, Tingbo Hou, Yael Pritch, Neal Wadhwa, Michael Rubinstein, Kfir Aberman

By composing these weights into the diffusion model, coupled with fast finetuning, HyperDreamBooth can generate a person's face in various contexts and styles, with high subject details while also preserving the model's crucial knowledge of diverse styles and semantic modifications.

Diffusion Personalization Tuning Free
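The "composing these weights into the diffusion model" step can be illustrated as adding a hypernetwork-predicted low-rank delta to a pretrained weight matrix. The shapes, the `hypernet` callable, and the function name are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

def compose_personalized_weights(base_weight, hypernet, face_embedding):
    """Return base_weight plus a low-rank, subject-specific delta.

    base_weight:    (out_dim, in_dim) matrix from the pretrained model
    hypernet:       maps a face embedding to low-rank factors
                    a: (out_dim, rank) and b: (rank, in_dim)  [toy stand-in]
    face_embedding: feature vector describing the subject
    """
    a, b = hypernet(face_embedding)
    return base_weight + a @ b  # rank-r personalization update
```

Because the delta has rank r, the hypernetwork only has to predict (out_dim + in_dim) * r numbers per layer rather than a full weight matrix, which is what makes fast personalization plausible.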

Dreamix: Video Diffusion Models are General Video Editors

no code implementations · 2 Feb 2023 · Eyal Molad, Eliahu Horwitz, Dani Valevski, Alex Rav Acha, Yossi Matias, Yael Pritch, Yaniv Leviathan, Yedid Hoshen

Our approach uses a video diffusion model to combine, at inference time, the low-resolution spatio-temporal information from the original video with new, high-resolution information that it synthesizes to align with the guiding text prompt.

Image Animation, Image to Video Generation +3

Null-text Inversion for Editing Real Images using Guided Diffusion Models

4 code implementations · CVPR 2023 · Ron Mokady, Amir Hertz, Kfir Aberman, Yael Pritch, Daniel Cohen-Or

Our Null-text inversion, based on the publicly available Stable Diffusion model, is extensively evaluated on a variety of images and prompt edits, showing high-fidelity editing of real images.

Image Generation, Text-based Image Editing
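Null-text inversion tunes the unconditional ("null") text embedding so that a DDIM reconstruction matches the inversion trajectory. A toy version of that per-timestep optimization, with a linear map `A` standing in for the denoiser (the matrix, step count, and learning rate are illustrative assumptions, not the paper's setup):

```python
import numpy as np

def optimize_null_embedding(A, target, steps=500, lr=0.05):
    """Tune a 'null' embedding so the (toy, linear) denoiser output A @ null
    matches a target latent from the inversion trajectory, i.e. minimize
    ||A @ null - target||^2 by plain gradient descent."""
    null = np.zeros(A.shape[1])
    for _ in range(steps):
        grad = A.T @ (A @ null - target)  # gradient of the squared error
        null -= lr * grad
    return null
```

The appeal of optimizing only this embedding is that the model weights stay frozen, so editing one image does not require fine-tuning the diffusion model itself.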

DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation

10 code implementations · CVPR 2023 · Nataniel Ruiz, Yuanzhen Li, Varun Jampani, Yael Pritch, Michael Rubinstein, Kfir Aberman

Once the subject is embedded in the output domain of the model, the unique identifier can be used to synthesize novel photorealistic images of the subject contextualized in different scenes.

Diffusion Personalization, Image Generation
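The unique-identifier idea pairs subject photos with a rare token, while generic class photos preserve the model's prior over the class. A sketch of that pair construction ("sks" is the identifier token popularized by community implementations; the function name and defaults are hypothetical):

```python
def dreambooth_pairs(subject_images, class_images, identifier="sks", subject_class="dog"):
    """Build (image, prompt) fine-tuning pairs: subject images get the rare
    identifier token, while class images get a generic prompt so the model
    keeps its prior over the class (prior preservation)."""
    subject = [(im, f"a photo of {identifier} {subject_class}") for im in subject_images]
    prior = [(im, f"a photo of a {subject_class}") for im in class_images]
    return subject + prior
```

After fine-tuning on pairs like these, prompting with the identifier ("sks dog on the moon") recontextualizes the specific subject rather than a generic class instance.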

Prompt-to-Prompt Image Editing with Cross Attention Control

7 code implementations · 2 Aug 2022 · Amir Hertz, Ron Mokady, Jay Tenenbaum, Kfir Aberman, Yael Pritch, Daniel Cohen-Or

Editing is challenging for these generative models, since an innate property of an editing technique is to preserve most of the original image, while in the text-based models, even a small modification of the text prompt often leads to a completely different outcome.

Image Generation, Text-based Image Editing
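The cross-attention control can be sketched as map injection: words shared between the source and edited prompts reuse the source attention maps (preserving the original layout), while newly introduced words keep their own. Representing attention as a per-word dict is a simplification of the actual per-layer, per-timestep maps.

```python
import numpy as np

def inject_cross_attention(source_maps, edited_maps):
    """source_maps / edited_maps: {word: attention map over image locations}.
    Shared words inherit the source map so the original composition is kept;
    words unique to the edited prompt keep their freshly computed map."""
    return {word: source_maps.get(word, amap) for word, amap in edited_maps.items()}
```

This is why a small prompt change ("cat" to "cat with a hat") can leave most of the image intact: only the attention for the genuinely new tokens is allowed to differ.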

MyStyle: A Personalized Generative Prior

no code implementations · 31 Mar 2022 · Yotam Nitzan, Kfir Aberman, Qiurui He, Orly Liba, Michal Yarom, Yossi Gandelsman, Inbar Mosseri, Yael Pritch, Daniel Cohen-Or

Given a small reference set of portrait images of a person (~100), we tune the weights of a pretrained StyleGAN face generator to form a local, low-dimensional, personalized manifold in the latent space.

Image Enhancement, Super-Resolution

Deep Saliency Prior for Reducing Visual Distraction

no code implementations · CVPR 2022 · Kfir Aberman, Junfeng He, Yossi Gandelsman, Inbar Mosseri, David E. Jacobs, Kai Kohlhoff, Yael Pritch, Michael Rubinstein

Using only a model that was trained to predict where people look at images, and no additional training data, we can produce a range of powerful editing effects for reducing distraction in images.

Sky Optimization: Semantically aware image processing of skies in low-light photography

1 code implementation · 15 Jun 2020 · Orly Liba, Longqi Cai, Yun-Ta Tsai, Elad Eban, Yair Movshovitz-Attias, Yael Pritch, Huizhong Chen, Jonathan T. Barron

The sky is a major component of the appearance of a photograph, and its color and tone can strongly influence the mood of a picture.

Handheld Mobile Photography in Very Low Light

no code implementations · 24 Oct 2019 · Orly Liba, Kiran Murthy, Yun-Ta Tsai, Tim Brooks, Tianfan Xue, Nikhil Karnad, Qiurui He, Jonathan T. Barron, Dillon Sharlet, Ryan Geiss, Samuel W. Hasinoff, Yael Pritch, Marc Levoy

Aside from the physical limits imposed by read noise and photon shot noise, these cameras are typically handheld, have small apertures and sensors, use mass-produced analog electronics that cannot easily be cooled, and are commonly used to photograph subjects that move, like children and pets.

Tone Mapping

Megastereo: Constructing High-Resolution Stereo Panoramas

no code implementations · CVPR 2013 · Christian Richardt, Yael Pritch, Henning Zimmer, Alexander Sorkine-Hornung

As our first contribution, we describe the necessary correction steps and a compact representation for the input images in order to achieve a highly accurate approximation to the required ray space.

