Search Results for author: Oran Gafni

Found 11 papers, 2 papers with code

IM-3D: Iterative Multiview Diffusion and Reconstruction for High-Quality 3D Generation

no code implementations • 13 Feb 2024 • Luke Melas-Kyriazi, Iro Laina, Christian Rupprecht, Natalia Neverova, Andrea Vedaldi, Oran Gafni, Filippos Kokkinos

A mitigation is to fine-tune the 2D generator to be multi-view aware, which can help distillation or can be combined with reconstruction networks to output 3D objects directly.

3D Generation • 3D Reconstruction • +1
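
A short sketch of the pipeline the title names: a 2D diffusion model proposes multiple views, a reconstruction step fits a 3D object to them, and renders of that object seed the next diffusion round. Everything below is a hypothetical skeleton; the three callables stand in for models the excerpt describes only at a high level.

    from typing import Any, Callable, List

    def iterative_multiview_3d(
        prompt: str,
        diffuse_views: Callable[[str, List[Any]], List[Any]],  # text + prior renders -> N view images
        reconstruct: Callable[[List[Any]], Any],               # view images -> 3D object
        render_views: Callable[[Any], List[Any]],              # 3D object -> renders at the same cameras
        rounds: int = 3,
    ) -> Any:
        """Alternate multiview generation and 3D reconstruction for a few rounds."""
        renders: List[Any] = []
        obj = None
        for _ in range(rounds):
            views = diffuse_views(prompt, renders)  # multi-view-aware 2D generation
            obj = reconstruct(views)                # fit a 3D object to the views
            renders = render_views(obj)             # feed renders back into the next round
        return obj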

Mosaic-SDF for 3D Generative Models

no code implementations • 14 Dec 2023 • Lior Yariv, Omri Puny, Natalia Neverova, Oran Gafni, Yaron Lipman

Current diffusion- or flow-based generative models for 3D shapes divide into two approaches: distilling pre-trained 2D image diffusion models, and training directly on 3D shapes.

3D Generation • 3D Shape Representation • +1
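
For readers unfamiliar with the representation the title refers to: a signed distance function (SDF) is negative inside a shape, positive outside, and zero on its surface, and "training directly on 3D shapes" typically means learning over such fields. A toy sphere example (all numbers illustrative, not from the paper):

    import numpy as np

    def sphere_sdf(points: np.ndarray, radius: float = 0.5) -> np.ndarray:
        """Signed distance from each 3D point to a sphere centered at the origin."""
        return np.linalg.norm(points, axis=-1) - radius

    # Sample the SDF on a regular 33^3 grid, as a direct-3D training target might be.
    axis = np.linspace(-1.0, 1.0, 33)
    grid = np.stack(np.meshgrid(axis, axis, axis, indexing="ij"), axis=-1)
    values = sphere_sdf(grid.reshape(-1, 3)).reshape(33, 33, 33)
    print(values[16, 16, 16])  # -0.5: the grid center lies 0.5 inside the surface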

SpaText: Spatio-Textual Representation for Controllable Image Generation

no code implementations • CVPR 2023 • Omri Avrahami, Thomas Hayes, Oran Gafni, Sonal Gupta, Yaniv Taigman, Devi Parikh, Dani Lischinski, Ohad Fried, Xi Yin

Due to the lack of large-scale datasets with a detailed textual description for each region of an image, we leverage existing large-scale text-to-image datasets and base our approach on a novel CLIP-based spatio-textual representation, demonstrating its effectiveness on two state-of-the-art diffusion models: one pixel-based and one latent-based.

Text-to-Image Generation
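
A minimal sketch of how a CLIP-based spatio-textual representation of the kind described above could be assembled, assuming OpenAI's CLIP package and hypothetical mask/prompt inputs; this illustrates the idea, not the paper's implementation.

    import torch
    import clip  # pip install git+https://github.com/openai/CLIP.git

    device = "cuda" if torch.cuda.is_available() else "cpu"
    model, _ = clip.load("ViT-B/32", device=device)

    def spatio_textual_map(masks, prompts, height, width):
        """Paint each region's CLIP text embedding into its mask: a (512, H, W) map."""
        out = torch.zeros(512, height, width, device=device)  # ViT-B/32 embeds to D=512
        with torch.no_grad():
            for mask, prompt in zip(masks, prompts):
                emb = model.encode_text(clip.tokenize([prompt]).to(device))[0].float()
                emb = emb / emb.norm()              # unit-normalize, as CLIP similarity does
                out[:, mask.bool()] = emb[:, None]  # broadcast over the region's pixels
        return out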

Make-A-Video: Text-to-Video Generation without Text-Video Data

2 code implementations • 29 Sep 2022 • Uriel Singer, Adam Polyak, Thomas Hayes, Xi Yin, Jie An, Songyang Zhang, Qiyuan Hu, Harry Yang, Oron Ashual, Oran Gafni, Devi Parikh, Sonal Gupta, Yaniv Taigman

We propose Make-A-Video, an approach for directly translating the tremendous recent progress in Text-to-Image (T2I) generation to Text-to-Video (T2V).

Ranked #3 on Text-to-Video Generation on MSR-VTT (CLIP-FID metric)

Image Generation • Super-Resolution • +2

Make-A-Scene: Scene-Based Text-to-Image Generation with Human Priors

1 code implementation • 24 Mar 2022 • Oran Gafni, Adam Polyak, Oron Ashual, Shelly Sheynin, Devi Parikh, Yaniv Taigman

Recent text-to-image generation methods provide a simple yet exciting conversion capability between text and image domains.

Ranked #20 on Text-to-Image Generation on MS COCO (using extra training data)

Semantic Segmentation • Text-to-Image Generation

Single-Shot Freestyle Dance Reenactment

no code implementations • CVPR 2021 • Oran Gafni, Oron Ashual, Lior Wolf

The task of motion transfer between a source dancer and a target person is a special case of the pose transfer problem, in which the target person changes their pose in accordance with the motions of the dancer.

Pose Transfer

Low Bandwidth Video-Chat Compression using Deep Generative Models

no code implementations • 1 Dec 2020 • Maxime Oquab, Pierre Stock, Oran Gafni, Daniel Haziza, Tao Xu, Peizhao Zhang, Onur Celebi, Yana Hasson, Patrick Labatut, Bobo Bose-Kolanu, Thibault Peyronel, Camille Couprie

To unlock video chat for hundreds of millions of people hindered by poor connectivity or unaffordable data costs, we propose to authentically reconstruct faces on the receiver's device using facial landmarks extracted at the sender's side and transmitted over the network.
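
A back-of-the-envelope worked example of why transmitting landmarks beats transmitting frames; the landmark count, coordinate precision, and frame rate below are illustrative assumptions, not figures from the paper.

    LANDMARKS = 68        # a common facial-landmark count (e.g., dlib's 68-point model)
    BYTES_PER_COORD = 2   # 16-bit fixed point per x/y coordinate
    FPS = 30

    landmark_bps = LANDMARKS * 2 * BYTES_PER_COORD * 8 * FPS  # bits per second
    print(f"landmark stream: {landmark_bps / 1000:.1f} kbit/s")  # 65.3 kbit/s, before entropy coding
    # A conventional low-bitrate video call needs several hundred kbit/s, so
    # sending landmarks and generatively reconstructing the face on the
    # receiver's device can cut bandwidth by an order of magnitude.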

Wish You Were Here: Context-Aware Human Generation

no code implementations • CVPR 2020 • Oran Gafni, Lior Wolf

We present a novel method for inserting objects, specifically humans, into existing images such that they blend in photorealistically while respecting the semantic context of the scene.

Pose Transfer

Live Face De-Identification in Video

no code implementations • ICCV 2019 • Oran Gafni, Lior Wolf, Yaniv Taigman

We propose a method for face de-identification that enables fully automatic video modification at high frame rates.

De-identification

Vid2Game: Controllable Characters Extracted from Real-World Videos

no code implementations • ICLR 2020 • Oran Gafni, Lior Wolf, Yaniv Taigman

The second network maps the current pose, the new pose, and a given background to an output frame.
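
The excerpt pins down the second network's interface: (current pose, new pose, background) -> output frame. Below is a hedged PyTorch sketch of that interface, with a toy convolutional stack standing in for the paper's actual pose-to-frame architecture.

    import torch
    import torch.nn as nn

    class Pose2Frame(nn.Module):
        """Toy stand-in: concatenate both pose maps with the background, predict a frame."""
        def __init__(self, pose_channels: int = 18):  # e.g., one heatmap per keypoint
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(2 * pose_channels + 3, 64, 3, padding=1), nn.ReLU(),
                nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),
            )

        def forward(self, cur_pose, new_pose, background):
            # All inputs share the same spatial size; condition via channel concatenation.
            return self.net(torch.cat([cur_pose, new_pose, background], dim=1))

    frame = Pose2Frame()(torch.rand(1, 18, 64, 64), torch.rand(1, 18, 64, 64), torch.rand(1, 3, 64, 64))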
