Search Results for author: Lior Bracha

Found 4 papers, 3 papers with code

GD-VDM: Generated Depth for better Diffusion-based Video Generation

1 code implementation • 19 Jun 2023 • Ariel Lapid, Idan Achituve, Lior Bracha, Ethan Fetaya

GD-VDM is based on a two-phase generation process: it first generates depth videos and then applies a novel diffusion Vid2Vid model to produce a coherent real-world video.

Image Generation • Video Generation
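The two-phase process described in the GD-VDM snippet above can be illustrated with a short schematic. The sketch below is only an assumption-laden outline, not the authors' released code: `DepthVideoDiffusion`, `DepthConditionedVid2Vid`, and all tensor shapes are hypothetical placeholders standing in for the actual depth-video diffusion model and the depth-conditioned Vid2Vid diffusion model.

```python
# Hypothetical sketch of a two-phase depth-then-video generation pipeline
# in the spirit of GD-VDM. Module names and shapes are placeholders only.
import torch
import torch.nn as nn


class DepthVideoDiffusion(nn.Module):
    """Phase 1 (placeholder): turns noise into a depth video of shape (B, T, 1, H, W)."""
    def forward(self, noise: torch.Tensor) -> torch.Tensor:
        return noise  # stand-in for an iterative diffusion sampler


class DepthConditionedVid2Vid(nn.Module):
    """Phase 2 (placeholder): diffusion Vid2Vid model conditioned on the depth video."""
    def forward(self, noise: torch.Tensor, depth: torch.Tensor) -> torch.Tensor:
        return noise  # stand-in for conditional sampling guided by `depth`


@torch.no_grad()
def generate_video(b: int = 1, t: int = 16, h: int = 64, w: int = 64) -> torch.Tensor:
    phase1, phase2 = DepthVideoDiffusion(), DepthConditionedVid2Vid()
    depth = phase1(torch.randn(b, t, 1, h, w))         # coarse scene layout as a depth video
    video = phase2(torch.randn(b, t, 3, h, w), depth)  # RGB video conditioned on that depth
    return video


print(generate_video().shape)  # torch.Size([1, 16, 3, 64, 64])
```

The point of the split is that the first model only has to capture scene geometry and motion, while the second model fills in appearance conditioned on that geometry.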

LipVoicer: Generating Speech from Silent Videos Guided by Lip Reading

1 code implementation • 5 Jun 2023 • Yochai Yemini, Aviv Shamsian, Lior Bracha, Sharon Gannot, Ethan Fetaya

We then condition a diffusion model on the video and use the extracted text through a classifier-guidance mechanism where a pre-trained ASR serves as the classifier.

Lip Reading
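The classifier-guidance mechanism mentioned in the LipVoicer snippet above can be sketched in a few lines. The code below is a hypothetical illustration of generic classifier guidance where a pre-trained ASR plays the role of the classifier; `AudioDiffusion`, `PretrainedASR`, the guidance scale, and all shapes are assumptions, and the sqrt(1 - alpha_bar_t) factor from standard classifier guidance is dropped for brevity.

```python
# Hypothetical sketch: an ASR model acts as the "classifier" that steers a
# video-conditioned audio diffusion model toward the lip-read transcript.
import torch
import torch.nn as nn


class AudioDiffusion(nn.Module):
    """Placeholder: predicts noise for an audio sample conditioned on a silent video."""
    def forward(self, x_t, t, video):
        return torch.zeros_like(x_t)


class PretrainedASR(nn.Module):
    """Placeholder: returns a differentiable log p(text | noisy audio)."""
    def forward(self, x_t, text_tokens):
        return -(x_t ** 2).mean()  # dummy log-likelihood for illustration


def guided_noise_estimate(diffusion, asr, x_t, t, video, text_tokens, scale=2.0):
    eps = diffusion(x_t, t, video)
    with torch.enable_grad():
        x_in = x_t.detach().requires_grad_(True)
        log_prob = asr(x_in, text_tokens)
        grad = torch.autograd.grad(log_prob, x_in)[0]
    # Classifier guidance: push the noise estimate against -grad log p(text | x_t),
    # so denoising moves the sample toward audio the ASR transcribes as `text_tokens`.
    return eps - scale * grad


x_t = torch.randn(1, 16000)              # noisy audio sample
video = torch.randn(1, 16, 3, 96, 96)    # silent talking-head clip
text = torch.randint(0, 100, (1, 20))    # token ids from a lip-reading model
eps_hat = guided_noise_estimate(
    AudioDiffusion(), PretrainedASR(), x_t, torch.tensor([10]), video, text
)
print(eps_hat.shape)  # torch.Size([1, 16000])
```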

DisCLIP: Open-Vocabulary Referring Expression Generation

no code implementations • 30 May 2023 • Lior Bracha, Eitan Shaar, Aviv Shamsian, Ethan Fetaya, Gal Chechik

Our results highlight the potential of using pre-trained visual-semantic models for generating high-quality contextual descriptions.

Referring Expression • Referring Expression Generation
