no code implementations • 20 Jun 2024 • Rotem Shalev-Arkushin, Aharon Azulay, Tavi Halperin, Eitan Richardson, Amit H. Bermano, Ohad Fried
We show that, despite imperfect data, by learning from our generated data and leveraging the prior of pretrained diffusion models, our model performs the desired edit consistently while preserving the original video content.
no code implementations • 7 Dec 2023 • Nir Zabari, Aharon Azulay, Alexey Gorkor, Tavi Halperin, Ohad Fried
To tackle these issues, we present a novel image colorization framework that utilizes image diffusion techniques with granular text prompts.
no code implementations • 17 Oct 2021 • Aharon Azulay, Tavi Halperin, Orestis Vantzos, Nadav Borenstein, Ofir Bibi
Temporally consistent dense video annotations are scarce and hard to collect.
no code implementations • 19 May 2021 • Tavi Halperin, Hanit Hakim, Orestis Vantzos, Gershon Hochman, Netai Benaim, Lior Sassy, Michael Kupchik, Ofir Bibi, Ohad Fried
We present an algorithm for producing a seamless animated loop from a single image.
no code implementations • 6 Mar 2019 • Tavi Halperin, Harel Cain, Ofir Bibi, Michael Werman
Digital videos, such as those captured by a smartphone, often exhibit exposure inconsistencies, a poorly exposed sky, or simply an uninteresting, plain-looking sky.
1 code implementation • ICML 2019 • Tavi Halperin, Ariel Ephrat, Yedid Hoshen
In this work, we introduce a new method, Neural Egg Separation, to tackle the scenario of extracting a signal from an unobserved distribution additively mixed with a signal from an observed distribution.
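The additive-mixture setting can be sketched with synthetic 1-D signals (a minimal illustration; the array names and the naive mean-subtraction baseline are ours, not the paper's method):

```python
import numpy as np

rng = np.random.default_rng(0)

# Observed-distribution samples: signals we can draw in isolation.
observed = rng.normal(0.0, 1.0, size=(100, 64))

# Unobserved-distribution samples: never available on their own at
# training time; shown here only to build the synthetic mixtures.
unobserved = rng.laplace(0.0, 0.5, size=(100, 64))

# Training data consists only of `observed` samples and additive mixtures.
mixtures = observed + unobserved

# Goal: recover the unobserved component from `mixtures` given only
# samples of `observed`. A trivial baseline subtracts the observed mean;
# the learned method is expected to do much better than this.
estimate = mixtures - observed.mean(axis=0)
error = np.mean((estimate - unobserved) ** 2)
```

The key difficulty, which the method addresses, is that no clean sample of the unobserved component is ever available for supervision.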
1 code implementation • 19 Aug 2018 • Tavi Halperin, Ariel Ephrat, Shmuel Peleg
This alignment is based on deep audio-visual features, mapping the lips video and the speech signal to a shared representation.
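A toy sketch of aligning two embedding sequences that live in a shared representation space, by searching over integer time offsets for the best mean cosine similarity (`best_shift` and its parameters are illustrative, not the paper's implementation):

```python
import numpy as np

def best_shift(video_emb, audio_emb, max_shift=10):
    """Return the frame offset s that maximizes the mean cosine
    similarity between video_emb[s:] and audio_emb (negative s
    shifts in the opposite direction)."""
    def unit(x):
        return x / (np.linalg.norm(x, axis=1, keepdims=True) + 1e-8)
    v, a = unit(np.asarray(video_emb)), unit(np.asarray(audio_emb))
    scores = {}
    for s in range(-max_shift, max_shift + 1):
        if s >= 0:
            overlap = min(len(v) - s, len(a))
            sim = (v[s:s + overlap] * a[:overlap]).sum(axis=1)
        else:
            overlap = min(len(v), len(a) + s)
            sim = (v[:overlap] * a[-s:-s + overlap]).sum(axis=1)
        if overlap > 0:
            scores[s] = sim.mean()
    return max(scores, key=scores.get)

# Toy check: the audio sequence equals the video sequence delayed by 3 frames.
rng = np.random.default_rng(1)
video = rng.normal(size=(50, 16))
audio = video[3:]
```

In practice the embeddings would come from the learned audio-visual networks; here random vectors stand in for them, since any shared space with matching corresponding frames exhibits the same peak.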
no code implementations • 22 Aug 2017 • Aviv Gabbay, Ariel Ephrat, Tavi Halperin, Shmuel Peleg
Isolating the voice of a specific person while filtering out other voices or background noises is challenging when video is shot in noisy environments.
no code implementations • 1 Aug 2017 • Ariel Ephrat, Tavi Halperin, Shmuel Peleg
Speechreading, the task of inferring phonetic information from visually observed articulatory facial movements, is notoriously difficult for humans to perform.
no code implementations • 28 Mar 2017 • Tavi Halperin, Michael Werman
In this paper we propose a new method to compute the epipolar geometry from a video stream, by exploiting the following observation: For a pixel p in Image A, all pixels corresponding to p in Image B are on the same epipolar line.
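The observation is the standard epipolar constraint: for a fundamental matrix F, the pixels in Image B corresponding to p lie on the line l' = F p, and any corresponding point q satisfies qᵀ F p = 0. A minimal numpy illustration, using a toy rectified-stereo F (the matrix and pixel values are illustrative, not from the paper):

```python
import numpy as np

# Fundamental matrix for a pure horizontal translation (rectified
# stereo): corresponding points share the same image row.
F = np.array([[0.0, 0.0,  0.0],
              [0.0, 0.0, -1.0],
              [0.0, 1.0,  0.0]])

p = np.array([10.0, 20.0, 1.0])   # pixel p in Image A (homogeneous coords)

# All pixels in Image B corresponding to p lie on the epipolar line
# l' = F p, with line coefficients (a, b, c) meaning ax + by + c = 0.
line = F @ p                       # -> (0, -1, 20), i.e. the row y = 20

# Any point q on that line satisfies the epipolar constraint q^T F p = 0.
q = np.array([37.0, 20.0, 1.0])    # same row, so it lies on the line
assert abs(q @ line) < 1e-9
```

With many such pixel-to-line constraints accumulated over a video stream, F can be estimated without explicit point-to-point matches.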
no code implementations • 26 Apr 2016 • Tavi Halperin, Yair Poleg, Chetan Arora, Shmuel Peleg
However, this accentuates the shake caused by natural head motion in an egocentric video, making the fast-forwarded video useless.
no code implementations • 17 Apr 2016 • Gil Ben-Artzi, Tavi Halperin, Michael Werman, Shmuel Peleg
This paper proposes a similarity measure between lines that indicates whether two lines are corresponding epipolar lines, enabling the discovery of epipolar line correspondences needed to compute epipolar geometry.
no code implementations • CVPR 2015 • Yair Poleg, Tavi Halperin, Chetan Arora, Shmuel Peleg
While egocentric cameras like GoPro are gaining popularity, the videos they capture are long, boring, and difficult to watch from start to end.