Search Results for author: Kfir Aberman

Found 15 papers, 7 with code

Prompt-to-Prompt Image Editing with Cross Attention Control

no code implementations • 2 Aug 2022 • Amir Hertz, Ron Mokady, Jay Tenenbaum, Kfir Aberman, Yael Pritch, Daniel Cohen-Or

Editing is challenging for these generative models: an editing technique must preserve most of the original image, while in text-based models even a small modification of the text prompt often leads to a completely different outcome.

Image Generation
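The core idea of the paper — steering an edit by reusing the cross-attention maps computed for the source prompt when generating with the edited prompt — can be illustrated with a toy NumPy sketch. The shapes, random features, and the blanket injection rule below are illustrative only, not the paper's implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(q, k, v, injected_weights=None):
    """Scaled dot-product cross-attention between image queries q and
    text keys/values k, v. If injected_weights is given, the freshly
    computed attention map is discarded and the injected one (e.g. from
    the source prompt) is used instead, preserving the source layout."""
    d = q.shape[-1]
    weights = softmax(q @ k.T / np.sqrt(d))
    if injected_weights is not None:
        weights = injected_weights
    return weights @ v, weights

rng = np.random.default_rng(0)
q = rng.normal(size=(16, 8))                                       # 16 image patches, dim 8
k_src, v_src = rng.normal(size=(5, 8)), rng.normal(size=(5, 8))    # source prompt tokens
k_edit, v_edit = rng.normal(size=(5, 8)), rng.normal(size=(5, 8))  # edited prompt tokens

# Pass 1: attend to the source prompt and record the attention maps.
_, src_maps = cross_attention(q, k_src, v_src)
# Pass 2: attend to the edited prompt, but inject the source maps.
out, _ = cross_attention(q, k_edit, v_edit, injected_weights=src_maps)
```

Because the injected maps decide *where* each text token acts, the edited prompt changes *what* appears while the spatial structure of the original generation is retained.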

MoDi: Unconditional Motion Synthesis from Diverse Data

no code implementations • 16 Jun 2022 • Sigal Raab, Inbal Leibovitch, Peizhuo Li, Kfir Aberman, Olga Sorkine-Hornung, Daniel Cohen-Or

Our model is trained in a completely unsupervised setting from a diverse, unstructured and unlabeled motion dataset and yields a well-behaved, highly semantic latent space.

Motion Interpolation · Motion Synthesis

GANimator: Neural Motion Synthesis from a Single Sequence

1 code implementation • 5 May 2022 • Peizhuo Li, Kfir Aberman, Zihan Zhang, Rana Hanocka, Olga Sorkine-Hornung

We present GANimator, a generative model that learns to synthesize novel motions from a single, short motion sequence.

Motion Synthesis · Style Transfer

MyStyle: A Personalized Generative Prior

no code implementations • 31 Mar 2022 • Yotam Nitzan, Kfir Aberman, Qiurui He, Orly Liba, Michal Yarom, Yossi Gandelsman, Inbar Mosseri, Yael Pritch, Daniel Cohen-Or

Given a small reference set of portrait images of a person (~100), we tune the weights of a pretrained StyleGAN face generator to form a local, low-dimensional, personalized manifold in the latent space.

Image Enhancement · Super-Resolution

Rhythm is a Dancer: Music-Driven Motion Synthesis with Global Structure

no code implementations • 23 Nov 2021 • Andreas Aristidou, Anastasios Yiannakidis, Kfir Aberman, Daniel Cohen-Or, Ariel Shamir, Yiorgos Chrysanthou

In this work, we present a music-driven motion synthesis framework that generates long-term sequences of human motions which are synchronized with the input beats, and jointly form a global structure that respects a specific dance genre.

Motion Synthesis

Deep Saliency Prior for Reducing Visual Distraction

no code implementations • CVPR 2022 • Kfir Aberman, Junfeng He, Yossi Gandelsman, Inbar Mosseri, David E. Jacobs, Kai Kohlhoff, Yael Pritch, Michael Rubinstein

Using only a model that was trained to predict where people look in images, and no additional training data, we can produce a range of powerful editing effects for reducing distraction in images.
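The recipe — treat a frozen saliency predictor as a black-box loss and optimize an image edit so the predicted attention in a distractor region drops — can be sketched in a few lines of NumPy. The "saliency model" and the single-parameter exposure edit below are toy stand-ins, not the paper's pretrained gaze network or its editing operators:

```python
import numpy as np

def saliency(img):
    # Toy stand-in for the pretrained gaze model: brighter pixels score higher.
    return img ** 2

def reduce_distraction(img, mask, steps=100, lr=0.05):
    """Optimize an edit (here a single exposure scale applied inside
    `mask`) so the saliency model predicts less attention there. The
    gradient is estimated by finite differences, since the saliency
    model is treated as a black box in this sketch."""
    scale = 1.0
    def loss(s):
        edited = np.where(mask, img * s, img)
        return saliency(edited)[mask].sum()
    for _ in range(steps):
        eps = 1e-3
        g = (loss(scale + eps) - loss(scale - eps)) / (2 * eps)
        scale = float(np.clip(scale - lr * g, 0.0, 1.0))
    return scale

rng = np.random.default_rng(1)
img = rng.uniform(size=(8, 8))
mask = np.zeros((8, 8), dtype=bool)
mask[2:5, 2:5] = True                      # the distracting region
before = saliency(img)[mask].sum()
scale = reduce_distraction(img, mask)
after = saliency(np.where(mask, img * scale, img))[mask].sum()
```

The same optimization loop works for richer edits (recoloring, blur, inpainting) by swapping in more parameters; the saliency model itself is never updated.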

Learning Skeletal Articulations with Neural Blend Shapes

1 code implementation • 6 May 2021 • Peizhuo Li, Kfir Aberman, Rana Hanocka, Libin Liu, Olga Sorkine-Hornung, Baoquan Chen

Furthermore, we propose neural blend shapes, a set of corrective pose-dependent shapes that improve the deformation quality in the joint regions, addressing the notorious artifacts that result from standard rigging and skinning.

Zoom-to-Inpaint: Image Inpainting with High-Frequency Details

1 code implementation • 17 Dec 2020 • Soo Ye Kim, Kfir Aberman, Nori Kanazawa, Rahul Garg, Neal Wadhwa, Huiwen Chang, Nikhil Karnad, Munchurl Kim, Orly Liba

Although deep learning has enabled a huge leap forward in image inpainting, current methods are often unable to synthesize realistic high-frequency details.

Image Inpainting · Super-Resolution

Neural Alignment for Face De-pixelization

no code implementations • 29 Sep 2020 • Maayan Shuvi, Noa Fish, Kfir Aberman, Ariel Shamir, Daniel Cohen-Or

Although simple, our framework synthesizes high-quality face reconstructions, demonstrating that given the statistical prior of a human face, multiple aligned pixelated frames contain sufficient information to reconstruct a high-quality approximation of the original signal.

MotioNet: 3D Human Motion Reconstruction from Monocular Video with Skeleton Consistency

no code implementations • 22 Jun 2020 • Mingyi Shi, Kfir Aberman, Andreas Aristidou, Taku Komura, Dani Lischinski, Daniel Cohen-Or, Baoquan Chen

We introduce MotioNet, a deep neural network that directly reconstructs the motion of a 3D human skeleton from monocular video. While previous methods rely on either rigging or inverse kinematics (IK) to associate a consistent skeleton with temporally coherent joint rotations, our method is the first data-driven approach that directly outputs a kinematic skeleton, a complete and commonly used motion representation.
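What makes a kinematic skeleton a "complete" motion representation is that fixed bone lengths plus per-frame joint rotations determine every joint position via forward kinematics, with bone lengths guaranteed consistent across frames. A minimal 2-D forward-kinematics sketch (a single chain; the bone lengths and angles are made up for illustration, not MotioNet's output format):

```python
import numpy as np

def fk_chain(bone_lengths, joint_angles):
    """2-D forward kinematics for a single kinematic chain.
    `bone_lengths` is fixed for the whole clip (the consistent
    skeleton); `joint_angles` are per-joint rotations relative to the
    parent bone. Returns the world position of every joint."""
    positions = [np.zeros(2)]
    heading = 0.0
    for length, angle in zip(bone_lengths, joint_angles):
        heading += angle                      # rotations accumulate down the chain
        direction = np.array([np.cos(heading), np.sin(heading)])
        positions.append(positions[-1] + length * direction)
    return np.stack(positions)

bones = np.array([1.0, 0.8, 0.5])                        # fixed skeleton (bone lengths)
angles = np.array([np.pi / 2, -np.pi / 4, -np.pi / 4])   # one frame of joint rotations
joints = fk_chain(bones, angles)
```

Whatever rotations a frame carries, the distances between consecutive joints always equal the declared bone lengths — which is exactly the skeleton consistency that per-frame 3D joint-position regression cannot guarantee.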

Skeleton-Aware Networks for Deep Motion Retargeting

1 code implementation • 12 May 2020 • Kfir Aberman, Peizhuo Li, Dani Lischinski, Olga Sorkine-Hornung, Daniel Cohen-Or, Baoquan Chen

In other words, our operators form the building blocks of a new deep motion processing framework that embeds the motion into a common latent space, shared by a collection of homeomorphic skeletons.

Motion Retargeting · Motion Synthesis
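A central building block of such skeleton-aware frameworks is topology-aware pooling: joints adjacent in the skeleton graph are merged so that differently structured but homeomorphic skeletons collapse to a common coarse skeleton, and hence a shared latent space. A toy NumPy sketch (the merge groups and feature sizes are invented for illustration; the paper's operators are learned convolutions, not plain averages):

```python
import numpy as np

def skeletal_pool(features, merge_groups):
    """Topology-aware pooling: each entry of `merge_groups` lists the
    joints (indices into axis 0) collapsed into one joint of the
    coarser skeleton; their feature vectors are averaged. Applied
    repeatedly, skeletons with different joint counts reduce to a
    shared 'primal' skeleton."""
    return np.stack([features[list(g)].mean(axis=0) for g in merge_groups])

# Toy skeleton: 5 joints with a 3-channel feature per joint.
feats = np.arange(15, dtype=float).reshape(5, 3)
# Collapse joints (0, 1) and (2, 3) along their skeleton edges; keep joint 4.
pooled = skeletal_pool(feats, [(0, 1), (2, 3), (4,)])
```

Because pooling follows skeleton edges rather than a fixed grid, two characters with different numbers of joints can be encoded into latent codes of the same shape and then decoded onto either skeleton.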

Unpaired Motion Style Transfer from Video to Animation

1 code implementation • 12 May 2020 • Kfir Aberman, Yijia Weng, Dani Lischinski, Daniel Cohen-Or, Baoquan Chen

In this paper, we present a novel data-driven framework for motion style transfer, which learns from an unpaired collection of motions with style labels, and enables transferring motion styles not observed during training.

3D Reconstruction · Motion Style Transfer · +1
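A common mechanism for this kind of unpaired content/style recombination is adaptive instance normalization (AdaIN): strip the content features' per-channel statistics and impose the style features' statistics. The sketch below uses AdaIN purely to illustrate the content/style separation idea on invented feature tensors; it is not the paper's architecture:

```python
import numpy as np

def adain(content, style, eps=1e-5):
    """Adaptive instance normalization: normalize the content features
    per channel, then rescale and shift them with the style features'
    per-channel statistics, transplanting the 'style' while keeping
    the 'content' structure."""
    c_mean = content.mean(axis=-1, keepdims=True)
    c_std = content.std(axis=-1, keepdims=True)
    s_mean = style.mean(axis=-1, keepdims=True)
    s_std = style.std(axis=-1, keepdims=True)
    return s_std * (content - c_mean) / (c_std + eps) + s_mean

rng = np.random.default_rng(2)
content = rng.normal(0.0, 1.0, size=(4, 64))   # e.g. per-joint motion features over time
style = rng.normal(3.0, 0.5, size=(4, 64))     # features from a differently styled motion
out = adain(content, style)
```

Because only channel statistics cross over, styles never seen paired with a given content motion during training can still be applied to it at test time.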

Learning Character-Agnostic Motion for Motion Retargeting in 2D

2 code implementations • 5 May 2019 • Kfir Aberman, Rundi Wu, Dani Lischinski, Baoquan Chen, Daniel Cohen-Or

In order to achieve our goal, we learn to extract, directly from a video, a high-level latent motion representation, which is invariant to the skeleton geometry and the camera view.

3D Reconstruction · Motion Retargeting · +1

Deep Video-Based Performance Cloning

no code implementations • 21 Aug 2018 • Kfir Aberman, Mingyi Shi, Jing Liao, Dani Lischinski, Baoquan Chen, Daniel Cohen-Or

After training a deep generative network using a reference video capturing the appearance and dynamics of a target actor, we are able to generate videos where this actor reenacts other performances.

Neural Best-Buddies: Sparse Cross-Domain Correspondence

2 code implementations • 10 May 2018 • Kfir Aberman, Jing Liao, Mingyi Shi, Dani Lischinski, Baoquan Chen, Daniel Cohen-Or

Correspondence between images is a fundamental problem in computer vision, with a variety of graphics applications.

Image Morphing
