Image Manipulation
157 papers with code • 1 benchmark • 4 datasets
Libraries
Use these libraries to find Image Manipulation models and implementations.
Most implemented papers
Kornia: an Open Source Differentiable Computer Vision Library for PyTorch
This work presents Kornia -- an open source computer vision library which consists of a set of differentiable routines and modules to solve generic computer vision problems.
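To illustrate what a "differentiable routine" buys you, here is a minimal numpy sketch (not Kornia's actual API): a contrast adjustment whose gradient with respect to its parameter is available in closed form, so the parameter of an edit can itself be recovered by gradient descent. The function names and the toy setup are illustrative assumptions.

```python
import numpy as np

def adjust_contrast(img, alpha):
    """Differentiable contrast adjustment: out = mean + alpha * (img - mean)."""
    mean = img.mean()
    return mean + alpha * (img - mean)

def contrast_grad(img, alpha, target):
    """Analytic gradient of the MSE loss ||adjust_contrast(img, alpha) - target||^2
    with respect to alpha."""
    mean = img.mean()
    out = adjust_contrast(img, alpha)
    # d(loss)/d(alpha) = 2 * mean((out - target) * (img - mean))
    return 2.0 * np.mean((out - target) * (img - mean))

# Recover an unknown contrast factor by gradient descent (toy example).
rng = np.random.default_rng(0)
img = rng.random((8, 8))
target = adjust_contrast(img, 1.7)   # "ground-truth" edited image

alpha, lr = 1.0, 0.5
for _ in range(200):
    alpha -= lr * contrast_grad(img, alpha, target)

print(round(alpha, 3))  # converges to 1.7
```

Libraries like Kornia generalize this idea: every image operation is written so that gradients flow through it (via PyTorch autograd rather than hand-derived formulas), which lets image transforms sit inside trainable pipelines.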
Swapping Autoencoder for Deep Image Manipulation
Deep generative models have become increasingly effective at producing realistic images from randomly sampled seeds, but using such models for controllable manipulation of existing images remains challenging.
Learning Accurate Dense Correspondences and When to Trust Them
Establishing dense correspondences between a pair of images is an important and general problem.
Point-to-Point Video Generation
We introduce point-to-point video generation, which controls the generation process with two control points: the targeted start and end frames.
ManTra-Net: Manipulation Tracing Network for Detection and Localization of Image Forgeries With Anomalous Features
To fight against real-life image forgery, which commonly involves different types and combined manipulations, we propose a unified deep neural architecture called ManTra-Net.
ManiGAN: Text-Guided Image Manipulation
The goal of our paper is to semantically edit parts of an image matching a given text that describes desired attributes (e.g., texture, colour, and background), while preserving other contents that are irrelevant to the text.
StyleGAN2 Distillation for Feed-forward Image Manipulation
Editing existing images requires embedding a given image into the latent space of StyleGAN2.
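Embedding an image into a generator's latent space is commonly done by optimizing a latent code to minimize reconstruction error. The sketch below shows that recipe on a toy linear "generator" in pure numpy; the linear map, learning rate, and step count are illustrative assumptions, not the paper's method (real inversion optimizes through a deep network with autograd and perceptual losses).

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for a pretrained generator: a fixed linear map G(w) = A @ w.
# (A real GAN generator is a deep network; the inversion recipe is the same.)
A = rng.standard_normal((16, 4))

def generate(w):
    return A @ w

target_w = rng.standard_normal(4)
target_img = generate(target_w)      # the "existing image" we want to embed

# Invert: optimize w to minimize the reconstruction error ||G(w) - target||^2.
w = np.zeros(4)
lr = 0.01
for _ in range(2000):
    residual = generate(w) - target_img
    grad = 2.0 * A.T @ residual      # analytic gradient of the squared error
    w -= lr * grad

print(bool(np.allclose(generate(w), target_img, atol=1e-3)))  # True
```

Once a faithful latent code is found, edits are made by moving it along semantic directions in latent space; the distillation approach above trains a feed-forward network to skip this per-image optimization.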
Conditional Image Generation and Manipulation for User-Specified Content
Image generation and manipulation can be controlled by conditioning the model on additional, user-specified information.
Pivotal Tuning for Latent-based Editing of Real Images
The key idea is pivotal tuning - a brief training process that preserves the editing quality of an in-domain latent region, while changing its portrayed identity and appearance.
StyleGAN-NADA: CLIP-Guided Domain Adaptation of Image Generators
Can a generative model be trained to produce images from a specific domain, guided by a text prompt only, without seeing any image?