Search Results for author: Michael Gharbi

Found 7 papers, 4 papers with code

A Dataset of Multi-Illumination Images in the Wild

no code implementations ICCV 2019 Lukas Murmann, Michael Gharbi, Miika Aittala, Fredo Durand

Collections of images under a single, uncontrolled illumination have enabled the rapid advancement of core computer vision tasks like classification, detection, and segmentation.

Image Relighting

Im2Vec: Synthesizing Vector Graphics without Vector Supervision

1 code implementation CVPR 2021 Pradyumna Reddy, Michael Gharbi, Michal Lukac, Niloy J. Mitra

The current alternative is to use specialized models that require explicit supervision on the vector graphics representation at training time.

Vector Graphics

MarioNette: Self-Supervised Sprite Learning

1 code implementation NeurIPS 2021 Dmitriy Smirnov, Michael Gharbi, Matthew Fisher, Vitor Guizilini, Alexei A. Efros, Justin Solomon

Artists and video game designers often construct 2D animations using libraries of sprites -- textured patches of objects and characters.

Any-resolution Training for High-resolution Image Synthesis

1 code implementation 14 Apr 2022 Lucy Chai, Michael Gharbi, Eli Shechtman, Phillip Isola, Richard Zhang

To take advantage of varied-size data, we introduce continuous-scale training, a process that samples patches at random scales to train a new generator with variable output resolutions.

2k Image Generation +1
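The abstract excerpt above describes continuous-scale training: patches are sampled at random scales so a single generator learns to produce variable output resolutions. The snippet below is a minimal, illustrative sketch of that patch-sampling idea only, not the authors' released code; the patch size and scale range are hypothetical choices.

```python
# Minimal sketch of random-scale patch sampling (illustrative, not the paper's code):
# draw a continuous random scale, rescale the image, then crop a fixed-size patch,
# so the model always sees the same patch resolution regardless of image scale.
import numpy as np
from PIL import Image


def sample_patch(image: Image.Image, patch_size: int = 256,
                 min_scale: float = 0.25, max_scale: float = 1.0,
                 rng=None) -> np.ndarray:
    rng = rng or np.random.default_rng()
    # Random continuous scale factor (range is a hypothetical choice).
    scale = rng.uniform(min_scale, max_scale)
    w, h = image.size
    new_w = max(patch_size, int(w * scale))
    new_h = max(patch_size, int(h * scale))
    resized = image.resize((new_w, new_h), Image.LANCZOS)
    # Random fixed-size crop from the rescaled image.
    x0 = int(rng.integers(0, new_w - patch_size + 1))
    y0 = int(rng.integers(0, new_h - patch_size + 1))
    patch = resized.crop((x0, y0, x0 + patch_size, y0 + patch_size))
    return np.asarray(patch)
```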

VecFusion: Vector Font Generation with Diffusion

no code implementations 16 Dec 2023 Vikas Thamizharasan, Difan Liu, Shantanu Agarwal, Matthew Fisher, Michael Gharbi, Oliver Wang, Alec Jacobson, Evangelos Kalogerakis

We present VecFusion, a new neural architecture that can generate vector fonts with varying topological structures and precise control point positions.

Font Generation Vector Graphics

Magic Fixup: Streamlining Photo Editing by Watching Dynamic Videos

no code implementations 19 Mar 2024 Hadi AlZayer, Zhihao Xia, Xuaner Zhang, Eli Shechtman, Jia-Bin Huang, Michael Gharbi

We show that by using simple segmentations and coarse 2D manipulations, we can synthesize a photorealistic edit faithful to the user's input while addressing second-order effects like harmonizing the lighting and physical interactions between edited objects.
