no code implementations • 11 Feb 2025 • Ivan Lopes, Valentin Deschaintre, Yannick Hold-Geoffroy, Raoul de Charette
As a result, our method seamlessly integrates a desired material into the target location in the photograph while retaining the identity of the scene.
no code implementations • 4 Dec 2024 • Xiaohe Ma, Valentin Deschaintre, Miloš Hašan, Fujun Luan, Kun Zhou, Hongzhi Wu, Yiwei Hu
High-quality material generation is key for virtual environment authoring and inverse rendering.
no code implementations • 28 Nov 2024 • Michael Fischer, Iliyan Georgiev, Thibault Groueix, Vladimir G. Kim, Tobias Ritschel, Valentin Deschaintre
Our approach works on arbitrary 3D representations and outperforms several strong baselines in terms of selection accuracy and multiview consistency.
no code implementations • 25 Jun 2024 • Ruben Wiersma, Julien Philip, Miloš Hašan, Krishna Mullia, Fujun Luan, Elmar Eisemann, Valentin Deschaintre
Relightable object acquisition is a key challenge in simplifying digital asset creation.
no code implementations • 1 May 2024 • Julia Guerrero-Viu, Miloš Hašan, Arthur Roullier, Midhun Harikumar, Yiwei Hu, Paul Guerrero, Diego Gutierrez, Belen Masia, Valentin Deschaintre
Generative models have enabled intuitive image creation and manipulation using natural language.
no code implementations • 1 May 2024 • Zheng Zeng, Valentin Deschaintre, Iliyan Georgiev, Yannick Hold-Geoffroy, Yiwei Hu, Fujun Luan, Ling-Qi Yan, Miloš Hašan
Our X→RGB model explores a middle ground between traditional rendering and generative models: we can specify only certain appearance properties that should be followed, and give the model freedom to hallucinate a plausible version of the rest.
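A minimal sketch of this partial-conditioning idea, assuming a hypothetical model interface; `DiffusionSynthesizer` and the channel names are illustrative placeholders, not the paper's API. Known intrinsic channels are stacked with a validity mask, and unspecified properties are left for the model to fill in.

```python
# Hypothetical sketch of partial intrinsic-channel conditioning in the
# spirit of the X->RGB setup described above. Names are placeholders.
import torch

def build_condition(albedo=None, normals=None, roughness=None, size=(512, 512)):
    """Stack the known intrinsic channels; mark missing ones with zeros
    plus a per-channel validity mask so the model may hallucinate them."""
    h, w = size
    chans, mask = [], []
    for maps, c in ((albedo, 3), (normals, 3), (roughness, 1)):
        if maps is None:
            chans.append(torch.zeros(c, h, w))   # unspecified property
            mask.append(torch.zeros(1, h, w))
        else:
            chans.append(maps)                   # property the output must follow
            mask.append(torch.ones(1, h, w))
    return torch.cat(chans + mask, dim=0)

# Usage: condition only on albedo; lighting, normals, and roughness are
# left to the generative model.
cond = build_condition(albedo=torch.rand(3, 512, 512))
# rgb = DiffusionSynthesizer().sample(cond)  # placeholder model call
```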
no code implementations • 18 Apr 2024 • Xinyue Wei, Kai Zhang, Sai Bi, Hao Tan, Fujun Luan, Valentin Deschaintre, Kalyan Sunkavalli, Hao Su, Zexiang Xu
This allows for end-to-end mesh reconstruction by fine-tuning a pre-trained NeRF LRM with mesh rendering.
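A hedged sketch of what such a fine-tuning step could look like; `lrm` and `diff_mesh_renderer` below are placeholder callables standing in for the pre-trained reconstruction model and a differentiable mesh-rendering pipeline, not the paper's implementation.

```python
# Illustrative fine-tuning step: render the mesh derived from a
# pre-trained reconstruction model differentiably, supervise with a
# photometric loss, and let gradients flow back into the model.
import torch

def finetune_step(lrm, diff_mesh_renderer, images, cameras, optimizer):
    field = lrm(images)                                # pre-trained NeRF LRM output
    rendered = diff_mesh_renderer(field, cameras)      # differentiable mesh render
    loss = torch.nn.functional.mse_loss(rendered, images)
    optimizer.zero_grad()
    loss.backward()                                    # gradients pass through the
    optimizer.step()                                   # mesh rendering step
    return loss.item()
```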
no code implementations • 3 Apr 2024 • Duygu Ceylan, Valentin Deschaintre, Thibault Groueix, Rosalie Martin, Chun-Hao Huang, Romain Rouffet, Vladimir Kim, Gaëtan Lassagne
We present MatAtlas, a method for consistent text-guided 3D model texturing.
no code implementations • CVPR 2024 • Giuseppe Vecchio, Valentin Deschaintre
We introduce MatSynth, a dataset of 4,000+ CC0 ultra-high-resolution PBR materials.
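MatSynth is distributed via the Hugging Face Hub; the snippet below assumes the `gvecchio/MatSynth` dataset id and should be checked against the current release before use.

```python
# Streaming access to MatSynth with the Hugging Face `datasets` library.
# The dataset id and field names are assumptions; verify them against the
# published dataset card.
from datasets import load_dataset

ds = load_dataset("gvecchio/MatSynth", streaming=True)  # large dataset, stream it
sample = next(iter(ds["train"]))
print(sample.keys())  # e.g. basecolor / normal / roughness / metallic maps
```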
no code implementations • 4 Sep 2023 • Giuseppe Vecchio, Rosalie Martin, Arthur Roullier, Adrien Kaiser, Romain Rouffet, Valentin Deschaintre, Tamy Boubekeur
Our generative approach further permits exploring the variety of materials that could correspond to the input image, mitigating the ambiguity of its unknown lighting conditions.
no code implementations • 25 Jul 2023 • Valentin Deschaintre, Julia Guerrero-Viu, Diego Gutierrez, Tamy Boubekeur, Belen Masia
We introduce text2fabric, a novel dataset that links free-text descriptions to various fabric materials.
no code implementations • 6 Jul 2023 • Kai Yan, Fujun Luan, Miloš Hašan, Thibault Groueix, Valentin Deschaintre, Shuang Zhao
A 3D digital scene contains many components: lights, materials, and geometries that interact to produce the desired appearance.
no code implementations • 22 May 2023 • Prafull Sharma, Julien Philip, Michaël Gharbi, William T. Freeman, Fredo Durand, Valentin Deschaintre
We present a method capable of selecting the regions of a photograph exhibiting the same material as an artist-chosen area.
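A hedged sketch of the core selection operation this implies: compare per-pixel features against the feature at a user-chosen point and threshold the similarity. The feature extractor and threshold are stand-ins, not the paper's trained model.

```python
# Query-based material selection sketch: pixels whose descriptors are
# similar enough to the clicked pixel's descriptor form the selection.
import torch
import torch.nn.functional as F

def select_same_material(features, query_xy, threshold=0.8):
    """features: (C, H, W) per-pixel descriptors; query_xy: (x, y) click."""
    x, y = query_xy
    q = features[:, y, x]                                   # query descriptor
    sim = F.cosine_similarity(features, q[:, None, None], dim=0)
    return sim > threshold                                  # boolean mask (H, W)
```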
no code implementations • 20 May 2023 • Xilong Zhou, Miloš Hašan, Valentin Deschaintre, Paul Guerrero, Yannick Hold-Geoffroy, Kalyan Sunkavalli, Nima Khademi Kalantari
Instead, we train a generator for a neural material representation that is rendered with a learned relighting module to create arbitrarily lit RGB images; these are compared against real photos using a discriminator.
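A simplified sketch of this adversarial setup, with placeholder module internals: a generator produces a neural material, a learned relighting module renders it under random lighting, and a discriminator compares the renders against real photographs.

```python
# One GAN step in the spirit of the setup above (non-saturating loss).
# The generator, relighter, and discriminator architectures are assumed.
import torch
import torch.nn.functional as F

def gan_step(generator, relighter, discriminator, real_photos, opt_g, opt_d, z_dim=256):
    z = torch.randn(real_photos.size(0), z_dim)
    material = generator(z)                         # neural material representation
    lighting = torch.rand(real_photos.size(0), 4)   # random lighting parameters
    fake = relighter(material, lighting)            # arbitrarily lit RGB image

    # Discriminator: distinguish real photos from relit materials.
    d_loss = (F.softplus(-discriminator(real_photos)).mean()
              + F.softplus(discriminator(fake.detach())).mean())
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator (and relighter, if opt_g covers both) fool the discriminator.
    g_loss = F.softplus(-discriminator(fake)).mean()
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```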
1 code implementation • 4 May 2023 • Julien Philip, Valentin Deschaintre
NeRF acquisition typically requires careful choice of near planes for the different cameras or suffers from background collapse, creating floating artifacts on the edges of the captured scene.
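One way to realize the kind of near-camera regularization this points at is to down-weight gradients for samples close to the camera, so they cannot overfit into floaters. The sketch below is illustrative and may differ from the paper's exact scheme.

```python
# Distance-based gradient scaling sketch: the forward pass is identity,
# but gradients of near-camera samples are damped quadratically.
import torch

class GradScale(torch.autograd.Function):
    @staticmethod
    def forward(ctx, sample_values, distances):
        ctx.save_for_backward(distances)
        return sample_values

    @staticmethod
    def backward(ctx, grad_out):
        (distances,) = ctx.saved_tensors
        scale = distances.pow(2).clamp(max=1.0)   # quadratic ramp, capped at 1
        return grad_out * scale.unsqueeze(-1), None

# Usage: values = GradScale.apply(values, ray_distances)
```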
no code implementations • 12 Jun 2022 • Xilong Zhou, Miloš Hašan, Valentin Deschaintre, Paul Guerrero, Kalyan Sunkavalli, Nima Khademi Kalantari
The resulting materials are tileable, can be larger than the target image, and are editable by varying the condition.
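A common way to obtain tileable outputs from a convolutional generator is circular padding, so features wrap around the image borders; the sketch below shows the idea, without claiming it matches the paper's exact mechanism.

```python
# Circular padding makes convolution outputs wrap at the borders, so the
# generated texture tiles seamlessly. Sketch only.
import torch

conv = torch.nn.Conv2d(64, 64, kernel_size=3, padding=1, padding_mode="circular")
x = torch.randn(1, 64, 128, 128)
y = conv(x)   # left/right and top/bottom edges now match up
```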
no code implementations • 23 Feb 2022 • Shouchang Guo, Valentin Deschaintre, Douglas Noll, Arthur Roullier
We present a novel U-Attention vision Transformer for universal texture synthesis.
no code implementations • CVPR 2021 • Valentin Deschaintre, Yiming Lin, Abhijeet Ghosh
We present a novel method for efficient acquisition of shape and spatially varying reflectance of 3D objects using polarization cues.
no code implementations • 23 Feb 2021 • Philipp Henzler, Valentin Deschaintre, Niloy J. Mitra, Tobias Ritschel
We learn a latent space for easy capture, consistent interpolation, and efficient reproduction of visual material appearance.
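Once materials live in a smooth latent space, consistent interpolation between two captures reduces to blending their codes. In the illustrative sketch below, `encode` and `decode` are placeholder names for such a model's interfaces.

```python
# Latent-space material interpolation sketch.
import torch

def interpolate_materials(encode, decode, photo_a, photo_b, t=0.5):
    za, zb = encode(photo_a), encode(photo_b)   # embed both captures
    z = (1.0 - t) * za + t * zb                 # linear blend in latent space
    return decode(z)                            # plausible in-between material
```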
1 code implementation • 6 Jul 2020 • Valentin Deschaintre, George Drettakis, Adrien Bousseau
Our solution is extremely simple: we fine-tune a deep appearance-capture network on the provided exemplars, such that it learns to extract similar SVBRDF values from the target image.
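A minimal sketch of that fine-tuning strategy: adapt a pre-trained SVBRDF estimation network to the provided exemplars before running it on the target photograph. The loss and optimizer choices here are illustrative, not the paper's exact settings.

```python
# Fine-tune a pre-trained appearance-capture network on a few exemplars.
import torch

def finetune_on_exemplars(net, exemplar_photos, exemplar_svbrdfs, steps=200, lr=1e-5):
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(steps):
        pred = net(exemplar_photos)              # predicted SVBRDF maps
        loss = torch.nn.functional.l1_loss(pred, exemplar_svbrdfs)
        opt.zero_grad(); loss.backward(); opt.step()
    return net   # now biased toward the exemplars' appearance
```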
1 code implementation • 27 Jun 2019 • Valentin Deschaintre, Miika Aittala, Fredo Durand, George Drettakis, Adrien Bousseau
Empowered by deep learning, recent methods for material capture can estimate a spatially-varying reflectance from a single photograph.
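Such methods typically map a photograph to a stack of per-pixel reflectance maps; the schematic below shows one common decomposition of a network's output into normals, diffuse, roughness, and specular maps. The channel layout is an assumption for illustration.

```python
# Splitting a network's raw output into standard SVBRDF maps.
import torch

def split_svbrdf(net_output):
    """net_output: (B, 10, H, W) -> normals(3), diffuse(3), roughness(1), specular(3)."""
    normals   = torch.nn.functional.normalize(net_output[:, 0:3], dim=1)
    diffuse   = torch.sigmoid(net_output[:, 3:6])
    roughness = torch.sigmoid(net_output[:, 6:7])
    specular  = torch.sigmoid(net_output[:, 7:10])
    return normals, diffuse, roughness, specular
```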
1 code implementation • 23 Oct 2018 • Valentin Deschaintre, Miika Aittala, Fredo Durand, George Drettakis, Adrien Bousseau
Texture, highlights, and shading are some of many visual cues that allow humans to perceive material appearance in single pictures.