Search Results for author: Yawar Siddiqui

Found 12 papers, 6 papers with code

Meta 3D AssetGen: Text-to-Mesh Generation with High-Quality Geometry, Texture, and PBR Materials

no code implementations • 2 Jul 2024 • Yawar Siddiqui, Tom Monnier, Filippos Kokkinos, Mahendra Kariya, Yanir Kleiman, Emilien Garreau, Oran Gafni, Natalia Neverova, Andrea Vedaldi, Roman Shapovalov, David Novotny

We present Meta 3D AssetGen (AssetGen), a significant advancement in text-to-3D generation which produces faithful, high-quality meshes with texture and material control.

3D Generation, Text to 3D

PolyDiff: Generating 3D Polygonal Meshes with Diffusion Models

no code implementations • 18 Dec 2023 • Antonio Alliegro, Yawar Siddiqui, Tatiana Tommasi, Matthias Nießner

In contrast to methods that use alternate 3D shape representations (e.g., implicit representations), our approach is a discrete denoising diffusion probabilistic model that operates natively on the polygonal mesh data structure.
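The abstract describes a discrete denoising diffusion model operating directly on mesh data. As a minimal sketch of what one forward corruption step of such a discrete diffusion process could look like, assuming mesh coordinates are quantized into a fixed vocabulary of bins (the function and transition scheme here are illustrative, not PolyDiff's actual formulation):

```python
import numpy as np

def corrupt_tokens(tokens: np.ndarray, beta: float, num_bins: int,
                   rng: np.random.Generator) -> np.ndarray:
    """One forward step of a uniform-transition discrete diffusion process.

    With probability beta, each quantized coordinate token is resampled
    uniformly from the vocabulary; otherwise it is kept. (Illustrative
    sketch; PolyDiff's actual transition matrices may differ.)
    """
    resample = rng.random(tokens.shape) < beta
    random_tokens = rng.integers(0, num_bins, size=tokens.shape)
    return np.where(resample, random_tokens, tokens)

rng = np.random.default_rng(0)
# 100 vertices, each (x, y, z) quantized to 256 bins
verts = rng.integers(0, 256, size=(100, 3))
noisy = corrupt_tokens(verts, beta=0.1, num_bins=256, rng=rng)
```

Chaining such steps with increasing beta drives the tokens toward uniform noise; the learned reverse model then denoises step by step.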

Denoising

MeshGPT: Generating Triangle Meshes with Decoder-Only Transformers

2 code implementations • CVPR 2024 • Yawar Siddiqui, Antonio Alliegro, Alexey Artemov, Tatiana Tommasi, Daniele Sirigatti, Vladislav Rosov, Angela Dai, Matthias Nießner

We introduce MeshGPT, a new approach for generating triangle meshes that reflects the compactness typical of artist-created meshes, in contrast to dense triangle meshes extracted by iso-surfacing methods from neural fields.
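For intuition on how a triangle mesh becomes a 1-D sequence that a decoder-only transformer can model autoregressively, here is a simplified sketch that flattens faces into quantized coordinate tokens (MeshGPT itself learns its token vocabulary with a VQ autoencoder rather than quantizing raw coordinates, so this function is only an assumption-laden illustration):

```python
import numpy as np

def mesh_to_tokens(vertices: np.ndarray, faces: np.ndarray,
                   num_bins: int = 128) -> np.ndarray:
    """Flatten a triangle mesh into a 1-D token sequence.

    Each face contributes its three vertices' quantized (x, y, z)
    coordinates, giving 9 tokens per triangle. (Simplified sketch:
    MeshGPT learns its vocabulary with a VQ autoencoder instead.)
    """
    lo, hi = vertices.min(), vertices.max()
    q = np.floor((vertices - lo) / (hi - lo + 1e-9) * num_bins).astype(np.int64)
    q = np.clip(q, 0, num_bins - 1)
    return q[faces].reshape(-1)  # shape: (num_faces * 9,)

verts = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
faces = np.array([[0, 1, 2], [0, 1, 3]])
tokens = mesh_to_tokens(verts, faces)
```

A decoder-only transformer trained on such sequences can then generate a mesh face by face, which is what keeps the output compact compared to iso-surfaced dense meshes.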

Decoder

DiffRF: Rendering-Guided 3D Radiance Field Diffusion

no code implementations • CVPR 2023 • Norman Müller, Yawar Siddiqui, Lorenzo Porzi, Samuel Rota Bulò, Peter Kontschieder, Matthias Nießner

We introduce DiffRF, a novel approach for 3D radiance field synthesis based on denoising diffusion probabilistic models.

Denoising

Texturify: Generating Textures on 3D Shape Surfaces

no code implementations • 5 Apr 2022 • Yawar Siddiqui, Justus Thies, Fangchang Ma, Qi Shan, Matthias Nießner, Angela Dai

Texture cues on 3D objects are key to compelling visual representations, enabling high visual fidelity with inherent spatial consistency across different views.

RetrievalFuse: Neural 3D Scene Reconstruction with a Database

1 code implementation • ICCV 2021 • Yawar Siddiqui, Justus Thies, Fangchang Ma, Qi Shan, Matthias Nießner, Angela Dai

3D reconstruction of large scenes is a challenging problem due to the high-complexity nature of the solution space, in particular for generative neural networks.

3D Reconstruction, 3D Scene Reconstruction, +3

SPSG: Self-Supervised Photometric Scene Generation from RGB-D Scans

1 code implementation • CVPR 2021 • Angela Dai, Yawar Siddiqui, Justus Thies, Julien Valentin, Matthias Nießner

We present SPSG, a novel approach to generate high-quality, colored 3D models of scenes from RGB-D scan observations by learning to infer unobserved scene geometry and color in a self-supervised fashion.

3D geometry, 3D Reconstruction, +1

ViewAL: Active Learning with Viewpoint Entropy for Semantic Segmentation

1 code implementation • CVPR 2020 • Yawar Siddiqui, Julien Valentin, Matthias Nießner

To incorporate this uncertainty measure, we introduce a new viewpoint entropy formulation, which is the basis of our active learning strategy.
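The abstract's viewpoint entropy measures disagreement among predictions for the same surface region seen from multiple viewpoints. A minimal NumPy sketch of one plausible reading, in which per-view softmax class distributions are averaged and the entropy of the average is taken (the function name and exact aggregation are assumptions, not the paper's verbatim formulation):

```python
import numpy as np

def viewpoint_entropy(probs: np.ndarray) -> float:
    """Entropy of the view-averaged class distribution.

    probs: array of shape (num_views, num_classes); each row is a softmax
    prediction for the same region seen from a different viewpoint.
    (Illustrative sketch; ViewAL's exact formulation may differ.)
    """
    mean_dist = probs.mean(axis=0)            # average class distribution over views
    mean_dist = np.clip(mean_dist, 1e-12, 1)  # guard against log(0)
    return float(-(mean_dist * np.log(mean_dist)).sum())

# Regions where viewpoints disagree score higher, so an active-learning
# loop would prioritize labeling them.
agree = np.array([[0.9, 0.1], [0.9, 0.1]])
disagree = np.array([[0.9, 0.1], [0.1, 0.9]])
```

Under this sketch, `viewpoint_entropy(disagree)` exceeds `viewpoint_entropy(agree)`, which is the property an active-learning selection strategy would exploit.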

Active Learning, Semantic Segmentation, +1

Clustering with Deep Learning: Taxonomy and New Methods

2 code implementations • 23 Jan 2018 • Elie Aljalbout, Vladimir Golkov, Yawar Siddiqui, Maximilian Strobel, Daniel Cremers

In this paper, we propose a systematic taxonomy of clustering methods that utilize deep neural networks.

Clustering, Deep Learning
