Search Results for author: Sherwin Bahmani

Found 7 papers, 3 papers with code

4D-fy: Text-to-4D Generation Using Hybrid Score Distillation Sampling

no code implementations • 29 Nov 2023 • Sherwin Bahmani, Ivan Skorokhodov, Victor Rong, Gordon Wetzstein, Leonidas Guibas, Peter Wonka, Sergey Tulyakov, Jeong Joon Park, Andrea Tagliasacchi, David B. Lindell

Recent breakthroughs in text-to-4D generation rely on pre-trained text-to-image and text-to-video models to generate dynamic 3D scenes.
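The title refers to score distillation sampling (SDS), in which a pre-trained diffusion model supervises a differentiable 3D/4D representation. Below is a minimal, hedged sketch of a single SDS-style update; `eps_model`, `render`, and their signatures are illustrative placeholders, not the paper's actual API, and the timestep weighting is omitted for brevity.

```python
# Minimal sketch of a score-distillation-style (SDS) update.
# Assumptions (not from the paper): `eps_model(noisy, t, text_emb)` is a
# pretrained text-conditioned diffusion model predicting noise, and
# `render(params, camera)` differentiably renders the scene parameters.
import torch

def sds_step(params, render, eps_model, text_emb, camera, alphas_cumprod, optimizer):
    """One SDS update: render, add diffusion noise, and push the scene
    parameters toward renders the diffusion model finds likely for the prompt."""
    img = render(params, camera)                          # (1, 3, H, W), differentiable w.r.t. params
    t = torch.randint(20, 980, (1,), device=img.device)   # random diffusion timestep
    a_t = alphas_cumprod[t].view(1, 1, 1, 1)
    noise = torch.randn_like(img)
    noisy = a_t.sqrt() * img + (1 - a_t).sqrt() * noise   # forward diffusion of the render
    with torch.no_grad():
        eps_pred = eps_model(noisy, t, text_emb)           # diffusion model's noise prediction
    grad = eps_pred - noise                                # SDS gradient direction for the image
    loss = (grad.detach() * img).sum()                     # surrogate loss whose image-gradient equals `grad`
    optimizer.zero_grad()
    loss.backward()                                        # gradients flow only through the renderer
    optimizer.step()
```

The "hybrid" aspect of the paper combines supervision from both text-to-image and text-to-video models; the sketch shows only the generic single-model update.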

CC3D: Layout-Conditioned Generation of Compositional 3D Scenes

no code implementations • ICCV 2023 • Sherwin Bahmani, Jeong Joon Park, Despoina Paschalidou, Xingguang Yan, Gordon Wetzstein, Leonidas Guibas, Andrea Tagliasacchi

In this work, we introduce CC3D, a conditional generative model that synthesizes complex 3D scenes conditioned on 2D semantic scene layouts, trained using single-view images.
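As an interface-level sketch only (not the paper's architecture), the pipeline the abstract describes takes a latent code plus a 2D semantic layout and produces an image of the generated scene; all module names and shapes below are assumptions for illustration.

```python
# Interface-level sketch of a layout-conditioned generator; the real model
# produces a 3D scene representation rendered from a camera, which is
# abstracted away here into a plain 2D decoder.
import torch
import torch.nn as nn

class LayoutConditionedGenerator(nn.Module):
    def __init__(self, z_dim=128, num_classes=16, feat=64):
        super().__init__()
        self.layout_enc = nn.Conv2d(num_classes, feat, 3, padding=1)  # encode one-hot semantic layout
        self.z_proj = nn.Linear(z_dim, feat)                          # inject the latent code
        self.decoder = nn.Sequential(                                 # stand-in for 3D decoder + renderer
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, z, layout):
        h = self.layout_enc(layout) + self.z_proj(z)[:, :, None, None]
        return self.decoder(h)

g = LayoutConditionedGenerator()
img = g(torch.randn(2, 128), torch.randn(2, 16, 64, 64))  # (2, 3, 64, 64)
```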

Inductive Bias

3D-Aware Video Generation

1 code implementation • 29 Jun 2022 • Sherwin Bahmani, Jeong Joon Park, Despoina Paschalidou, Hao Tang, Gordon Wetzstein, Leonidas Guibas, Luc van Gool, Radu Timofte

Generative models have emerged as an essential building block for many image synthesis and editing tasks.

Image Generation • Video Generation

Adaptive Generalization for Semantic Segmentation

no code implementations • 29 Sep 2021 • Sherwin Bahmani, Oliver Hahn, Eduard Sebastian Zamfir, Nikita Araslanov, Stefan Roth

In this work, we empirically study an adaptive inference strategy for semantic segmentation that adjusts the model to the test sample before producing the final prediction.
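One possible instantiation of such per-sample adaptation, not necessarily the paper's method, is a few steps of entropy minimization on the test image before the final forward pass; the function and hyperparameters below are illustrative assumptions.

```python
# Sketch of test-time adaptation for segmentation: adapt a copy of the model
# to one test image by minimizing prediction entropy, then predict.
import copy
import torch
import torch.nn.functional as F

def adaptive_predict(model, image, steps=1, lr=1e-4):
    """`model(image)` is assumed to return per-pixel class logits (N, C, H, W)."""
    adapted = copy.deepcopy(model)            # leave the original weights untouched
    adapted.train()
    opt = torch.optim.SGD(adapted.parameters(), lr=lr)
    for _ in range(steps):
        logits = adapted(image)
        probs = F.softmax(logits, dim=1)
        entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1).mean()
        opt.zero_grad()
        entropy.backward()                    # adjust the model to this sample
        opt.step()
    adapted.eval()
    with torch.no_grad():
        return adapted(image).argmax(dim=1)   # final per-pixel class map
```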

Segmentation • Semantic Segmentation
