Search Results for author: Pengsheng Guo

Found 7 papers, 1 paper with code

StableDreamer: Taming Noisy Score Distillation Sampling for Text-to-3D

no code implementations 2 Dec 2023 Pengsheng Guo, Hans Hao, Adam Caccavale, Zhongzheng Ren, Edward Zhang, Qi Shan, Aditya Sankar, Alexander G. Schwing, Alex Colburn, Fangchang Ma

Our analysis identifies the core of these challenges as the interaction among noise levels in the 2D diffusion process, the architecture of the diffusion network, and the 3D model representation.

Text-to-3D, Transparent Objects

CVRecon: Rethinking 3D Geometric Feature Learning For Neural Reconstruction

no code implementations ICCV 2023 Ziyue Feng, Liang Yang, Pengsheng Guo, Bing Li

Recent advances in neural reconstruction using posed image sequences have made remarkable progress.

GAUDI: A Neural Architect for Immersive 3D Scene Generation

1 code implementation 27 Jul 2022 Miguel Angel Bautista, Pengsheng Guo, Samira Abnar, Walter Talbott, Alexander Toshev, Zhuoyuan Chen, Laurent Dinh, Shuangfei Zhai, Hanlin Goh, Daniel Ulbricht, Afshin Dehghan, Josh Susskind

We introduce GAUDI, a generative model capable of capturing the distribution of complex and realistic 3D scenes that can be rendered immersively from a moving camera.

Image Generation, Scene Generation

Fast and Explicit Neural View Synthesis

no code implementations 12 Jul 2021 Pengsheng Guo, Miguel Angel Bautista, Alex Colburn, Liang Yang, Daniel Ulbricht, Joshua M. Susskind, Qi Shan

We study the problem of novel view synthesis from sparse source observations of a scene comprised of 3D objects.

Novel View Synthesis

MetricOpt: Learning to Optimize Black-Box Evaluation Metrics

no code implementations CVPR 2021 Chen Huang, Shuangfei Zhai, Pengsheng Guo, Josh Susskind

This leads to consistent improvements since the value function provides effective metric supervision during finetuning, and helps to correct the potential bias of loss-only supervision.

Image Classification, Image Retrieval, +3

Learning to Branch for Multi-Task Learning

no code implementations ICML 2020 Pengsheng Guo, Chen-Yu Lee, Daniel Ulbricht

Training multiple tasks jointly in one deep network reduces inference latency and improves performance over single-task counterparts by sharing certain layers of the network.

Multi-Task Learning

Adaptive Variance for Changing Sparse-Reward Environments

no code implementations 15 Mar 2019 Xingyu Lin, Pengsheng Guo, Carlos Florensa, David Held

Robots that are trained to perform a task in a fixed environment often fail when facing unexpected changes to the environment due to a lack of exploration.
