Search Results for author: Yuan-Chen Guo

Found 21 papers, 4 papers with code

Sketch2Model: View-Aware 3D Modeling from Single Free-Hand Sketches

1 code implementation CVPR 2021 Song-Hai Zhang, Yuan-Chen Guo, Qing-Wen Gu

We investigate the problem of generating 3D meshes from single free-hand sketches, aiming at fast 3D modeling for novice users.

Learning Implicit Glyph Shape Representation

no code implementations 16 Jun 2021 Ying-Tian Liu, Yuan-Chen Guo, Yi-Xiao Li, Chen Wang, Song-Hai Zhang

In this paper, we present a novel implicit glyph shape representation, which models glyphs as shape primitives enclosed by quadratic curves, and naturally enables generating glyph images at arbitrary high resolutions.

Font Style Transfer · Vector Graphics
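The quadratic-curve idea above can be sketched in a few lines: treat a shape primitive as the region where every bounding quadratic curve evaluates non-positive, and the glyph interior as the union of primitives. This is an illustrative sketch only, not the paper's implementation; the function names and the intersection/union convention are assumptions.

```python
def inside_primitive(point, curves):
    """A primitive is the region where every bounding quadratic curve
    value is <= 0.  Each curve is a coefficient tuple (a, b, c, d, e, f)
    of  a*x^2 + b*x*y + c*y^2 + d*x + e*y + f."""
    x, y = point
    return all(a*x*x + b*x*y + c*y*y + d*x + e*y + f <= 0.0
               for a, b, c, d, e, f in curves)

def glyph_occupancy(point, primitives):
    """Glyph interior as the union of primitives: inside if any contains the point."""
    return any(inside_primitive(point, prim) for prim in primitives)

# toy "glyph": one primitive bounded by the unit circle x^2 + y^2 - 1 = 0
primitives = [[(1.0, 0.0, 1.0, 0.0, 0.0, -1.0)]]
print(glyph_occupancy((0.0, 0.0), primitives))  # True  (inside the circle)
print(glyph_occupancy((2.0, 0.0), primitives))  # False (outside)
```

Because the occupancy test is evaluated analytically at continuous coordinates, the same representation can be rasterized at any resolution, which is the property the abstract highlights.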

Deep Image Synthesis from Intuitive User Input: A Review and Perspectives

no code implementations 9 Jul 2021 Yuan Xue, Yuan-Chen Guo, Han Zhang, Tao Xu, Song-Hai Zhang, Xiaolei Huang

In many applications of computer graphics, art and design, it is desirable for a user to provide intuitive non-image input, such as text, sketch, stroke, graph or layout, and have a computer system automatically generate photo-realistic images that adhere to the input content.

Image Generation · Image Retrieval · +1

NeRFReN: Neural Radiance Fields with Reflections

no code implementations CVPR 2022 Yuan-Chen Guo, Di Kang, Linchao Bao, Yu He, Song-Hai Zhang

Specifically, we propose to split a scene into transmitted and reflected components, and model the two components with separate neural radiance fields.

Depth Estimation · Novel View Synthesis
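The two-component split can be sketched as two independently volume-rendered radiance fields blended into one pixel color. This is a minimal illustration under assumed conventions (a scalar per-ray reflection fraction `beta` and a simple additive blend `I = I_t + beta * I_r`); the paper's actual parameterization may differ.

```python
import numpy as np

def volume_render(sigmas, colors, deltas):
    """Standard NeRF alpha compositing along a single ray."""
    alphas = 1.0 - np.exp(-sigmas * deltas)
    transmittance = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = alphas * transmittance
    return (weights[:, None] * colors).sum(axis=0)

def composite(trans_field, refl_field, beta, deltas):
    """Render transmitted and reflected fields separately, then blend:
    I = I_t + beta * I_r, with beta a reflection fraction in [0, 1]."""
    i_t = volume_render(*trans_field, deltas)
    i_r = volume_render(*refl_field, deltas)
    return i_t + beta * i_r

# toy ray with 8 samples per field
deltas = np.full(8, 0.25)
trans = (np.full(8, 5.0), np.tile([0.2, 0.4, 0.6], (8, 1)))
refl = (np.full(8, 5.0), np.ones((8, 3)))
print(composite(trans, refl, 0.3, deltas))
```

With `beta = 0` the reflected field drops out entirely, which is what makes the decomposition useful for reflection removal and editing.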

NeRF-SR: High-Quality Neural Radiance Fields using Supersampling

1 code implementation 3 Dec 2021 Chen Wang, Xian Wu, Yuan-Chen Guo, Song-Hai Zhang, Yu-Wing Tai, Shi-Min Hu

We present NeRF-SR, a solution for high-resolution (HR) novel view synthesis with mostly low-resolution (LR) inputs.

Novel View Synthesis
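Supersampling here means casting several sub-pixel rays through each low-resolution pixel and matching their average against the LR color, so the radiance field is supervised at sub-pixel precision. A minimal sketch (the `render_ray` interface and the k×k stratified offsets are assumptions, not the paper's API):

```python
import numpy as np

def supersample_pixel(render_ray, pixel_xy, k=2):
    """Average k*k sub-pixel ray renderings to match one LR pixel.
    render_ray(x, y) -> RGB for a ray through continuous image coords (x, y)."""
    x0, y0 = pixel_xy
    offsets = (np.arange(k) + 0.5) / k  # sub-pixel sample centers in [0, 1)
    samples = [render_ray(x0 + dx, y0 + dy) for dy in offsets for dx in offsets]
    return np.mean(samples, axis=0)

# toy renderer whose "color" is just the ray coordinates, so the
# average lands at the sub-pixel centroid (3.5, 5.5, 0.0)
rgb = supersample_pixel(lambda x, y: np.array([x, y, 0.0]), (3.0, 5.0), k=2)
print(rgb)
```

At test time the same sub-pixel rays can be rendered individually instead of averaged, yielding an HR image from a model trained mostly on LR inputs.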

Gradient-based Point Cloud Denoising with Uniformity

no code implementations 21 Jul 2022 Tian-Xing Xu, Yuan-Chen Guo, Yong-Liang Yang, Song-Hai Zhang

Point clouds captured by depth sensors are often contaminated by noise, obstructing further analysis and applications.

Denoising · Surface Reconstruction

Joint Implicit Neural Representation for High-fidelity and Compact Vector Fonts

no code implementations ICCV 2023 Chia-Hao Chen, Ying-Tian Liu, Zhifei Zhang, Yuan-Chen Guo, Song-Hai Zhang

Existing vector font generation approaches either struggle to preserve high-frequency corner details of the glyph or produce vector shapes that have redundant segments, which hinders their applications in practical scenarios.

Font Generation

MBPTrack: Improving 3D Point Cloud Tracking with Memory Networks and Box Priors

no code implementations ICCV 2023 Tian-Xing Xu, Yuan-Chen Guo, Yu-Kun Lai, Song-Hai Zhang

To address these issues, we present MBPTrack, which adopts a Memory mechanism to utilize past information and formulates localization in a coarse-to-fine scheme using Box Priors given in the first frame.

3D Single Object Tracking · Autonomous Driving · +1

VMesh: Hybrid Volume-Mesh Representation for Efficient View Synthesis

no code implementations 28 Mar 2023 Yuan-Chen Guo, Yan-Pei Cao, Chen Wang, Yu He, Ying Shan, Xiaohu Qie, Song-Hai Zhang

With the emergence of neural radiance fields (NeRFs), view synthesis quality has reached an unprecedented level.

PanoGRF: Generalizable Spherical Radiance Fields for Wide-baseline Panoramas

no code implementations NeurIPS 2023 Zheng Chen, Yan-Pei Cao, Yuan-Chen Guo, Chen Wang, Ying Shan, Song-Hai Zhang

Unlike generalizable radiance fields trained on perspective images, PanoGRF avoids the information loss from panorama-to-perspective conversion and directly aggregates geometry and appearance features of 3D sample points from each panoramic view based on spherical projection.

Depth Estimation
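Aggregating features "based on spherical projection" amounts to mapping each 3D sample point into equirectangular panorama coordinates of a source view and sampling features there. A sketch under an assumed axis convention (y up, z forward; the function name and coordinate layout are illustrative):

```python
import numpy as np

def spherical_project(point, cam_pos, width, height):
    """Map a 3D sample point to equirectangular panorama pixel coordinates:
    longitude -> u (horizontal), latitude -> v (vertical)."""
    d = point - cam_pos
    d = d / np.linalg.norm(d)
    lon = np.arctan2(d[0], d[2])             # in (-pi, pi]
    lat = np.arcsin(np.clip(d[1], -1.0, 1.0))  # in [-pi/2, pi/2]
    u = (lon / (2.0 * np.pi) + 0.5) * width
    v = (lat / np.pi + 0.5) * height
    return u, v

# a point straight ahead of the camera maps to the panorama center
u, v = spherical_project(np.array([0.0, 0.0, 5.0]), np.zeros(3), 512, 256)
print(u, v)
```

Because the panorama already covers the full sphere, this lookup needs no perspective cropping, which is the information loss the abstract says PanoGRF avoids.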

Wonder3D: Single Image to 3D using Cross-Domain Diffusion

no code implementations 23 Oct 2023 Xiaoxiao Long, Yuan-Chen Guo, Cheng Lin, Yuan Liu, Zhiyang Dou, Lingjie Liu, Yuexin Ma, Song-Hai Zhang, Marc Habermann, Christian Theobalt, Wenping Wang

In this work, we introduce Wonder3D, a novel method for efficiently generating high-fidelity textured meshes from single-view images. Recent methods based on Score Distillation Sampling (SDS) have shown the potential to recover 3D geometry from 2D diffusion priors, but they typically suffer from time-consuming per-shape optimization and inconsistent geometry.

Image to 3D

Text-to-3D with Classifier Score Distillation

no code implementations 30 Oct 2023 Xin Yu, Yuan-Chen Guo, Yangguang Li, Ding Liang, Song-Hai Zhang, Xiaojuan Qi

In this paper, we re-evaluate the role of classifier-free guidance in score distillation and discover a surprising finding: the guidance alone is enough for effective text-to-3D generation tasks.

Text to 3D · Texture Synthesis
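The finding can be read as dropping the noise-residual term of a standard SDS update and keeping only the classifier-free-guidance difference between conditional and unconditional noise predictions. A schematic comparison (array shapes, function names, and the guidance weight `w` are illustrative, not the paper's code):

```python
import numpy as np

def csd_direction(eps_cond, eps_uncond):
    """Classifier Score Distillation: update with only the
    classifier-free-guidance difference (no noise-residual term)."""
    return eps_cond - eps_uncond

def sds_direction(eps_cond, eps_uncond, noise, w=7.5):
    """Standard SDS with CFG: guided noise prediction minus the added noise."""
    return (eps_uncond + w * (eps_cond - eps_uncond)) - noise
```

For large `w` the guidance difference dominates the SDS update anyway, which is one way to see why the guidance term alone can drive text-to-3D generation.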

PI3D: Efficient Text-to-3D Generation with Pseudo-Image Diffusion

no code implementations 14 Dec 2023 Ying-Tian Liu, Guan Luo, Heyi Sun, Wei Yin, Yuan-Chen Guo, Song-Hai Zhang

In this paper, we introduce PI3D, a novel and efficient framework that utilizes the pre-trained text-to-image diffusion models to generate high-quality 3D shapes in minutes.

Text to 3D
