Search Results for author: Kwan-Yee Lin

Found 22 papers, 14 papers with code

Urban Architect: Steerable 3D Urban Scene Generation with Layout Prior

no code implementations 10 Apr 2024 Fan Lu, Kwan-Yee Lin, Yan Xu, Hongsheng Li, Guang Chen, Changjun Jiang

(2) To handle the unbounded nature of urban scenes, we represent the 3D scene with a Scalable Hash Grid structure that incrementally adapts to the growing scale of urban scenes.

3D Generation Model Optimization +2

CosmicMan: A Text-to-Image Foundation Model for Humans

no code implementations 1 Apr 2024 Shikai Li, Jianglin Fu, Kaiyuan Liu, Wentao Wang, Kwan-Yee Lin, Wayne Wu

We present CosmicMan, a text-to-image foundation model specialized for generating high-fidelity human images.

Parameterization-driven Neural Implicit Surfaces Editing

no code implementations 9 Oct 2023 Baixin Xu, Jiangbei Hu, Fei Hou, Kwan-Yee Lin, Wayne Wu, Chen Qian, Ying He

In this paper, we present a novel neural algorithm to parameterize neural implicit surfaces to simple parametric domains, such as spheres, cubes, or polycubes, thereby facilitating visualization and various editing tasks.

Neural Rendering

Urban Radiance Field Representation with Deformable Neural Mesh Primitives

1 code implementation ICCV 2023 Fan Lu, Yan Xu, Guang Chen, Hongsheng Li, Kwan-Yee Lin, Changjun Jiang

To construct urban-level radiance fields efficiently, we design Deformable Neural Mesh Primitives (DNMP) and propose to parameterize the entire scene with such primitives.

Image Generation Novel View Synthesis

RenderMe-360: A Large Digital Asset Library and Benchmarks Towards High-fidelity Head Avatars

1 code implementation NeurIPS 2023 Dongwei Pan, Long Zhuo, Jingtan Piao, Huiwen Luo, Wei Cheng, Yuxin Wang, Siming Fan, Shengqi Liu, Lei Yang, Bo Dai, Ziwei Liu, Chen Change Loy, Chen Qian, Wayne Wu, Dahua Lin, Kwan-Yee Lin

It is a large-scale digital library for head avatars with three key attributes: 1) High Fidelity: all subjects are captured by 60 synchronized, high-resolution 2K cameras in 360 degrees.

2k Image Matting +2

MonoHuman: Animatable Human Neural Field from Monocular Video

1 code implementation CVPR 2023 Zhengming Yu, Wei Cheng, Xian Liu, Wayne Wu, Kwan-Yee Lin

Recent works propose to graft a deformation network into the NeRF to further model the dynamics of the human neural field for animating vivid human motions.

Deformable Model-Driven Neural Rendering for High-Fidelity 3D Reconstruction of Human Heads Under Low-View Settings

2 code implementations ICCV 2023 Baixin Xu, Jiarui Zhang, Kwan-Yee Lin, Chen Qian, Ying He

To address this, we propose geometry decomposition and adopt a two-stage, coarse-to-fine training strategy, allowing for progressively capturing high-frequency geometric details.

3D Reconstruction Neural Rendering +1

Generalizable Neural Performer: Learning Robust Radiance Fields for Human Novel View Synthesis

1 code implementation 25 Apr 2022 Wei Cheng, Su Xu, Jingtan Piao, Chen Qian, Wayne Wu, Kwan-Yee Lin, Hongsheng Li

Specifically, we compress the light fields for novel view human rendering as conditional implicit neural radiance fields from both geometry and appearance aspects.

Novel View Synthesis

StyleGAN-Human: A Data-Centric Odyssey of Human Generation

4 code implementations 25 Apr 2022 Jianglin Fu, Shikai Li, Yuming Jiang, Kwan-Yee Lin, Chen Qian, Chen Change Loy, Wayne Wu, Ziwei Liu

In addition, a model zoo and human editing applications are demonstrated to facilitate future research in the community.

Image Generation

Learning a Structured Latent Space for Unsupervised Point Cloud Completion

no code implementations CVPR 2022 Yingjie Cai, Kwan-Yee Lin, Chao Zhang, Qiang Wang, Xiaogang Wang, Hongsheng Li

Specifically, we map a series of related partial point clouds into multiple complete shape and occlusion code pairs and fuse the codes to obtain their representations in the unified latent space.

Point Cloud Completion

Semantic Scene Completion via Integrating Instances and Scene in-the-Loop

1 code implementation CVPR 2021 Yingjie Cai, Xuesong Chen, Chao Zhang, Kwan-Yee Lin, Xiaogang Wang, Hongsheng Li

The key insight is that we decouple the instances from a coarsely completed semantic scene instead of a raw input image to guide the reconstruction of instances and the overall scene.

3D Semantic Scene Completion Scene Understanding

SelfVoxeLO: Self-supervised LiDAR Odometry with Voxel-based Deep Neural Networks

no code implementations 19 Oct 2020 Yan Xu, Zhaoyang Huang, Kwan-Yee Lin, Xinge Zhu, Jianping Shi, Hujun Bao, Guofeng Zhang, Hongsheng Li

To adapt our network to self-supervised learning, we design several novel loss functions that exploit the inherent properties of LiDAR point clouds.

Self-Supervised Learning

TRB: A Novel Triplet Representation for Understanding 2D Human Body

2 code implementations ICCV 2019 Haodong Duan, Kwan-Yee Lin, Sheng Jin, Wentao Liu, Chen Qian, Wanli Ouyang

In this paper, we propose the Triplet Representation for Body (TRB), a compact 2D human body representation with skeleton keypoints capturing human pose information and contour keypoints containing human shape information.

Conditional Image Generation Open-Ended Question Answering

Make a Face: Towards Arbitrary High Fidelity Face Manipulation

no code implementations ICCV 2019 Shengju Qian, Kwan-Yee Lin, Wayne Wu, Yangxiaokang Liu, Quan Wang, Fumin Shen, Chen Qian, Ran He

Recent studies have shown remarkable success in the face manipulation task with the advance of GAN and VAE paradigms, but the outputs are sometimes limited to low resolution and lack diversity.

Clustering Disentanglement +1

Weakly-Supervised Discovery of Geometry-Aware Representation for 3D Human Pose Estimation

no code implementations CVPR 2019 Xipeng Chen, Kwan-Yee Lin, Wentao Liu, Chen Qian, Xiaogang Wang, Liang Lin

Recent studies have shown remarkable advances in 3D human pose estimation from monocular images, with the help of large-scale indoor 3D datasets and sophisticated network architectures.

3D Human Pose Estimation
