Search Results for author: Gyeongjin Kang

Found 3 papers, 0 papers with code

Generative Densification: Learning to Densify Gaussians for High-Fidelity Generalizable 3D Reconstruction

no code implementations • 9 Dec 2024 • Seungtae Nam, Xiangyu Sun, Gyeongjin Kang, Younggeun Lee, Seungjun Oh, Eunbyung Park

Generalized feed-forward Gaussian models have achieved significant progress in sparse-view 3D reconstruction by leveraging prior knowledge from large multi-view datasets.

3D Reconstruction

SelfSplat: Pose-Free and 3D Prior-Free Generalizable 3D Gaussian Splatting

no code implementations • 26 Nov 2024 • Gyeongjin Kang, Jisang Yoo, Jihyeon Park, Seungtae Nam, Hyeonsoo Im, Sangheon Shin, Sangpil Kim, Eunbyung Park

Our model addresses the challenges of pose-free, 3D prior-free reconstruction by integrating explicit 3D representations with self-supervised depth and pose estimation, yielding reciprocal improvements in both pose accuracy and 3D reconstruction quality (see the illustrative sketch after this entry).

3D Reconstruction • Pose Estimation
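The SelfSplat entry above couples explicit 3D representations with self-supervised depth and pose estimation. As a rough, non-authoritative illustration of the self-supervised ingredient only, the PyTorch sketch below implements a standard photometric reprojection loss: predicted depth lifts target pixels to 3D, a predicted relative pose reprojects them into a source view, and the warped source image is compared with the target. The function names, tensor shapes, intrinsics, and the stand-in depth/pose tensors are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch (not the SelfSplat code): a photometric reprojection loss of the
# kind commonly used for self-supervised depth/pose learning.
import torch
import torch.nn.functional as F

def backproject(depth, K_inv):
    """Lift per-pixel depth to 3D camera coordinates. depth: (B,1,H,W)."""
    B, _, H, W = depth.shape
    ys, xs = torch.meshgrid(
        torch.arange(H, dtype=depth.dtype, device=depth.device),
        torch.arange(W, dtype=depth.dtype, device=depth.device),
        indexing="ij",
    )
    pix = torch.stack([xs, ys, torch.ones_like(xs)], dim=0).reshape(1, 3, -1)  # (1,3,HW)
    cam = K_inv @ pix                                                          # (B,3,HW)
    return cam * depth.reshape(B, 1, -1)                                       # (B,3,HW)

def reproject(points, K, T):
    """Transform 3D points by relative pose T (B,4,4) and project with intrinsics K (B,3,3)."""
    B, _, N = points.shape
    homog = torch.cat([points, torch.ones(B, 1, N, device=points.device)], dim=1)
    cam = (T @ homog)[:, :3]                                                   # (B,3,N)
    pix = K @ cam
    return pix[:, :2] / pix[:, 2:3].clamp(min=1e-6)                            # (B,2,N)

def photometric_reprojection_loss(target, source, depth, T, K, K_inv):
    """Warp `source` into the target view using depth and pose, then compare (L1)."""
    B, _, H, W = target.shape
    pix = reproject(backproject(depth, K_inv), K, T).reshape(B, 2, H, W)
    # Normalize pixel coordinates to [-1, 1] for grid_sample.
    grid_x = 2.0 * pix[:, 0] / (W - 1) - 1.0
    grid_y = 2.0 * pix[:, 1] / (H - 1) - 1.0
    grid = torch.stack([grid_x, grid_y], dim=-1)                               # (B,H,W,2)
    warped = F.grid_sample(source, grid, align_corners=True, padding_mode="border")
    return (warped - target).abs().mean()

if __name__ == "__main__":
    B, H, W = 2, 64, 64
    target = torch.rand(B, 3, H, W)
    source = torch.rand(B, 3, H, W)
    depth = torch.rand(B, 1, H, W) + 0.5          # stand-in for a depth network's output
    T = torch.eye(4).expand(B, 4, 4)              # stand-in for a pose network's output
    K_single = torch.tensor([[50., 0., 32.], [0., 50., 32.], [0., 0., 1.]])
    K = K_single.expand(B, 3, 3)
    K_inv = torch.inverse(K_single).expand(B, 3, 3)
    print(photometric_reprojection_loss(target, source, depth, T, K, K_inv).item())
```

In a pose-free pipeline of this kind, the depth and pose tensors would come from learned networks, and an analogous rendered-versus-observed comparison can be applied to images rasterized from the predicted Gaussians; the details of how SelfSplat combines these signals are in the paper itself.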

CodecNeRF: Toward Fast Encoding and Decoding, Compact, and High-quality Novel-view Synthesis

no code implementations • 7 Apr 2024 • Gyeongjin Kang, Younggeun Lee, Seungjun Oh, Eunbyung Park

Neural Radiance Fields (NeRF) have achieved huge success in effectively capturing and representing 3D objects and scenes.

Decoder • NeRF • +1
