1 code implementation • 29 Mar 2025 • Yunsong Wang, Tianxin Huang, Hanlin Chen, Gim Hee Lee
After the feed-forward reconstruction of 3DGS primitives, we investigate a depth-regularized per-scene fine-tuning process.
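A minimal sketch of what a depth-regularized per-scene fine-tuning objective could look like, assuming a rendered depth map and an external depth prior (e.g. from a monocular estimator); the function, weighting, and scale alignment are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def finetune_loss(rendered_rgb, gt_rgb, rendered_depth, depth_prior, lambda_depth=0.1):
    """Photometric term plus a depth regularizer on the rendered 3DGS depth.

    rendered_rgb, gt_rgb:        (H, W, 3) tensors
    rendered_depth, depth_prior: (H, W) tensors; the prior is an assumption here
                                 (monocular or sensor depth).
    """
    photometric = F.l1_loss(rendered_rgb, gt_rgb)
    # Align the prior's scale before comparing, since monocular depth is only
    # defined up to scale.
    s = (rendered_depth * depth_prior).mean() / (depth_prior * depth_prior).mean().clamp(min=1e-8)
    depth_reg = F.l1_loss(rendered_depth, s * depth_prior)
    return photometric + lambda_depth * depth_reg
```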
no code implementations • 1 Dec 2024 • Hanlin Chen, Fangyin Wei, Gim Hee Lee
Extensive experimental results demonstrate that ChatSplat supports multi-level interactions -- object, view, and scene -- within 3D space, enhancing both understanding and engagement.
1 code implementation • 10 Jun 2024 • Jinnan Chen, Chen Li, Jianfeng Zhang, Lingting Zhu, Buzhen Huang, Hanlin Chen, Gim Hee Lee
To mitigate the potential generation of unrealistic human poses and shapes, we incorporate human priors from the SMPL-X model as a dual branch, propagating image features from the SMPL-X volume to the image Gaussians using sparse convolution and attention mechanisms.
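As a rough sketch of the attention part of such feature propagation, the hypothetical module below lets features attached to image Gaussians attend to features sampled from an SMPL-X volume; all dimensions and names are assumptions, and the sparse-convolution branch is omitted.

```python
import torch
import torch.nn as nn

class PriorToGaussianAttention(nn.Module):
    """Cross-attention: image-Gaussian features (queries) attend to SMPL-X
    volume features (keys/values). Illustrative only."""
    def __init__(self, dim=128, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, gaussian_feats, smplx_feats):
        # gaussian_feats: (B, N_gauss, dim), smplx_feats: (B, N_smplx, dim)
        fused, _ = self.attn(gaussian_feats, smplx_feats, smplx_feats)
        return gaussian_feats + fused  # residual fusion with the human prior

layer = PriorToGaussianAttention()
g = torch.randn(1, 2048, 128)   # per-Gaussian image features (toy shapes)
s = torch.randn(1, 1024, 128)   # features sampled from the SMPL-X volume
out = layer(g, s)               # (1, 2048, 128)
```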
no code implementations • 9 Jun 2024 • Hanlin Chen, Fangyin Wei, Chen Li, Tianxin Huang, Yunsong Wang, Gim Hee Lee
Although 3D Gaussian Splatting has been widely studied because of its realistic and efficient novel-view synthesis, it is still challenging to extract a high-quality surface from the point-based representation.
1 code implementation • 28 May 2024 • Yunsong Wang, Tianxin Huang, Hanlin Chen, Gim Hee Lee
However, existing generalizable 3D Gaussian Splatting methods are largely confined to narrow-range interpolation between stereo images due to their heavy backbones, and thus lack the ability to accurately localize 3D Gaussians and to support free-view synthesis across a wide view range.
1 code implementation • CVPR 2024 • Yunsong Wang, Hanlin Chen, Gim Hee Lee
Recent advancements in vision-language foundation models have significantly enhanced open-vocabulary 3D scene understanding.
Open-Vocabulary Semantic Segmentation
no code implementations • 1 Dec 2023 • Hanlin Chen, Chen Li, Gim Hee Lee
In this work, we propose a neural implicit surface reconstruction pipeline with guidance from 3D Gaussian Splatting to recover highly detailed surfaces.
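One common way such guidance can be expressed, shown here only as a hedged sketch, is to treat the 3DGS centers as noisy surface samples and pull the implicit surface's zero level set toward them while keeping the field a valid SDF; `sdf_net` and the weighting are assumptions.

```python
import torch

def gaussian_guidance_loss(sdf_net, gaussian_centers, lambda_eik=0.1):
    """Illustrative guidance term for an implicit surface.

    sdf_net: callable mapping (N, 3) points to (N,) signed distances.
    gaussian_centers: (N, 3) tensor of 3DGS means from a pre-trained model.
    """
    pts = gaussian_centers.clone().requires_grad_(True)
    sdf = sdf_net(pts)
    surface_term = sdf.abs().mean()  # zero level set should pass near the centers
    grad = torch.autograd.grad(sdf.sum(), pts, create_graph=True)[0]
    eikonal_term = ((grad.norm(dim=-1) - 1.0) ** 2).mean()  # keep |grad f| close to 1
    return surface_term + lambda_eik * eikonal_term
```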
no code implementations • 22 Jan 2023 • Hanlin Chen, Renyuan Luo, Yiheng Feng
Navigating connected and automated vehicles (CAVs) in such areas relies heavily on how the vehicle defines drivable areas based on perception information.
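As a toy illustration of deriving a drivable area from perception output (not the paper's method), the sketch below marks bird's-eye-view grid cells near perceived obstacles as non-drivable; grid size, resolution, and inflation margin are all hypothetical.

```python
import numpy as np

def drivable_area_grid(obstacle_points, grid_size=(200, 200), cell_m=0.5, inflation_m=1.0):
    """Boolean BEV grid (True = drivable) built from perceived obstacle positions.

    obstacle_points: (N, 2) array of obstacle (x, y) positions in metres, ego frame.
    """
    h, w = grid_size
    drivable = np.ones((h, w), dtype=bool)
    origin = np.array([h // 2, w // 2])          # ego vehicle at the grid centre
    cells = (obstacle_points / cell_m).astype(int) + origin
    radius = int(np.ceil(inflation_m / cell_m))  # safety margin around obstacles
    for r, c in cells:
        r0, r1 = max(r - radius, 0), min(r + radius + 1, h)
        c0, c1 = max(c - radius, 0), min(c + radius + 1, w)
        if r0 >= r1 or c0 >= c1:                 # obstacle outside the grid
            continue
        drivable[r0:r1, c0:c1] = False
    return drivable
```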
no code implementations • CVPR 2023 • Simin Chen, Hanlin Chen, Mirazul Haque, Cong Liu, Wei Yang
Recent advancements in deploying deep neural networks (DNNs) on resource-constrained devices have generated interest in input-adaptive dynamic neural networks (DyNNs).
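For readers unfamiliar with the term, a minimal example of an input-adaptive DyNN is an early-exit network, where confident inputs skip the deeper, more expensive blocks; the toy module below is an assumption-laden illustration, not an architecture from the paper.

```python
import torch
import torch.nn as nn

class EarlyExitNet(nn.Module):
    """Toy early-exit network: per-input compute depends on prediction confidence."""
    def __init__(self, dim=64, num_classes=10, threshold=0.9):
        super().__init__()
        self.block1 = nn.Sequential(nn.Linear(dim, dim), nn.ReLU())
        self.exit1 = nn.Linear(dim, num_classes)
        self.block2 = nn.Sequential(nn.Linear(dim, dim), nn.ReLU())
        self.exit2 = nn.Linear(dim, num_classes)
        self.threshold = threshold

    def forward(self, x):
        h = self.block1(x)
        logits1 = self.exit1(h)
        conf = logits1.softmax(-1).max(-1).values
        if conf.min() > self.threshold:       # every sample is confident: exit early
            return logits1
        return self.exit2(self.block2(h))     # otherwise pay for the deeper path
```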
1 code implementation • 21 Dec 2022 • Mengqi Guo, Chen Li, Hanlin Chen, Gim Hee Lee
In view of this, we explore the task of incremental learning for neural implicit representations (NIRs) in this work.
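A generic continual-learning recipe, sketched below under the assumption of a simple distillation term from a frozen snapshot of the previous model, gives a flavour of what one incremental update could look like; it is not the method proposed in the paper.

```python
import torch
import torch.nn.functional as F

def incremental_step(model, optimizer, new_inputs, new_targets, old_model, replay_inputs, lam=1.0):
    """One illustrative update: fit newly observed data while distilling the frozen
    previous model on earlier regions to limit forgetting.

    model(inputs) -> predictions; old_model is a frozen copy of the pre-update model
    (e.g. copy.deepcopy(model).eval()). All names here are hypothetical.
    """
    optimizer.zero_grad()
    loss_new = F.mse_loss(model(new_inputs), new_targets)
    with torch.no_grad():
        old_pred = old_model(replay_inputs)
    loss_old = F.mse_loss(model(replay_inputs), old_pred)
    (loss_new + lam * loss_old).backward()
    optimizer.step()
```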
no code implementations • 29 Sep 2021 • Hanlin Chen, Ming Lin, Xiuyu Sun, Hao Li
Based on these new discoveries, we propose i) a novel hybrid zero-shot proxy that outperforms existing ones by a large margin and is transferable across popular search spaces; and ii) a new index that better measures the true performance of ZS-NAS proxies in constrained NAS.
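A hybrid proxy can, in the simplest reading, be a combination of several normalized zero-shot scores; the sketch below rank-normalizes each proxy and averages them, with equal weights as an explicit assumption rather than the paper's formula.

```python
import numpy as np

def hybrid_proxy_score(proxy_scores):
    """Combine several zero-shot proxy scores into one ranking signal.

    proxy_scores: dict mapping proxy name -> array of scores, one per candidate
    architecture. Rank normalization makes differently scaled proxies comparable.
    """
    n = len(next(iter(proxy_scores.values())))
    denom = max(n - 1, 1)
    combined = np.zeros(n)
    for scores in proxy_scores.values():
        ranks = np.argsort(np.argsort(scores))   # 0 = worst, n-1 = best
        combined += ranks / denom                # normalize to [0, 1]
    return combined / len(proxy_scores)

# e.g. hybrid_proxy_score({"zen_score": zs, "synflow": sf, "grad_norm": gn})
```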
no code implementations • 8 Sep 2020 • Hanlin Chen, Li'an Zhuo, Baochang Zhang, Xiawu Zheng, Jianzhuang Liu, Rongrong Ji, David Doermann, Guodong Guo
In this paper, binarized neural architecture search (BNAS), with a search space of binarized convolutions, is introduced to produce extremely compressed models that reduce the huge computational cost of edge computing on embedded devices.
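The kind of operator that populates such a search space is sketched below: a generic XNOR-Net-style binarized convolution with sign weights, a per-filter scale, and a straight-through estimator; it illustrates the primitive, not BNAS itself.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BinarizedConv2d(nn.Conv2d):
    """Conv whose weights are binarized to {-1, +1} (times a per-filter scale) in
    the forward pass; gradients reach the latent real-valued weights through a
    straight-through estimator."""
    def forward(self, x):
        w = self.weight
        alpha = w.abs().mean(dim=(1, 2, 3), keepdim=True)   # per-filter scale
        w_bin = torch.sign(w) * alpha
        w_ste = w + (w_bin - w).detach()                     # straight-through trick
        return F.conv2d(x, w_ste, self.bias, self.stride,
                        self.padding, self.dilation, self.groups)

# conv = BinarizedConv2d(16, 32, kernel_size=3, padding=1)
```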
no code implementations • ECCV 2020 • Hanlin Chen, Baochang Zhang, Song Xue, Xuan Gong, Hong Liu, Rongrong Ji, David Doermann
Deep convolutional neural networks (DCNNs) have dominated machine learning as the best performers, yet they can be challenged by adversarial attacks.
no code implementations • CVPR 2020 • Li'an Zhuo, Baochang Zhang, Linlin Yang, Hanlin Chen, Qixiang Ye, David Doermann, Guodong Guo, Rongrong Ji
Conventional learning methods simplify the bilinear model by treating its two intrinsically coupled factors as independent, which degrades the optimization procedure.
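To make the coupling point concrete, the toy factorization below minimizes ||A - UV||^2 with plain simultaneous gradient steps that treat each factor as if the other were constant; it only illustrates the coupled-factor structure, not the method proposed in the paper.

```python
import numpy as np

# Toy bilinear model: approximate A with U @ V. The gradient w.r.t. U depends on
# the current V and vice versa, yet the updates below treat the factors as if
# they were independent.
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 8))
U = rng.standard_normal((8, 4)) * 0.1
V = rng.standard_normal((4, 8)) * 0.1
lr = 0.05
for _ in range(200):
    R = U @ V - A            # residual
    grad_U = R @ V.T         # depends on V
    grad_V = U.T @ R         # depends on U
    U -= lr * grad_U         # simultaneous "independent" updates
    V -= lr * grad_V
print(np.linalg.norm(U @ V - A))
```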
no code implementations • 30 Apr 2020 • Li'an Zhuo, Baochang Zhang, Hanlin Chen, Linlin Yang, Chen Chen, Yanjun Zhu, David Doermann
To this end, a Child-Parent (CP) model is introduced into differentiable NAS to search for the binarized architecture (Child) under the supervision of a full-precision model (Parent).
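The supervision idea can be pictured with a generic distillation-style criterion, sketched below: the binarized Child fits the labels while matching the full-precision Parent's soft predictions; temperature and weighting are assumptions, not the paper's exact loss.

```python
import torch.nn.functional as F

def child_parent_loss(child_logits, parent_logits, targets, T=4.0, alpha=0.5):
    """Illustrative Child-Parent supervision for a binarized student network."""
    hard = F.cross_entropy(child_logits, targets)             # fit ground-truth labels
    soft = F.kl_div(F.log_softmax(child_logits / T, dim=-1),  # match the Parent's softened outputs
                    F.softmax(parent_logits / T, dim=-1),
                    reduction="batchmean") * (T * T)
    return alpha * hard + (1 - alpha) * soft
```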
no code implementations • 25 Nov 2019 • Hanlin Chen, Li'an Zhuo, Baochang Zhang, Xiawu Zheng, Jianzhuang Liu, David Doermann, Rongrong Ji
A variant, binarized neural architecture search (BNAS), with a search space of binarized convolutions, can produce extremely compressed models.