no code implementations • CVPR 2024 • Jun-Kun Chen, Samuel Rota Bulò, Norman Müller, Lorenzo Porzi, Peter Kontschieder, Yu-Xiong Wang
This paper proposes ConsistDreamer, a novel framework that lifts 2D diffusion models with 3D awareness and 3D consistency, thus enabling high-fidelity instruction-guided scene editing.
no code implementations • CVPR 2024 • Linzhan Mou, Jun-Kun Chen, Yu-Xiong Wang
This paper proposes Instruct 4D-to-4D, which brings 4D awareness and spatial-temporal consistency to 2D diffusion models to generate high-quality instruction-guided dynamic scene editing results.
no code implementations • CVPR 2023 • Jun-Kun Chen, Jipeng Lyu, Yu-Xiong Wang
Our key insight is to exploit the explicit point cloud representation as the underlying structure to construct NeRFs, inspired by the intuitive interpretation of NeRF rendering as a process that projects or "plots" the associated 3D point cloud to a 2D image plane.
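To make the "plotting" interpretation concrete, here is a minimal sketch of rendering a colored point cloud by pinhole projection with z-buffering. This is an illustrative assumption of the general idea, not the paper's actual NeRF construction; the function name `splat_points` and the intrinsics/extrinsics `K`/`w2c` are hypothetical.

```python
# Minimal sketch: "plot" a colored 3D point cloud onto a 2D image plane
# with a pinhole camera and per-pixel z-buffering. Hypothetical names;
# not the paper's actual renderer.
import numpy as np

def splat_points(points, colors, K, w2c, hw=(256, 256)):
    """Project Nx3 points (with Nx3 colors) to an HxW image."""
    h, w = hw
    image = np.zeros((h, w, 3))
    depth = np.full((h, w), np.inf)
    # World -> camera coordinates (w2c is a 4x4 extrinsic matrix).
    cam = (w2c @ np.c_[points, np.ones(len(points))].T).T[:, :3]
    in_front = cam[:, 2] > 1e-6
    cam, colors = cam[in_front], colors[in_front]
    # Perspective projection with 3x3 intrinsics K.
    uv = (K @ cam.T).T
    uv = (uv[:, :2] / uv[:, 2:3]).astype(int)
    for (u, v), z, c in zip(uv, cam[:, 2], colors):
        if 0 <= u < w and 0 <= v < h and z < depth[v, u]:
            depth[v, u], image[v, u] = z, c  # keep the nearest point per pixel
    return image
```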
no code implementations • ICCV 2023 • Yuanyi Zhong, Haoran Tang, Jun-Kun Chen, Yu-Xiong Wang
Though self-supervised contrastive learning (CL) has shown its potential to achieve state-of-the-art accuracy without any supervision, its behavior remains under-investigated by academia.
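For context, contrastive learning typically optimizes an InfoNCE objective that pulls two augmented views of the same sample together and pushes other samples apart. The sketch below is the standard SimCLR-style loss, assumed here for illustration; it is not the specific setup analyzed in the paper.

```python
# Generic SimCLR-style InfoNCE loss: positives are the two views of the
# same sample (the diagonal of the similarity matrix); all other pairs in
# the batch serve as negatives.
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.1):
    """z1, z2: (B, D) embeddings of two augmented views of the same batch."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature   # (B, B) cosine similarities
    labels = torch.arange(z1.size(0))    # positives lie on the diagonal
    return F.cross_entropy(logits, labels)
```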
1 code implementation • 11 Aug 2022 • Jun-Kun Chen, Yu-Xiong Wang
Being able to learn an effective semantic representation directly on raw point clouds has become a central topic in 3D understanding.
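One common way to learn directly on raw point clouds is a PointNet-style encoder: a shared per-point MLP followed by symmetric max-pooling, which yields a permutation-invariant global feature. The sketch below assumes this standard design for illustration; it is not the architecture proposed in the paper.

```python
# Tiny PointNet-style encoder: per-point MLP + max-pooling over points.
# Illustrative only; hypothetical names and sizes.
import torch
import torch.nn as nn

class TinyPointEncoder(nn.Module):
    def __init__(self, feat_dim=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, feat_dim),
        )

    def forward(self, pts):            # pts: (B, N, 3) raw xyz coordinates
        feats = self.mlp(pts)          # (B, N, feat_dim) per-point features
        return feats.max(dim=1).values # (B, feat_dim) permutation-invariant feature

pts = torch.rand(2, 1024, 3)           # batch of 2 clouds, 1024 points each
emb = TinyPointEncoder()(pts)          # -> (2, 256)
```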