no code implementations • 10 Apr 2024 • Fan Lu, Kwan-Yee Lin, Yan Xu, Hongsheng Li, Guang Chen, Changjun Jiang
To handle the unbounded nature of urban scenes, we represent the 3D scene with a Scalable Hash Grid structure that incrementally adapts to the scene's growing scale.
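The incremental behavior can be pictured with a toy spatial hash that allocates features lazily as the reconstruction covers new territory. The class name, cell quantization, and zero initialization below are illustrative assumptions, not the paper's actual Scalable Hash Grid design.

```python
# Minimal sketch of an incrementally growing spatial hash grid: features
# for new regions are allocated only when first queried, so storage scales
# with the observed extent of the scene (an assumption, for illustration).

class ScalableHashGrid:
    def __init__(self, cell_size=1.0, feature_dim=4):
        self.cell_size = cell_size
        self.feature_dim = feature_dim
        self.table = {}  # sparse storage: integer cell index -> feature vector

    def _key(self, xyz):
        # Quantize a 3D position to an integer grid cell.
        return tuple(int(c // self.cell_size) for c in xyz)

    def query(self, xyz):
        # Allocate a fresh (zero-initialized) feature the first time a
        # cell is touched, so the grid grows with the scene.
        key = self._key(xyz)
        if key not in self.table:
            self.table[key] = [0.0] * self.feature_dim
        return self.table[key]

grid = ScalableHashGrid(cell_size=2.0)
f = grid.query((10.3, -4.2, 0.5))  # first touch allocates one cell
```

In a learned setting the per-cell features would be optimized by the radiance-field loss; here they merely demonstrate the lazy-allocation bookkeeping.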
1 code implementation • 3 Apr 2024 • Zehan Zheng, Fan Lu, Weiyi Xue, Guang Chen, Changjun Jiang
In light of this, we propose LiDAR4D, a differentiable LiDAR-only framework for novel space-time LiDAR view synthesis.
1 code implementation • 21 Mar 2024 • Dingchen Yang, Bowen Cao, Guang Chen, Changjun Jiang
Multi-modal Large Language Models (MLLMs) demonstrate remarkable success across various vision-language tasks.
2 code implementations • 21 Mar 2024 • Sanqing Qu, Tianpei Zou, Florian Röhrbein, Cewu Lu, Guang Chen, DaCheng Tao, Changjun Jiang
GLC++ enhances the novel category clustering accuracy of GLC by 4.3% in open-set scenarios on Office-Home.
1 code implementation • 7 Mar 2024 • Boyang Peng, Sanqing Qu, Yong Wu, Tianpei Zou, Lianghua He, Alois Knoll, Guang Chen, Changjun Jiang
In this paper, we target a practical setting where only a well-trained source model is available and investigate how we can realize IP protection.
2 code implementations • 6 Mar 2024 • Sanqing Qu, Tianpei Zou, Lianghua He, Florian Röhrbein, Alois Knoll, Guang Chen, Changjun Jiang
Besides, LEAD is also appealing in that it is complementary to most existing methods.
Ranked #1 on Universal Domain Adaptation on VisDA2017
no code implementations • 18 Jan 2024 • Cheng Wang, Chuwen Wang, Yu Zhao, Wang Zhang, Shirong Zeng, Ronghui Ning, Changjun Jiang
As a matter of fact, they serve as the best tool for handling problems in complex systems where closed-form expressions are unavailable and the target distribution in the representation space is too complex to be fully represented by data-driven learning models, such as deep learning (DL) models.
no code implementations • 11 Dec 2023 • Jing Hou, Guang Chen, Ruiqi Zhang, Zhijun Li, Shangding Gu, Changjun Jiang
While existing parallel RL frameworks encompass a variety of RL algorithms and parallelization techniques, their heavyweight communication frameworks prevent a single desktop from reaching the hardware's limits in final throughput and training performance.
1 code implementation • 10 Nov 2023 • Yang Lei, Jiangtong Li, Ming Jiang, Junjie Hu, Dawei Cheng, Zhijun Ding, Changjun Jiang
Large language models (LLMs) have demonstrated great potential in the financial domain.
1 code implementation • 19 Sep 2023 • Jiangtong Li, Yuxuan Bian, Guoxuan Wang, Yang Lei, Dawei Cheng, Zhijun Ding, Changjun Jiang
The CFAPP is centered on large language models (LLMs) and augmented with additional modules to ensure multifaceted functionality in real-world applications.
1 code implementation • ICCV 2023 • Fan Lu, Yan Xu, Guang Chen, Hongsheng Li, Kwan-Yee Lin, Changjun Jiang
To construct urban-level radiance fields efficiently, we design the Deformable Neural Mesh Primitive (DNMP) and propose to parameterize the entire scene with such primitives.
1 code implementation • CVPR 2023 • Zehan Zheng, Danni Wu, Ruisi Lu, Fan Lu, Guang Chen, Changjun Jiang
In light of these issues, we present NeuralPCI: an end-to-end 4D spatio-temporal Neural field for 3D Point Cloud Interpolation, which implicitly integrates multi-frame information to handle nonlinear large motions for both indoor and outdoor scenarios.
Ranked #1 on 3D Point Cloud Interpolation on NL-Drive
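The core idea of a coordinate-based 4D field can be pictured as a network that maps a space-time input (x, y, z, t) to a displaced position. The toy MLP below uses random, untrained weights and a made-up layout purely to show the interface; it is not NeuralPCI's architecture.

```python
import numpy as np

# Toy 4D neural field f(x, y, z, t) -> displaced position, sketching how a
# coordinate MLP could interpolate a point cloud at an arbitrary time t.
# Weights are random here; in practice the field would be fit to the
# multi-frame input clouds (layer sizes are an assumption).

rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.1, size=(4, 32))
b1 = np.zeros(32)
W2 = rng.normal(scale=0.1, size=(32, 3))
b2 = np.zeros(3)

def field(points, t):
    # points: (N, 3) positions; t: scalar interpolation time in [0, 1].
    x = np.concatenate([points, np.full((len(points), 1), t)], axis=1)
    h = np.maximum(x @ W1 + b1, 0.0)   # ReLU hidden layer
    return points + h @ W2 + b2        # predicted positions at time t

cloud = rng.normal(size=(128, 3))
mid = field(cloud, t=0.5)              # interpolated cloud at the midpoint
```

Because the field is continuous in t, the same network can be queried at any intermediate time, which is what makes nonlinear large motions tractable compared to pairwise linear interpolation.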
no code implementations • ICCV 2023 • Tianhang Wang, Guang Chen, Kai Chen, Zhengfa Liu, Bo Zhang, Alois Knoll, Changjun Jiang
To verify our algorithm, we conducted experiments on the V2X-Sim and OPV2V datasets.
1 code implementation • ICCV 2023 • Haotian Liu, Guang Chen, Sanqing Qu, Yanping Zhang, Zhijun Li, Alois Knoll, Changjun Jiang
In this paper, we argue that temporal continuity is a vital element of event-based optical flow and propose a novel Temporal Motion Aggregation (TMA) approach to unlock its potential.
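As a loose analogy for exploiting temporal continuity, one can split the event stream into short temporal windows, estimate a displacement per window, and accumulate the displacements over the full interval. The real TMA module aligns and fuses learned motion features, so the helper below is only an illustrative assumption.

```python
# Toy temporal aggregation: per-window displacement estimates are summed so
# the full-interval flow is consistent with the intermediate motion. This
# accumulation stands in for TMA's learned feature aggregation (assumption).

def aggregate_flow(window_flows):
    # window_flows: list of (dx, dy) estimates, one per temporal slice.
    fx = sum(f[0] for f in window_flows)
    fy = sum(f[1] for f in window_flows)
    return (fx, fy)

flows = [(0.5, 0.25), (0.25, 0.0), (0.25, -0.25)]
total_flow = aggregate_flow(flows)  # (1.0, 0.0)
```

The point of the analogy: a method that looks only at the two endpoint representations discards exactly the intermediate structure that events provide for free.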
no code implementations • CVPR 2023 • Sanqing Qu, Yingwei Pan, Guang Chen, Ting Yao, Changjun Jiang, Tao Mei
We validate the superiority of our MAD in a variety of single-DG scenarios with different modalities, including recognition on 1D texts, 2D images, 3D point clouds, and semantic segmentation on 2D images.
3 code implementations • CVPR 2023 • Sanqing Qu, Tianpei Zou, Florian Röhrbein, Cewu Lu, Guang Chen, DaCheng Tao, Changjun Jiang
We demonstrate the superiority of our GLC on multiple benchmarks with different category shift scenarios, including partial-set, open-set, and open-partial-set DA.
Ranked #2 on Universal Domain Adaptation on VisDA2017
1 code implementation • 23 Nov 2022 • ZiHao Wang, Junli Wang, Changjun Jiang
Prior work performs the standard likelihood training for answer generation on the positive instances (involving correct answers).
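A common way to additionally exploit negative instances (those involving incorrect answers) is to pair the standard likelihood loss with an unlikelihood-style penalty. The losses and the 0.5 weight below sketch that generic pattern and are not necessarily the paper's exact objective.

```python
import math

# Generic sketch: negative log-likelihood on a correct answer's tokens,
# plus an unlikelihood penalty that pushes probability mass away from an
# incorrect answer's tokens. The 0.5 weighting is an arbitrary assumption.

def likelihood_loss(token_probs):
    # Average NLL over the tokens of a correct (positive) answer.
    return -sum(math.log(p) for p in token_probs) / len(token_probs)

def unlikelihood_loss(token_probs):
    # Penalize probability assigned to an incorrect (negative) answer.
    return -sum(math.log(1.0 - p) for p in token_probs) / len(token_probs)

pos = [0.9, 0.8, 0.95]   # model fairly confident on the correct answer
neg = [0.6, 0.7]         # model wrongly confident on an incorrect answer

total = likelihood_loss(pos) + 0.5 * unlikelihood_loss(neg)
```

Training only on positives leaves the model free to remain confident on plausible-but-wrong answers; the negative term makes that confidence costly.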
no code implementations • 15 Jan 2021 • Ru Yang, Zhijun Ding, Changjun Jiang, Mengchu Zhou
A case study of a practical mobile payment system shows the effectiveness of the proposed method.