no code implementations • 5 Oct 2024 • Linfeng Zhang, Kaisheng Ma
Significant advancements in image generation have been made with diffusion models.
1 code implementation • 30 May 2024 • Kailu Wu, Fangfu Liu, Zhihan Cai, Runjie Yan, HanYang Wang, Yating Hu, Yueqi Duan, Kaisheng Ma
In this work, we introduce Unique3D, a novel image-to-3D framework for efficiently generating high-quality 3D meshes from single-view images, featuring state-of-the-art generation fidelity and strong generalizability.
Ranked #1 on Single-View 3D Reconstruction on GSO
no code implementations • 16 May 2024 • Runjie Yan, Kailu Wu, Kaisheng Ma
In this paper, we discover that the Denoising Diffusion Implicit Models (DDIM) generation process (i.e., the PF-ODE) can be succinctly expressed using an analogue of the SDS loss.
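For context, the score distillation sampling (SDS) gradient that this analogue relates to is commonly written (following DreamFusion) as

$$\nabla_\theta \mathcal{L}_{\mathrm{SDS}}(\theta) = \mathbb{E}_{t,\epsilon}\!\left[ w(t)\,\big(\epsilon_\phi(x_t;\, y,\, t) - \epsilon\big)\,\frac{\partial x}{\partial \theta} \right],$$

where $x = g(\theta)$ is the rendered image, $x_t$ its noised version at timestep $t$, and $\epsilon_\phi$ the pretrained diffusion model's noise prediction; the exact analogue derived in the paper may differ.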
1 code implementation • CVPR 2024 • HongWei Yan, Liyuan Wang, Kaisheng Ma, Yi Zhong
However, a notable gap from CL to OCL stems from the additional overfitting-underfitting dilemma associated with the use of rehearsal buffers: the inadequate learning of new training samples (underfitting) and the repeated learning of a few old training samples (overfitting).
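For readers unfamiliar with rehearsal buffers, a minimal reservoir-sampling buffer of the kind commonly used in online continual learning is sketched below; the class name, capacity, and sampling policy are illustrative assumptions, not the paper's implementation.

```python
import random

class RehearsalBuffer:
    """Reservoir-sampling memory: keeps a uniform subsample of the data stream."""

    def __init__(self, capacity=1000):
        self.capacity = capacity
        self.data = []
        self.seen = 0

    def add(self, example):
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append(example)
        else:
            idx = random.randrange(self.seen)
            if idx < self.capacity:
                self.data[idx] = example  # a few old samples persist and get replayed repeatedly

    def sample(self, k):
        # Replayed minibatch, mixed with new data during training.
        return random.sample(self.data, min(k, len(self.data)))
```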
3 code implementations • 27 Feb 2024 • Zekun Qi, Runpei Dong, Shaochen Zhang, Haoran Geng, Chunrui Han, Zheng Ge, Li Yi, Kaisheng Ma
This paper presents ShapeLLM, the first 3D Multimodal Large Language Model (LLM) designed for embodied interaction, exploring a universal 3D object understanding with 3D point clouds and languages.
Ranked #1 on 3D Question Answering (3D-QA) on 3D MM-Vet
1 code implementation • 20 Sep 2023 • Runpei Dong, Chunrui Han, Yuang Peng, Zekun Qi, Zheng Ge, Jinrong Yang, Liang Zhao, Jianjian Sun, HongYu Zhou, Haoran Wei, Xiangwen Kong, Xiangyu Zhang, Kaisheng Ma, Li Yi
This paper presents DreamLLM, a learning framework that first achieves versatile Multimodal Large Language Models (MLLMs) empowered with frequently overlooked synergy between multimodal comprehension and creation.
Ranked #5 on Visual Question Answering on MMBench
2 code implementations • NeurIPS 2023 • Zekun Qi, Muzhou Yu, Runpei Dong, Kaisheng Ma
VPP leverages structured voxel representation in the proposed Voxel Semantic Generator and the sparsity of unstructured point representation in the Point Upsampler, enabling efficient generation of multi-category objects.
1 code implementation • 31 May 2023 • Guofan Fan, Zekun Qi, Wenkai Shi, Kaisheng Ma
Geometry and color information provided by the point clouds are both crucial for 3D scene understanding.
Ranked #1 on Unsupervised 3D Semantic Segmentation on ScanNetV2
no code implementations • 22 May 2023 • Muzhou Yu, Linfeng Zhang, Kaisheng Ma
In this paper, we revisit the usage of data augmentation in model compression and give a comprehensive study on the relation between model sizes and their optimal data augmentation policy.
no code implementations • 28 Apr 2023 • Muzhou Yu, Sia Huat Tan, Kailu Wu, Runpei Dong, Linfeng Zhang, Kaisheng Ma
Knowledge distillation is an effective model compression method, but it has some limitations: (1) feature-based distillation methods focus only on distilling the feature map and lack a mechanism for transferring the relations between data examples; (2) relational distillation methods are either limited to handcrafted functions for relation extraction, such as the L2 norm, or weak at modeling inter- and intra-class relations.
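As an illustration of such a handcrafted relational term, a minimal RKD-style distance loss is sketched below; the normalization and the smooth-L1 criterion are assumptions, not this paper's method.

```python
import torch
import torch.nn.functional as F

def rkd_distance_loss(student_emb, teacher_emb):
    """student_emb, teacher_emb: (B, D) embeddings of the same batch."""
    d_s = torch.cdist(student_emb, student_emb)   # pairwise L2 distances (student)
    d_t = torch.cdist(teacher_emb, teacher_emb)   # pairwise L2 distances (teacher)
    # Normalize by the mean distance so the two spaces are scale-comparable.
    d_s = d_s / (d_s.mean() + 1e-8)
    d_t = d_t / (d_t.mean() + 1e-8)
    return F.smooth_l1_loss(d_s, d_t)
```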
no code implementations • 8 Mar 2023 • Junbo Zhang, Runpei Dong, Kaisheng Ma
Training a 3D scene understanding model requires complicated human annotations, which are laborious to collect and result in a model only encoding closed-set object semantics.
1 code implementation • 7 Feb 2023 • Yu Duan, Zhongfan Jia, Qian Li, Yi Zhong, Kaisheng Ma
Comparing different plasticity rules under the same framework shows that Hebbian plasticity is well-suited for several memory and associative learning tasks; however, it is outperformed by gradient-based plasticity on few-shot regression tasks which require the model to infer the underlying mapping.
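A minimal sketch of a Hebbian update for a single linear layer is given below for illustration; the learning rate, weight decay, and layer shape are assumptions rather than the paper's configuration.

```python
import numpy as np

def hebbian_update(W, pre, post, eta=0.01, decay=0.001):
    """W: (out, in) weights; pre: (in,) presynaptic activity; post: (out,) postsynaptic activity."""
    # Classic Hebb rule: strengthen weights between co-active units, with decay to keep them bounded.
    return W + eta * np.outer(post, pre) - decay * W

W = np.zeros((4, 8))
pre = np.random.randn(8)
post = W @ pre + np.random.randn(4)   # postsynaptic activity with noise
W = hebbian_update(W, pre, post)
```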
5 code implementations • 5 Feb 2023 • Zekun Qi, Runpei Dong, Guofan Fan, Zheng Ge, Xiangyu Zhang, Kaisheng Ma, Li Yi
This motivates us to learn 3D representations by sharing the merits of both paradigms, which is non-trivial due to the pattern difference between the two paradigms.
Ranked #1 on Zero-Shot Transfer 3D Point Cloud Classification on ModelNet10 (using extra training data)
no code implementations • ICCV 2023 • Linfeng Zhang, Kaisheng Ma
Significant advancements have been achieved with deep neural networks in diverse visual tasks, which has substantially increased their deployment in edge-device software.
4 code implementations • 16 Dec 2022 • Runpei Dong, Zekun Qi, Linfeng Zhang, Junbo Zhang, Jianjian Sun, Zheng Ge, Li Yi, Kaisheng Ma
The success of deep learning heavily relies on large-scale data with comprehensive labels, which is more expensive and time-consuming to fetch in 3D compared to 2D images or natural languages.
Ranked #7 on Few-Shot 3D Point Cloud Classification on ModelNet40 10-way (10-shot) (using extra training data)
1 code implementation • 25 Nov 2022 • Junbo Zhang, Guofan Fan, Guanghan Wang, Zhengyuan Su, Kaisheng Ma, Li Yi
To guide 3D feature learning toward important geometric attributes and scene context, we explore the help of textual scene descriptions.
no code implementations • 14 Nov 2022 • Linfeng Zhang, Yukang Shi, Hung-Shuo Tai, Zhipeng Zhang, Yuan He, Ke Wang, Kaisheng Ma
Detecting 3D objects from multi-view images is a fundamental problem in 3D computer vision.
no code implementations • 8 Oct 2022 • Hongyang Chen, Kaisheng Ma
Deep learning (DL)-based methods for low-level vision tasks have many advantages over traditional cameras in terms of hardware prospects, error accumulation, and imaging effects.
1 code implementation • 12 Jul 2022 • Linfeng Zhang, Xin Chen, Junbo Zhang, Runpei Dong, Kaisheng Ma
The success of deep learning is usually accompanied by the growth in neural network depth.
1 code implementation • CVPR 2022 • Junbo Zhang, Kaisheng Ma
A data augmentation module is utilized in contrastive learning to transform the given data example into two views, which is considered essential and irreplaceable.
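A minimal sketch of such a two-view augmentation module (SimCLR-style) is shown below; the specific transforms and parameters are common defaults assumed for illustration, not necessarily the policy studied in the paper.

```python
import torchvision.transforms as T

augment = T.Compose([
    T.RandomResizedCrop(224),
    T.RandomHorizontalFlip(),
    T.ColorJitter(0.4, 0.4, 0.4, 0.1),
    T.RandomGrayscale(p=0.2),
    T.ToTensor(),
])

def two_views(img):
    # Each call draws independent random transforms, producing two correlated views of the same image.
    return augment(img), augment(img)
```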
no code implementations • 25 May 2022 • Linfeng Zhang, Xin Chen, Runpei Dong, Kaisheng Ma
In this paper, we propose Region-aware Knowledge Distillation (ReKo) to compress image-to-image translation models.
1 code implementation • CVPR 2023 • Linfeng Zhang, Runpei Dong, Hung-Shuo Tai, Kaisheng Ma
The remarkable breakthroughs in point cloud representation learning have boosted their usage in real-world applications such as self-driving cars and virtual reality.
1 code implementation • 21 Apr 2022 • Dihan Zheng, Xiaowen Zhang, Kaisheng Ma, Chenglong Bao
Current approaches aim at generating synthesized training data from unpaired samples by exploring the relationship between the corrupted and clean data.
1 code implementation • 14 Apr 2022 • Dihan Zheng, Chenglong Bao, Zuoqiang Shi, Haibin Ling, Kaisheng Ma
The Chan-Vese (CV) model is a classic region-based method in image segmentation.
no code implementations • CVPR 2022 • Linfeng Zhang, Xin Chen, Xiaobing Tu, Pengfei Wan, Ning Xu, Kaisheng Ma
Instead of directly distilling the generated images of teachers, wavelet knowledge distillation first decomposes the images into different frequency bands with discrete wavelet transformation and then only distills the high frequency bands.
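A minimal sketch of the band decomposition this describes is given below, using a Haar DWT and an L1 distance on the detail coefficients; the wavelet choice, band selection, and loss are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np
import pywt

def high_freq_bands(img, wavelet="haar"):
    """img: 2D array. Returns the (LH, HL, HH) detail (high-frequency) sub-bands."""
    _, (lh, hl, hh) = pywt.dwt2(img, wavelet)
    return lh, hl, hh

def wavelet_distill_loss(student_img, teacher_img):
    # Distill only the high-frequency bands; the low-frequency approximation band is ignored.
    return sum(np.abs(s - t).mean()
               for s, t in zip(high_freq_bands(student_img),
                               high_freq_bands(teacher_img)))
```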
1 code implementation • 30 Dec 2021 • Runpei Dong, Zhanhong Tan, Mengdi Wu, Linfeng Zhang, Kaisheng Ma
Besides, an efficient deployment flow for the mobile CPU is developed, achieving up to 7.46$\times$ inference acceleration on an octa-core ARM CPU.
1 code implementation • 3 Nov 2021 • Sia Huat Tan, Runpei Dong, Kaisheng Ma
Inspired by this observation, we propose an end-to-end trainable Multi-Glimpse Network (MGNet) which aims to tackle the challenges of high computation and the lack of robustness based on a recurrent downsampled attention mechanism.
1 code implementation • NeurIPS 2021 • Liyuan Wang, Mingtian Zhang, Zhongfan Jia, Qian Li, Chenglong Bao, Kaisheng Ma, Jun Zhu, Yi Zhong
Without access to the old training samples, knowledge transfer from the old tasks to each new task is difficult to determine, as it might be either positive or negative.
no code implementations • 29 Sep 2021 • Dihan Zheng, Xiaowen Zhang, Kaisheng Ma, Chenglong Bao
Collecting the paired training data is a difficult task in practice, but the unpaired samples broadly exist.
no code implementations • 29 Sep 2021 • Linfeng Zhang, Kaisheng Ma
To tackle this challenge, in this paper, we propose Region-aware Knowledge Distillation which first localizes the crucial regions in the images with attention mechanism.
1 code implementation • ICLR 2021 • Linfeng Zhang, Kaisheng Ma
In this paper, we suggest that the failure of knowledge distillation on object detection is mainly caused by two reasons: (1) the imbalance between pixels of foreground and background and (2) lack of distillation on the relation between different pixels.
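As a hedged illustration of how the foreground/background imbalance can be addressed, the sketch below weights a feature-distillation loss by a teacher-derived spatial attention map so that the (mostly foreground) pixels the teacher attends to dominate the loss; the temperature and weighting scheme are assumptions, not the paper's exact method.

```python
import torch
import torch.nn.functional as F

def attention_weighted_kd(f_s, f_t, temperature=0.5):
    """f_s, f_t: (B, C, H, W) student / teacher feature maps of matching shape."""
    # Spatial attention from the teacher: channel-averaged absolute activation.
    att = f_t.abs().mean(dim=1, keepdim=True)                 # (B, 1, H, W)
    att = F.softmax(att.flatten(2) / temperature, dim=-1)     # normalize over pixels
    att = att.view_as(f_t[:, :1])                             # back to (B, 1, H, W)
    # Attention-weighted squared error between student and teacher features.
    return ((f_s - f_t).pow(2) * att).sum(dim=(2, 3)).mean()
```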
1 code implementation • ICLR 2021 • Dihan Zheng, Sia Huat Tan, Xiaowen Zhang, Zuoqiang Shi, Kaisheng Ma, Chenglong Bao
In the real-world case, the noise distribution is so complex that the simplified additive white Gaussian noise (AWGN) assumption rarely holds, which significantly deteriorates the Gaussian denoisers' performance.
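For reference, the AWGN model assumed by Gaussian denoisers is

$$y = x + n, \qquad n \sim \mathcal{N}(0, \sigma^2 I),$$

where $x$ is the clean image and $y$ the noisy observation; real-world sensor noise is typically signal-dependent and spatially correlated, violating this assumption.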
1 code implementation • NeurIPS 2020 • Linfeng Zhang, Yukang Shi, Zuoqiang Shi, Kaisheng Ma, Chenglong Bao
Moreover, an orthogonal loss is applied to the feature resizing layer in TOFD to improve the performance of knowledge distillation.
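A minimal sketch of one common way to impose such an orthogonality constraint on a resizing layer's weight is shown below, penalizing $\|WW^\top - I\|_F^2$; TOFD's exact formulation may differ.

```python
import torch

def orthogonal_loss(weight):
    """weight: (out_dim, in_dim) weight of the feature-resizing (linear / 1x1 conv) layer."""
    gram = weight @ weight.t()
    eye = torch.eye(gram.size(0), device=weight.device)
    return (gram - eye).pow(2).sum()   # Frobenius-norm penalty toward orthogonal rows
```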
no code implementations • 11 Feb 2020 • Zhanhong Tan, Jiebo Song, Xiaolong Ma, Sia-Huat Tan, Hongyang Chen, Yuanqing Miao, Yi-Fu Wu, Shaokai Ye, Yanzhi Wang, Dehui Li, Kaisheng Ma
Weight pruning is a powerful technique to realize model compression.
no code implementations • ECCV 2020 • Xiaolong Ma, Wei Niu, Tianyun Zhang, Sijia Liu, Sheng Lin, Hongjia Li, Xiang Chen, Jian Tang, Kaisheng Ma, Bin Ren, Yanzhi Wang
Weight pruning has been widely acknowledged as a straightforward and effective method to eliminate redundancy in Deep Neural Networks (DNN), thereby achieving acceleration on various platforms.
1 code implementation • CVPR 2020 • Shaokai Ye, Kailu Wu, Mu Zhou, Yunfei Yang, Sia Huat Tan, Kaidi Xu, Jiebo Song, Chenglong Bao, Kaisheng Ma
Existing domain adaptation methods aim at learning features that can be generalized among domains.
Ranked #4 on Domain Adaptation on USPS-to-MNIST
no code implementations • 27 Nov 2019 • Zhongfan Jia, Chenglong Bao, Kaisheng Ma
To the best of our knowledge, there is no study on the interpretation of modern CNNs from the perspective of the frequency proportion of filters.
no code implementations • 6 Sep 2019 • Xiaolong Ma, Fu-Ming Guo, Wei Niu, Xue Lin, Jian Tang, Kaisheng Ma, Bin Ren, Yanzhi Wang
Model compression techniques on Deep Neural Network (DNN) have been widely acknowledged as an effective way to achieve acceleration on a variety of platforms, and DNN weight pruning is a straightforward and effective method.
no code implementations • 3 Jul 2019 • Xiaolong Ma, Sheng Lin, Shaokai Ye, Zhezhi He, Linfeng Zhang, Geng Yuan, Sia Huat Tan, Zhengang Li, Deliang Fan, Xuehai Qian, Xue Lin, Kaisheng Ma, Yanzhi Wang
Based on the proposed comparison framework, with the same accuracy and quantization, the results show that non-structured pruning is not competitive in terms of either storage or computation efficiency.
no code implementations • 28 May 2019 • Shaokai Ye, Sia Huat Tan, Kaidi Xu, Yanzhi Wang, Chenglong Bao, Kaisheng Ma
In contrast, current state-of-the-art deep learning approaches heavily depend on the variety of training samples and the capacity of the network.
1 code implementation • NeurIPS 2019 • Linfeng Zhang, Zhanhong Tan, Jiebo Song, Jingwei Chen, Chenglong Bao, Kaisheng Ma
Remarkable achievements have been attained by deep neural networks in various applications.
1 code implementation • ICCV 2019 • Linfeng Zhang, Jiebo Song, Anni Gao, Jingwei Chen, Chenglong Bao, Kaisheng Ma
Different from traditional knowledge distillation, a knowledge transfer methodology among networks that forces student neural networks to approximate the softmax-layer outputs of pre-trained teacher neural networks, the proposed self-distillation framework distills knowledge within the network itself.
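A minimal sketch of a self-distillation objective for one shallow classifier is given below: a label loss, a soft-label loss toward the deepest classifier, and a feature hint loss; the loss weights and temperature are illustrative assumptions, not the paper's reported settings.

```python
import torch.nn.functional as F

def self_distill_loss(shallow_logits, deep_logits, shallow_feat, deep_feat,
                      labels, alpha=0.3, beta=0.03, T=3.0):
    ce = F.cross_entropy(shallow_logits, labels)                  # supervision from labels
    kd = F.kl_div(F.log_softmax(shallow_logits / T, dim=1),
                  F.softmax(deep_logits / T, dim=1),
                  reduction="batchmean") * T * T                   # soft targets from the deepest exit
    hint = F.mse_loss(shallow_feat, deep_feat)                     # feature-level hint
    return (1 - alpha) * ce + alpha * kd + beta * hint
```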
no code implementations • 2 May 2019 • Sheng Lin, Xiaolong Ma, Shaokai Ye, Geng Yuan, Kaisheng Ma, Yanzhi Wang
Weight quantization is one of the most important techniques for Deep Neural Network (DNN) model compression.
1 code implementation • 29 Mar 2019 • Shaokai Ye, Kaidi Xu, Sijia Liu, Jan-Henrik Lambrechts, huan zhang, Aojun Zhou, Kaisheng Ma, Yanzhi Wang, Xue Lin
Furthermore, this work studies two hypotheses about weight pruning in the conventional setting and finds that weight pruning is essential for reducing the network model size in the adversarial setting; training a small model from scratch, even with initialization inherited from the large model, cannot achieve both adversarial robustness and high standard accuracy.
1 code implementation • 29 Jul 2018 • Tianyun Zhang, Shaokai Ye, Kaiqi Zhang, Xiaolong Ma, Ning Liu, Linfeng Zhang, Jian Tang, Kaisheng Ma, Xue Lin, Makan Fardad, Yanzhi Wang
Without loss of accuracy on the AlexNet model, we achieve 2.58X and 3.65X average measured speedup on two GPUs, clearly outperforming the prior work.