Search Results for author: Kaisheng Ma

Found 45 papers, 26 papers with code

Unique3D: High-Quality and Efficient 3D Mesh Generation from a Single Image

1 code implementation30 May 2024 Kailu Wu, Fangfu Liu, Zhihan Cai, Runjie Yan, HanYang Wang, Yating Hu, Yueqi Duan, Kaisheng Ma

In this work, we introduce Unique3D, a novel image-to-3D framework for efficiently generating high-quality 3D meshes from single-view images, featuring state-of-the-art generation fidelity and strong generalizability.

Image to 3D, Single-View 3D Reconstruction, +1

Flow Score Distillation for Diverse Text-to-3D Generation

no code implementations16 May 2024 Runjie Yan, Kailu Wu, Kaisheng Ma

In this paper, we discover that the Denoising Diffusion Implicit Models (DDIM) generation process (i.e., the PF-ODE) can be succinctly expressed using an analogue of the SDS loss.

3D Generation, Diversity, +1
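
For context, the "analogue of SDS loss" mentioned above refers to score distillation sampling. In its standard DreamFusion form (shown here as background; the paper's exact notation may differ), the SDS gradient with respect to the parameters θ of a differentiable renderer x = g(θ) is:

```latex
\nabla_\theta \mathcal{L}_{\mathrm{SDS}}(\theta)
  = \mathbb{E}_{t,\epsilon}\!\left[ w(t)
    \left( \hat{\epsilon}_\phi(x_t;\, y,\, t) - \epsilon \right)
    \frac{\partial x}{\partial \theta} \right],
\qquad x_t = \alpha_t\, x + \sigma_t\, \epsilon
```

Here ε̂_φ is the pretrained diffusion model's noise prediction for prompt y and w(t) is a timestep weighting; the paper rewrites the deterministic DDIM/PF-ODE sampling process in an analogous form.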

Orchestrate Latent Expertise: Advancing Online Continual Learning with Multi-Level Supervision and Reverse Self-Distillation

1 code implementation CVPR 2024 HongWei Yan, Liyuan Wang, Kaisheng Ma, Yi Zhong

However, a notable gap from CL to OCL stems from the additional overfitting-underfitting dilemma associated with the use of rehearsal buffers: the inadequate learning of new training samples (underfitting) and the repeated learning of a few old training samples (overfitting).

Continual Learning, Knowledge Distillation
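
As background on the rehearsal buffers the snippet refers to, here is a minimal reservoir-sampling replay buffer, a common OCL baseline. This is an illustrative sketch, not the paper's proposed method; the class name and interface are hypothetical.

```python
import random

class ReservoirBuffer:
    """Fixed-size rehearsal buffer with reservoir sampling,
    a common online continual learning baseline (illustrative sketch)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = []
        self.seen = 0

    def add(self, example):
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append(example)
        else:
            # Keep each seen example with probability capacity / seen.
            idx = random.randrange(self.seen)
            if idx < self.capacity:
                self.data[idx] = example

    def sample(self, batch_size):
        # Stored old examples are replayed many times, while each new
        # example passes through add() only once.
        return random.sample(self.data, min(batch_size, len(self.data)))
```

Because `sample` repeatedly redraws from a small stored set while each incoming example is seen only once, the buffer exhibits exactly the overfitting-underfitting tension described above.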

ShapeLLM: Universal 3D Object Understanding for Embodied Interaction

3 code implementations27 Feb 2024 Zekun Qi, Runpei Dong, Shaochen Zhang, Haoran Geng, Chunrui Han, Zheng Ge, Li Yi, Kaisheng Ma

This paper presents ShapeLLM, the first 3D Multimodal Large Language Model (LLM) designed for embodied interaction, exploring a universal 3D object understanding with 3D point clouds and languages.

3D geometry, 3D Object Captioning, +12

DreamLLM: Synergistic Multimodal Comprehension and Creation

1 code implementation20 Sep 2023 Runpei Dong, Chunrui Han, Yuang Peng, Zekun Qi, Zheng Ge, Jinrong Yang, Liang Zhao, Jianjian Sun, HongYu Zhou, Haoran Wei, Xiangwen Kong, Xiangyu Zhang, Kaisheng Ma, Li Yi

This paper presents DreamLLM, a learning framework that first achieves versatile Multimodal Large Language Models (MLLMs) empowered with the frequently overlooked synergy between multimodal comprehension and creation.

multimodal generation, Visual Question Answering, +2

VPP: Efficient Conditional 3D Generation via Voxel-Point Progressive Representation

2 code implementations NeurIPS 2023 Zekun Qi, Muzhou Yu, Runpei Dong, Kaisheng Ma

VPP leverages structured voxel representation in the proposed Voxel Semantic Generator and the sparsity of unstructured point representation in the Point Upsampler, enabling efficient generation of multi-category objects.

3D Generation, 8k

Revisiting Data Augmentation in Model Compression: An Empirical and Comprehensive Study

no code implementations22 May 2023 Muzhou Yu, Linfeng Zhang, Kaisheng Ma

In this paper, we revisit the usage of data augmentation in model compression and give a comprehensive study on the relation between model sizes and their optimal data augmentation policy.

Data Augmentation, Knowledge Distillation, +2

CORSD: Class-Oriented Relational Self Distillation

no code implementations28 Apr 2023 Muzhou Yu, Sia Huat Tan, Kailu Wu, Runpei Dong, Linfeng Zhang, Kaisheng Ma

Knowledge distillation is an effective model compression method, but it has some limitations: (1) feature-based distillation methods focus only on distilling the feature map and fail to transfer the relations among data examples; (2) relational distillation methods are either limited to handcrafted relation-extraction functions, such as the L2 norm, or weak at inter- and intra-class relation modeling.

Knowledge Distillation, Model Compression, +2
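
The "handcrafted functions for relation extraction, such as L2 norm" that the snippet criticizes look roughly like the following RKD-style baseline (a sketch of the baseline, not of CORSD itself; function names are illustrative):

```python
import torch
import torch.nn.functional as F

def pairwise_distance_relation(features):
    """Pairwise L2 distances between batch examples, normalized by
    their mean: a handcrafted relation of the kind the snippet mentions."""
    d = torch.cdist(features, features, p=2)
    return d / (d.mean() + 1e-8)

def relational_distillation_loss(student_feats, teacher_feats):
    # Match the relations *between* examples rather than raw feature maps.
    return F.smooth_l1_loss(
        pairwise_distance_relation(student_feats),
        pairwise_distance_relation(teacher_feats),
    )
```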

CLIP-FO3D: Learning Free Open-world 3D Scene Representations from 2D Dense CLIP

no code implementations8 Mar 2023 Junbo Zhang, Runpei Dong, Kaisheng Ma

Training a 3D scene understanding model requires complicated human annotations, which are laborious to collect and result in a model only encoding close-set object semantics.

Scene Understanding, Semantic Segmentation

Hebbian and Gradient-based Plasticity Enables Robust Memory and Rapid Learning in RNNs

1 code implementation7 Feb 2023 Yu Duan, Zhongfan Jia, Qian Li, Yi Zhong, Kaisheng Ma

Comparing different plasticity rules under the same framework shows that Hebbian plasticity is well-suited for several memory and associative learning tasks; however, it is outperformed by gradient-based plasticity on few-shot regression tasks which require the model to infer the underlying mapping.

Few-Shot Learning
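
A generic Hebbian-plastic RNN step, in the spirit of differentiable plasticity, might look like the sketch below; the update rule, shapes, and hyperparameters are illustrative assumptions rather than the paper's exact formulation.

```python
import torch

def plastic_rnn_step(x, h, W, A, H, eta=0.1, decay=0.9):
    """One RNN step whose effective recurrent weights are W + A * H,
    where H is a Hebbian trace of pre/post-synaptic correlations.
    x: projected input (B, n); h: hidden state (B, n);
    W, H: (n, n); A: plasticity gain (scalar or (n, n))."""
    h_new = torch.tanh(x + h @ (W + A * H).t())
    # Hebbian update: decay the old trace, add the batch-averaged
    # outer product of pre-synaptic h and post-synaptic h_new.
    H = decay * H + eta * torch.einsum('bi,bj->ij', h, h_new) / h.shape[0]
    return h_new, H
```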

Contrast with Reconstruct: Contrastive 3D Representation Learning Guided by Generative Pretraining

5 code implementations5 Feb 2023 Zekun Qi, Runpei Dong, Guofan Fan, Zheng Ge, Xiangyu Zhang, Kaisheng Ma, Li Yi

This motivates us to learn 3D representations by sharing the merits of both paradigms, which is non-trivial due to the pattern difference between the two paradigms.

3D Point Cloud Linear Classification, Decoder, +3

Tiny Updater: Towards Efficient Neural Network-Driven Software Updating

no code implementations ICCV 2023 Linfeng Zhang, Kaisheng Ma

Deep neural networks have achieved significant advances in diverse visual tasks, which has substantially increased their deployment in edge-device software.

Efficient Neural Network, Image Classification, +4

Autoencoders as Cross-Modal Teachers: Can Pretrained 2D Image Transformers Help 3D Representation Learning?

4 code implementations16 Dec 2022 Runpei Dong, Zekun Qi, Linfeng Zhang, Junbo Zhang, Jianjian Sun, Zheng Ge, Li Yi, Kaisheng Ma

The success of deep learning heavily relies on large-scale data with comprehensive labels, which is more expensive and time-consuming to acquire in 3D than for 2D images or natural language.

Few-Shot 3D Point Cloud Classification, Knowledge Distillation, +1

Language-Assisted 3D Feature Learning for Semantic Scene Understanding

1 code implementation25 Nov 2022 Junbo Zhang, Guofan Fan, Guanghan Wang, Zhengyuan Su, Kaisheng Ma, Li Yi

To guide 3D feature learning toward important geometric attributes and scene context, we explore the help of textual scene descriptions.

Descriptive, Instance Segmentation, +5

LW-ISP: A Lightweight Model with ISP and Deep Learning

no code implementations8 Oct 2022 Hongyang Chen, Kaisheng Ma

Deep learning (DL)-based methods for low-level vision tasks have many advantages over traditional camera pipelines in terms of hardware prospects, error accumulation, and imaging effects.

Deep Learning, Image Denoising

Contrastive Deep Supervision

1 code implementation12 Jul 2022 Linfeng Zhang, Xin Chen, Junbo Zhang, Runpei Dong, Kaisheng Ma

The success of deep learning is usually accompanied by the growth in neural network depth.

Contrastive Learning, Fine-Grained Image Classification, +3

Rethinking the Augmentation Module in Contrastive Learning: Learning Hierarchical Augmentation Invariance with Expanded Views

1 code implementation CVPR 2022 Junbo Zhang, Kaisheng Ma

A data augmentation module is utilized in contrastive learning to transform the given data example into two views, which is considered essential and irreplaceable.

Contrastive Learning, Data Augmentation
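
The augmentation module in question is the standard two-view pipeline used by SimCLR-style contrastive methods; here is a minimal sketch (parameter values are typical choices, not necessarily the paper's):

```python
import torchvision.transforms as T

# Standard stochastic augmentation pipeline; each call produces a
# different random view of the same input.
augment = T.Compose([
    T.RandomResizedCrop(32),
    T.RandomHorizontalFlip(),
    T.ColorJitter(0.4, 0.4, 0.4, 0.1),
    T.RandomGrayscale(p=0.2),
    T.ToTensor(),
])

def two_views(pil_image):
    """Transform one example into the two views used by contrastive losses."""
    return augment(pil_image), augment(pil_image)
```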

PointDistiller: Structured Knowledge Distillation Towards Efficient and Compact 3D Detection

1 code implementation CVPR 2023 Linfeng Zhang, Runpei Dong, Hung-Shuo Tai, Kaisheng Ma

The remarkable breakthroughs in point cloud representation learning have boosted their usage in real-world applications such as self-driving cars and virtual reality.

3D Object Detection, Knowledge Distillation, +4

Learn from Unpaired Data for Image Restoration: A Variational Bayes Approach

1 code implementation21 Apr 2022 Dihan Zheng, Xiaowen Zhang, Kaisheng Ma, Chenglong Bao

Current approaches aim at generating synthesized training data from unpaired samples by exploring the relationship between the corrupted and clean data.

Image Denoising, Image Restoration, +3

Wavelet Knowledge Distillation: Towards Efficient Image-to-Image Translation

no code implementations CVPR 2022 Linfeng Zhang, Xin Chen, Xiaobing Tu, Pengfei Wan, Ning Xu, Kaisheng Ma

Instead of directly distilling the generated images of teachers, wavelet knowledge distillation first decomposes the images into different frequency bands with discrete wavelet transformation and then only distills the high frequency bands.

Image-to-Image Translation, Knowledge Distillation, +1
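
A minimal single-level version of the idea, using a Haar DWT on 2D grayscale arrays and distilling only the three high-frequency bands; this sketches the mechanism described in the snippet, not the paper's full training setup:

```python
import numpy as np
import pywt

def high_frequency_distillation_loss(student_img, teacher_img):
    """One-level Haar decomposition of 2D (grayscale) arrays; the
    low-frequency band is discarded and only the horizontal, vertical,
    and diagonal high-frequency bands are distilled with an L1 loss."""
    _, (sh, sv, sd) = pywt.dwt2(student_img, 'haar')
    _, (th, tv, td) = pywt.dwt2(teacher_img, 'haar')
    return sum(
        np.abs(s - t).mean()
        for s, t in ((sh, th), (sv, tv), (sd, td))
    )
```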

Finding the Task-Optimal Low-Bit Sub-Distribution in Deep Neural Networks

1 code implementation30 Dec 2021 Runpei Dong, Zhanhong Tan, Mengdi Wu, Linfeng Zhang, Kaisheng Ma

In addition, an efficient deployment flow for mobile CPUs is developed, achieving up to 7.46× inference acceleration on an octa-core ARM CPU.

Image Classification, Model Compression, +3

Multi-Glimpse Network: A Robust and Efficient Classification Architecture based on Recurrent Downsampled Attention

1 code implementation3 Nov 2021 Sia Huat Tan, Runpei Dong, Kaisheng Ma

Inspired by this observation, we propose an end-to-end trainable Multi-Glimpse Network (MGNet) which aims to tackle the challenges of high computation and the lack of robustness based on recurrent downsampled attention mechanism.
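
A simplified stand-in for classifying from a sequence of downsampled glimpses: MGNet learns where to attend recurrently, whereas this sketch just uses fixed center crops (all names and scale values are illustrative):

```python
import torch
import torch.nn.functional as F

def multi_glimpse_predict(backbone, image, scales=(1.0, 0.7, 0.5), size=96):
    """Classify from progressively zoomed-in, downsampled center glimpses
    and average the logits. `backbone` is any image classifier taking
    (B, C, size, size) tensors."""
    _, _, h, w = image.shape
    logits = []
    for s in scales:
        ch, cw = int(h * s), int(w * s)
        top, left = (h - ch) // 2, (w - cw) // 2
        crop = image[:, :, top:top + ch, left:left + cw]
        glimpse = F.interpolate(crop, size=(size, size), mode='bilinear',
                                align_corners=False)
        logits.append(backbone(glimpse))
    return torch.stack(logits).mean(dim=0)
```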

AFEC: Active Forgetting of Negative Transfer in Continual Learning

1 code implementation NeurIPS 2021 Liyuan Wang, Mingtian Zhang, Zhongfan Jia, Qian Li, Chenglong Bao, Kaisheng Ma, Jun Zhu, Yi Zhong

Without access to the old training samples, knowledge transfer from the old tasks to each new task is difficult to determine, as it might be either positive or negative.

Continual Learning, Transfer Learning

Learning From Unpaired Data: A Variational Bayes Approach

no code implementations29 Sep 2021 Dihan Zheng, Xiaowen Zhang, Kaisheng Ma, Chenglong Bao

Collecting paired training data is difficult in practice, whereas unpaired samples are broadly available.

Image Denoising, Super-Resolution, +1

Not All Regions are Worthy to be Distilled: Region-aware Knowledge Distillation Towards Efficient Image-to-Image Translation

no code implementations29 Sep 2021 Linfeng Zhang, Kaisheng Ma

To tackle this challenge, in this paper, we propose Region-aware Knowledge Distillation which first localizes the crucial regions in the images with attention mechanism.

Contrastive Learning, Image-to-Image Translation, +2

Improve Object Detection with Feature-based Knowledge Distillation: Towards Accurate and Efficient Detectors

1 code implementation ICLR 2021 Linfeng Zhang, Kaisheng Ma

In this paper, we suggest that the failure of knowledge distillation on object detection is mainly caused by two reasons: (1) the imbalance between pixels of foreground and background and (2) lack of distillation on the relation between different pixels.

Image Classification, Knowledge Distillation, +3
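
A minimal sketch of how the foreground-background imbalance can be addressed: weight the feature-distillation loss by a teacher-derived spatial attention map so that the many background pixels do not dominate. This illustrates the first point above in simplified form, not the paper's complete method:

```python
import torch
import torch.nn.functional as F

def attention_weighted_feature_kd(student_feat, teacher_feat):
    """Feature distillation weighted by a spatial attention map from the
    teacher. Assumes student and teacher feature maps share the shape
    (B, C, H, W)."""
    # Spatial attention: per-location mean of absolute activations.
    attn = teacher_feat.abs().mean(dim=1, keepdim=True)        # (B, 1, H, W)
    attn = F.softmax(attn.flatten(2), dim=-1).view_as(attn)    # sums to 1
    # Foreground locations (high attention) dominate the distillation loss.
    return (attn * (student_feat - teacher_feat) ** 2).sum(dim=(2, 3)).mean()
```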

An Unsupervised Deep Learning Approach for Real-World Image Denoising

1 code implementation ICLR 2021 Dihan Zheng, Sia Huat Tan, Xiaowen Zhang, Zuoqiang Shi, Kaisheng Ma, Chenglong Bao

In the real-world case, the noise distribution is so complex that the simplified additive white Gaussian noise (AWGN) assumption rarely holds, which significantly degrades the performance of Gaussian denoisers.

Decoder, Deep Learning, +1

Task-Oriented Feature Distillation

1 code implementation NeurIPS 2020 Linfeng Zhang, Yukang Shi, Zuoqiang Shi, Kaisheng Ma, Chenglong Bao

Moreover, an orthogonal loss is applied to the feature resizing layer in TOFD to improve the performance of knowledge distillation.

3D Classification, General Classification, +2
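
A generic orthogonality penalty on a feature-resizing layer's weight, of the kind the snippet mentions; TOFD's exact formulation may differ, and the function name is illustrative:

```python
import torch

def orthogonal_loss(resize_layer_weight):
    """Penalty ||W W^T - I||_F^2 on a feature-resizing layer's weight,
    encouraging the resized features to preserve structure. Works for
    linear (out, in) or conv (out, in, kh, kw) weights."""
    W = resize_layer_weight.flatten(1)                  # (out, in * kh * kw)
    gram = W @ W.t()
    eye = torch.eye(gram.shape[0], device=W.device)
    return ((gram - eye) ** 2).sum()
```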

An Image Enhancing Pattern-based Sparsity for Real-time Inference on Mobile Devices

no code implementations ECCV 2020 Xiaolong Ma, Wei Niu, Tianyun Zhang, Sijia Liu, Sheng Lin, Hongjia Li, Xiang Chen, Jian Tang, Kaisheng Ma, Bin Ren, Yanzhi Wang

Weight pruning has been widely acknowledged as a straightforward and effective method to eliminate redundancy in Deep Neural Networks (DNN), thereby achieving acceleration on various platforms.

Code Generation, Compiler Optimization

Exploring Frequency Domain Interpretation of Convolutional Neural Networks

no code implementations27 Nov 2019 Zhongfan Jia, Chenglong Bao, Kaisheng Ma

To the best of our knowledge, there is no study on the interpretation of modern CNNs from the perspective of the frequency proportion of filters.
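
One simple way to quantify the "frequency proportion" of a convolutional filter is to measure how much of its spectral energy lies above a radial cutoff; the sketch below is an illustrative metric, not necessarily the one used in the paper:

```python
import numpy as np

def high_frequency_proportion(kernel, cutoff=0.25):
    """Fraction of a 2D conv filter's spectral energy above a radial
    frequency cutoff (normalized to [0, ~0.7] across the spectrum)."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(kernel))) ** 2
    h, w = spectrum.shape
    yy, xx = np.mgrid[:h, :w]
    radius = np.hypot(yy - h / 2, xx - w / 2) / max(h, w)
    high = spectrum[radius > cutoff].sum()
    return high / (spectrum.sum() + 1e-12)
```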

PCONV: The Missing but Desirable Sparsity in DNN Weight Pruning for Real-time Execution on Mobile Devices

no code implementations6 Sep 2019 Xiaolong Ma, Fu-Ming Guo, Wei Niu, Xue Lin, Jian Tang, Kaisheng Ma, Bin Ren, Yanzhi Wang

Model compression techniques on Deep Neural Network (DNN) have been widely acknowledged as an effective way to achieve acceleration on a variety of platforms, and DNN weight pruning is a straightforward and effective method.

Model Compression
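
For reference, the non-structured magnitude pruning that pattern-based methods such as PCONV refine looks like this minimal sketch (illustrative only; PCONV itself additionally constrains the surviving weights to a small set of fixed kernel patterns):

```python
import torch

def magnitude_prune(weight, sparsity=0.9):
    """Non-structured magnitude pruning: zero out the smallest-magnitude
    entries of a weight tensor, keeping (1 - sparsity) of them."""
    k = max(1, int(weight.numel() * sparsity))
    threshold = weight.abs().flatten().kthvalue(k).values
    mask = (weight.abs() > threshold).float()
    return weight * mask, mask
```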

Non-Structured DNN Weight Pruning -- Is It Beneficial in Any Platform?

no code implementations3 Jul 2019 Xiaolong Ma, Sheng Lin, Shaokai Ye, Zhezhi He, Linfeng Zhang, Geng Yuan, Sia Huat Tan, Zhengang Li, Deliang Fan, Xuehai Qian, Xue Lin, Kaisheng Ma, Yanzhi Wang

Based on the proposed comparison framework, with the same accuracy and quantization, the results show that non-structured pruning is not competitive in terms of either storage or computation efficiency.

Model Compression, Quantization

Brain-inspired reverse adversarial examples

no code implementations28 May 2019 Shaokai Ye, Sia Huat Tan, Kaidi Xu, Yanzhi Wang, Chenglong Bao, Kaisheng Ma

In contrast, current state-of-the-art deep learning approaches depend heavily on the variety of training samples and the capacity of the network.

Quantization

Be Your Own Teacher: Improve the Performance of Convolutional Neural Networks via Self Distillation

1 code implementation ICCV 2019 Linfeng Zhang, Jiebo Song, Anni Gao, Jingwei Chen, Chenglong Bao, Kaisheng Ma

Different from traditional knowledge distillation, a knowledge transfer methodology among networks that forces student networks to approximate the softmax-layer outputs of pre-trained teacher networks, the proposed self-distillation framework distills knowledge within the network itself.

Knowledge Distillation
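
A minimal sketch of the self-distillation objective described above: a shallower classifier is trained on ground-truth labels plus the softened outputs of the network's own deepest classifier. The paper additionally uses feature-level hints; the temperature and weighting here are illustrative:

```python
import torch.nn.functional as F

def self_distillation_loss(shallow_logits, deep_logits, labels, T=3.0, alpha=0.3):
    """Combine cross-entropy on labels with a KL term that pulls a
    shallow exit toward the deepest classifier's softened distribution."""
    ce = F.cross_entropy(shallow_logits, labels)
    kd = F.kl_div(
        F.log_softmax(shallow_logits / T, dim=1),
        F.softmax(deep_logits.detach() / T, dim=1),  # deepest exit as teacher
        reduction='batchmean',
    ) * T * T
    return (1 - alpha) * ce + alpha * kd
```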

Toward Extremely Low Bit and Lossless Accuracy in DNNs with Progressive ADMM

no code implementations2 May 2019 Sheng Lin, Xiaolong Ma, Shaokai Ye, Geng Yuan, Kaisheng Ma, Yanzhi Wang

Weight quantization is one of the most important techniques for Deep Neural Network (DNN) model compression.

Model Compression, Quantization
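
As background, the basic uniform symmetric weight quantization that progressive ADMM drives toward extremely low bit-widths can be sketched as follows (illustrative only; the ADMM optimization itself is not shown):

```python
import torch

def quantize_weights(weight, bits=2):
    """Uniform symmetric quantization to a given bit-width; bits=2
    yields ternary weights {-1, 0, +1} times a shared scale."""
    levels = 2 ** (bits - 1) - 1
    scale = weight.abs().max() / max(levels, 1)
    q = torch.clamp(torch.round(weight / scale), -levels, levels)
    return q * scale
```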

Adversarial Robustness vs Model Compression, or Both?

1 code implementation29 Mar 2019 Shaokai Ye, Kaidi Xu, Sijia Liu, Jan-Henrik Lambrechts, huan zhang, Aojun Zhou, Kaisheng Ma, Yanzhi Wang, Xue Lin

Furthermore, this work studies two hypotheses about weight pruning in the conventional setting and finds that weight pruning is essential for reducing the network model size in the adversarial setting; training a small model from scratch, even with initialization inherited from the large model, cannot achieve both adversarial robustness and high standard accuracy.

Adversarial Robustness, Model Compression, +1

StructADMM: A Systematic, High-Efficiency Framework of Structured Weight Pruning for DNNs

1 code implementation29 Jul 2018 Tianyun Zhang, Shaokai Ye, Kaiqi Zhang, Xiaolong Ma, Ning Liu, Linfeng Zhang, Jian Tang, Kaisheng Ma, Xue Lin, Makan Fardad, Yanzhi Wang

Without loss of accuracy on the AlexNet model, we achieve 2.58× and 3.65× average measured speedup on two GPUs, clearly outperforming prior work.

Model Compression
