Search Results for author: Aojun Zhou

Found 22 papers, 15 papers with code

LLaMA-Adapter V2: Parameter-Efficient Visual Instruction Model

2 code implementations • 28 Apr 2023 • Peng Gao, Jiaming Han, Renrui Zhang, Ziyi Lin, Shijie Geng, Aojun Zhou, Wei Zhang, Pan Lu, Conghui He, Xiangyu Yue, Hongsheng Li, Yu Qiao

This strategy effectively alleviates the interference between the two tasks of image-text alignment and instruction following and achieves strong multi-modal reasoning with only a small-scale image-text and instruction dataset.

Instruction Following Optical Character Recognition (OCR)

Not All Features Matter: Enhancing Few-shot CLIP with Adaptive Prior Refinement

1 code implementation • 3 Apr 2023 • Xiangyang Zhu, Renrui Zhang, Bowei He, Aojun Zhou, Dong Wang, Bin Zhao, Peng Gao

The popularity of Contrastive Language-Image Pre-training (CLIP) has propelled its application to diverse downstream vision tasks.

Few-Shot Learning

Omni-Dimensional Dynamic Convolution

1 code implementation • ICLR 2022 • Chao Li, Aojun Zhou, Anbang Yao

Learning a single static convolutional kernel in each convolutional layer is the common training paradigm of modern Convolutional Neural Networks (CNNs).
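
As a hedged illustration of the dynamic-convolution idea this paper builds on, the sketch below replaces the single static kernel with an attention-weighted mixture of K candidate kernels. It covers only the kernel-number dimension; ODConv's contribution is to learn complementary attentions along all four kernel-space dimensions (spatial size, input channels, output channels, kernel number), which this minimal sketch omits. The `DynamicConv2d` module and its hyperparameters are illustrative, not the paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicConv2d(nn.Module):
    """Mixture-of-kernels convolution: a per-input attention over K
    candidate kernels replaces the single static kernel. Minimal,
    hypothetical sketch; ODConv adds attentions over three more
    kernel-space dimensions."""
    def __init__(self, in_ch, out_ch, k=3, num_kernels=4):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(num_kernels, out_ch, in_ch, k, k) * 0.02)
        self.attn = nn.Sequential(               # squeeze-and-excite style gate
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(in_ch, num_kernels),
        )
        self.padding = k // 2

    def forward(self, x):
        alpha = F.softmax(self.attn(x), dim=1)           # (B, K) kernel attention
        # Aggregate the K kernels into one per-sample kernel, then convolve.
        w = torch.einsum("bk,koihw->boihw", alpha, self.weight)
        b, _, h, wd = x.shape
        out = F.conv2d(x.reshape(1, -1, h, wd),          # grouped trick: batch -> groups
                       w.reshape(-1, *w.shape[2:]),
                       padding=self.padding, groups=b)
        return out.reshape(b, -1, *out.shape[2:])

x = torch.randn(2, 8, 16, 16)
print(DynamicConv2d(8, 16)(x).shape)  # torch.Size([2, 16, 16, 16])
```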

An Algorithm-Hardware Co-Optimized Framework for Accelerating N:M Sparse Transformers

no code implementations • 12 Aug 2022 • Chao Fang, Aojun Zhou, Zhongfeng Wang

(1) From the algorithm perspective, we propose a sparsity inheritance mechanism along with an inherited dynamic pruning (IDP) method to rapidly obtain a series of N:M sparse candidate Transformers.

Model Compression

Group R-CNN for Weakly Semi-supervised Object Detection with Points

1 code implementation • CVPR 2022 • Shilong Zhang, Zhuoran Yu, Liyang Liu, Xinjiang Wang, Aojun Zhou, Kai Chen

The core of this task is to train a point-to-box regressor on well-labeled images that can be used to predict credible bounding boxes for each point annotation.

Object Detection Representation Learning +1
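
A hedged sketch of what a bare point-to-box regressor could look like: a small MLP maps the feature vector sampled at each annotated point to box extents around that point. All names and the decoding scheme here are assumptions for illustration; Group R-CNN's actual head operates on groups of instance-level proposals per point, which this omits.

```python
import torch
import torch.nn as nn

class PointToBoxRegressor(nn.Module):
    """Hypothetical sketch: regress a box (left, top, right, bottom
    distances from the annotated point) from the feature vector
    sampled at that point."""
    def __init__(self, feat_dim=256, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),
        )

    def forward(self, feat_map, points):
        # feat_map: (C, H, W); points: (N, 2) integer (x, y) locations
        feats = feat_map[:, points[:, 1], points[:, 0]].T   # (N, C)
        ltrb = self.mlp(feats).exp()                        # positive distances
        x, y = points[:, 0].float(), points[:, 1].float()
        # Decode to (x1, y1, x2, y2) boxes around each point.
        return torch.stack([x - ltrb[:, 0], y - ltrb[:, 1],
                            x + ltrb[:, 2], y + ltrb[:, 3]], dim=1)

fm = torch.randn(256, 50, 50)
pts = torch.tensor([[10, 20], [30, 5]])
print(PointToBoxRegressor()(fm, pts).shape)  # torch.Size([2, 4])
```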

Pyramid Fusion Transformer for Semantic Segmentation

no code implementations • 11 Jan 2022 • Zipeng Qin, Jianbo Liu, Xiaolin Zhang, Maoqing Tian, Aojun Zhou, Shuai Yi, Hongsheng Li

The recently proposed MaskFormer gives a refreshed perspective on the task of semantic segmentation: it shifts from the popular pixel-level classification paradigm to a mask-level classification method.

Semantic Segmentation
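
The mask-level classification paradigm referenced above can be summarized in one inference step: each of Q queries predicts a class distribution and a binary mask, and per-pixel semantic scores are obtained by marginalizing over the queries. The sketch below shows that MaskFormer-style combination step; the shapes and function name are illustrative.

```python
import torch

def mask_classification_to_semseg(class_logits, mask_logits):
    """Combine per-query class and mask predictions into per-pixel
    semantic scores, as in MaskFormer-style inference.
    class_logits: (Q, C+1) with a trailing 'no object' class
    mask_logits:  (Q, H, W) binary-mask logits per query
    returns:      (C, H, W) per-pixel class scores
    """
    class_probs = class_logits.softmax(-1)[:, :-1]   # drop 'no object'
    mask_probs = mask_logits.sigmoid()
    # Marginalize over queries: score[c,h,w] = sum_q p(c|q) * mask_q(h,w)
    return torch.einsum("qc,qhw->chw", class_probs, mask_probs)

semseg = mask_classification_to_semseg(torch.randn(100, 21), torch.randn(100, 64, 64))
print(semseg.shape)  # torch.Size([20, 64, 64])
```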

DominoSearch: Find layer-wise fine-grained N:M sparse schemes from dense neural networks

1 code implementation • NeurIPS 2021 • Wei Sun, Aojun Zhou, Sander Stuijk, Rob Wijnhoven, Andrew Oakleigh Nelson, Hongsheng Li, Henk Corporaal

However, the existing N:M algorithms only address the challenge of how to train N:M sparse neural networks in a uniform fashion (i.e., every layer has the same N:M sparsity) and suffer from a significant accuracy drop for high sparsity (i.e., when sparsity > 80%).

Network Pruning
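
To make the uniform-versus-layer-wise distinction concrete, the sketch below magnitude-projects each layer's weights to its own (N, M) pattern. The per-layer scheme assignments are made up for illustration; DominoSearch's actual contribution, the search procedure that finds good layer-wise schemes, is not shown.

```python
import torch

def nm_prune(weight, n, m):
    """Keep the n largest-magnitude weights in every group of m
    consecutive weights along the input dimension (magnitude-based
    projection to an N:M sparse pattern)."""
    out_ch, in_ch = weight.shape
    groups = weight.abs().reshape(out_ch, in_ch // m, m)
    idx = groups.topk(n, dim=-1).indices
    mask = torch.zeros_like(groups).scatter_(-1, idx, 1.0)
    return weight * mask.reshape(out_ch, in_ch)

# Uniform scheme: every layer 2:4. Layer-wise: each layer gets its own
# N:M (the schemes below are invented for illustration).
layers = {"fc1": torch.randn(8, 16), "fc2": torch.randn(8, 16)}
layerwise_schemes = {"fc1": (1, 4), "fc2": (2, 4)}
for name, w in layers.items():
    n, m = layerwise_schemes[name]
    sparse_w = nm_prune(w, n, m)
    print(name, f"{(sparse_w == 0).float().mean():.2f} zeros")
```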

Incorporating Convolution Designs into Visual Transformers

2 code implementations • ICCV 2021 • Kun Yuan, Shaopeng Guo, Ziwei Liu, Aojun Zhou, Fengwei Yu, Wei Wu

Motivated by the success of Transformers in natural language processing (NLP) tasks, several attempts (e.g., ViT and DeiT) have emerged to apply Transformers to the vision domain.

Image Classification

Learning N:M Fine-grained Structured Sparse Neural Networks From Scratch

4 code implementations • ICLR 2021 • Aojun Zhou, Yukun Ma, Junnan Zhu, Jianbo Liu, Zhijie Zhang, Kun Yuan, Wenxiu Sun, Hongsheng Li

In this paper, we are the first to study training an N:M fine-grained structured sparse network from scratch, which can simultaneously maintain the advantages of both unstructured fine-grained sparsity and structured coarse-grained sparsity on specifically designed GPUs.
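
A minimal sketch of the training-from-scratch mechanic: dense weights are magnitude-projected to the N:M pattern in the forward pass, while gradients flow straight through to all dense weights. The paper's SR-STE additionally adds a sparse-refining regularization term to the backward pass, which is omitted here.

```python
import torch

class NMSparsify(torch.autograd.Function):
    """Forward: magnitude-project dense weights to an N:M pattern
    (2:4 by default). Backward: straight-through estimator, so all
    dense weights receive gradient. The paper's SR-STE adds a
    sparse-refining term to this backward pass."""
    @staticmethod
    def forward(ctx, weight, n=2, m=4):
        out_ch, in_ch = weight.shape
        groups = weight.abs().reshape(out_ch, in_ch // m, m)
        idx = groups.topk(n, dim=-1).indices
        mask = torch.zeros_like(groups).scatter_(-1, idx, 1.0)
        return weight * mask.reshape(out_ch, in_ch)

    @staticmethod
    def backward(ctx, grad_out):
        return grad_out, None, None

w = torch.randn(4, 8, requires_grad=True)
sparse_w = NMSparsify.apply(w)     # used in place of w inside the layer
sparse_w.sum().backward()
print(w.grad.abs().min() > 0)      # STE: every dense weight gets gradient
```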

Scale Calibrated Training: Improving Generalization of Deep Networks via Scale-Specific Normalization

no code implementations • 31 Aug 2019 • Zhuoran Yu, Aojun Zhou, Yukun Ma, Yudian Li, Xiaohan Zhang, Ping Luo

Experimental results show that SCT improves the accuracy of a single ResNet-50 on ImageNet by 1.7% and 11.5% when testing on image sizes of 224 and 128, respectively.

Data Augmentation Image Classification +1

HBONet: Harmonious Bottleneck on Two Orthogonal Dimensions

1 code implementation • ICCV 2019 • Duo Li, Aojun Zhou, Anbang Yao

MobileNets, a class of top-performing convolutional neural network architectures in terms of the accuracy-efficiency trade-off, are increasingly used in many resource-aware vision applications.

Object Detection +2

Deeply-supervised Knowledge Synergy

1 code implementation • CVPR 2019 • Dawei Sun, Anbang Yao, Aojun Zhou, Hao Zhao

Convolutional Neural Networks (CNNs) have become deeper and more complicated compared with the pioneering AlexNet.

General Classification Image Classification

Adversarial Robustness vs Model Compression, or Both?

1 code implementation • 29 Mar 2019 • Shaokai Ye, Kaidi Xu, Sijia Liu, Jan-Henrik Lambrechts, Huan Zhang, Aojun Zhou, Kaisheng Ma, Yanzhi Wang, Xue Lin

Furthermore, this work studies two hypotheses about weight pruning in the conventional setting and finds that weight pruning is essential for reducing the network model size in the adversarial setting; training a small model from scratch, even with initialization inherited from the large model, cannot achieve both adversarial robustness and high standard accuracy.

Adversarial Robustness Model Compression +1

SnapQuant: A Probabilistic and Nested Parameterization for Binary Networks

no code implementations • 27 Sep 2018 • Kuan Wang, Hao Zhao, Anbang Yao, Aojun Zhou, Dawei Sun, Yurong Chen

During the training phase, we generate binary weights on-the-fly since what we actually maintain is the policy network, and all the binary weights are used in a burn-after-reading style.
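
A hedged sketch of the maintain-the-policy, sample-the-weights idea described above: real-valued sign probabilities are the only persistent parameters, binary weights are sampled on-the-fly each forward pass and then discarded, and a straight-through estimator lets gradients reach the probabilities. The module name and the simple Bernoulli parameterization are assumptions; SnapQuant's nested, hierarchical parameterization is not reproduced here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BernoulliBinaryLinear(nn.Module):
    """Illustrative sketch: maintain per-weight sign probabilities
    (the 'policy'), sample {-1, +1} weights on-the-fly each forward
    pass, and discard them afterwards ('burn after reading'). A
    straight-through estimator lets gradients reach the logits."""
    def __init__(self, in_f, out_f):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(out_f, in_f))  # p(w = +1)

    def forward(self, x):
        p = torch.sigmoid(self.logits)
        sample = torch.bernoulli(p) * 2.0 - 1.0        # {-1, +1}, not stored
        # Straight-through: forward uses the sample, backward sees 2p - 1.
        soft = 2.0 * p - 1.0
        w = sample.detach() + soft - soft.detach()
        return F.linear(x, w)

layer = BernoulliBinaryLinear(16, 4)
out = layer(torch.randn(2, 16))
out.sum().backward()
print(layer.logits.grad.shape)  # torch.Size([4, 16])
```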

Explicit Loss-Error-Aware Quantization for Low-Bit Deep Neural Networks

no code implementations • CVPR 2018 • Aojun Zhou, Anbang Yao, Kuan Wang, Yurong Chen

Through explicitly regularizing the loss perturbation and the weight approximation error in an incremental way, we show that such a new optimization method is theoretically reasonable and practically effective.

Quantization
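
As a hedged reading of the abstract's two regularizers, the sketch below adds a loss-perturbation term and a weight-approximation-error term to the task loss. The function, its coefficients `alpha` and `beta`, and the toy ternary quantizer are all illustrative assumptions, not the paper's exact incremental formulation.

```python
import torch

def elq_style_objective(task_loss_fn, w, quantize, alpha=1.0, beta=1.0):
    """Illustrative objective in the spirit of ELQ: penalize both the
    loss perturbation caused by quantization and the weight
    approximation error. `quantize`, `alpha`, `beta` are assumptions."""
    wq = quantize(w)
    loss_perturbation = (task_loss_fn(wq) - task_loss_fn(w)).abs()
    approx_error = (w - wq).pow(2).sum()
    return task_loss_fn(w) + alpha * loss_perturbation + beta * approx_error

# Toy example: quadratic 'task loss' and a threshold-based ternary quantizer.
quantize = lambda w: w.sign() * (w.abs() > 0.5).float()
w = torch.randn(10, requires_grad=True)
loss = elq_style_objective(lambda v: (v ** 2).sum(), w, quantize)
loss.backward()   # gradient flows through the full-precision weights
```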

Deep Neural Network Compression with Single and Multiple Level Quantization

1 code implementation • 6 Mar 2018 • Yuhui Xu, Yongzhuang Wang, Aojun Zhou, Weiyao Lin, Hongkai Xiong

In this paper, we propose two novel network quantization approaches: single-level network quantization (SLQ) for high-bit quantization and multi-level network quantization (MLQ) for extremely low-bit quantization (ternary). We are the first to consider network quantization from both the width and depth levels.

Neural Network Compression Quantization
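
For the extremely low-bit (ternary) case mentioned above, the sketch below shows a common symmetric ternary quantizer that maps weights to {-s, 0, +s} via a magnitude threshold. The threshold rule and scale estimate are standard heuristics and not necessarily SLQ/MLQ's exact choices.

```python
import torch

def ternary_quantize(w, t=0.7):
    """Symmetric ternary quantization to {-s, 0, +s}. The threshold
    rule (t * mean|w|) and the scale s are common heuristics, not
    necessarily the paper's exact formulation."""
    delta = t * w.abs().mean()              # magnitude threshold
    mask = (w.abs() > delta).float()        # which weights survive
    s = (w.abs() * mask).sum() / mask.sum().clamp(min=1.0)  # shared scale
    return s * w.sign() * mask

w = torch.randn(4, 4)
print(ternary_quantize(w))                  # values drawn from {-s, 0, +s}
```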

Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights

3 code implementations • 10 Feb 2017 • Aojun Zhou, Anbang Yao, Yiwen Guo, Lin Xu, Yurong Chen

The weights in the other group are responsible for compensating for the accuracy loss from quantization, and thus they are the ones to be re-trained.

Quantization
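
The two-group scheme described above can be sketched as follows: in each iteration, the largest-magnitude fraction of weights is quantized (here, to powers of two) and frozen, while the remaining full-precision weights are left free to be re-trained as compensation; the quantized fraction grows until all weights are quantized. The quantization grid bounds and the partition fraction below are illustrative, and the re-training loop is omitted.

```python
import torch

def pow2_quantize(w):
    """Snap each weight to the nearest power of two (sign preserved).
    INQ's actual grid also includes zero and is bounded by the layer's
    weight range; this is a simplified stand-in."""
    exp = torch.clamp(torch.round(torch.log2(w.abs().clamp(min=1e-8))), -8, 0)
    return w.sign() * torch.pow(2.0, exp)

def inq_partition_step(w, frac=0.5):
    """Quantize and freeze the `frac` largest-magnitude weights; the
    rest stay full-precision and would be re-trained to compensate."""
    k = int(frac * w.numel())
    idx = w.abs().flatten().topk(k).indices
    frozen = torch.zeros(w.numel(), dtype=torch.bool)
    frozen[idx] = True
    frozen = frozen.reshape(w.shape)
    wq = torch.where(frozen, pow2_quantize(w), w)
    return wq, frozen           # `frozen` masks gradients during re-training

w = torch.randn(4, 4)
wq, frozen = inq_partition_step(w)
print(frozen.float().mean())    # 0.5 of the weights quantized and frozen
```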
