Search Results for author: Zhao Zhong

Found 15 papers, 4 papers with code

PanGu-Draw: Advancing Resource-Efficient Text-to-Image Synthesis with Time-Decoupled Training and Reusable Coop-Diffusion

no code implementations • 27 Dec 2023 • Guansong Lu, Yuanfan Guo, Jianhua Han, Minzhe Niu, Yihan Zeng, Songcen Xu, Zeyi Huang, Zhao Zhong, Wei Zhang, Hang Xu

Current large-scale diffusion models represent a giant leap forward in conditional image synthesis, capable of interpreting diverse cues like text, human poses, and edges.

Computational Efficiency • Denoising +1

Learning Low-Rank Representations for Model Compression

no code implementations • 21 Nov 2022 • Zezhou Zhu, Yucong Zhou, Zhao Zhong

Vector Quantization (VQ) is an appealing model compression method for obtaining a tiny model with little loss of accuracy (a rough illustrative sketch follows below).

Clustering • Model Compression +1
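The snippet above only names vector quantization at a high level; as a rough illustration of the generic weight-quantization idea (not this paper's specific low-rank method), the sketch below clusters a layer's weight sub-vectors with k-means and stores a small codebook plus integer codes. The sub-vector size, codebook size, and function names are assumptions.

```python
# Generic weight vector-quantization sketch (illustrative only; not the paper's
# exact low-rank formulation). Weights are split into sub-vectors, clustered
# with k-means, and stored as a small codebook plus one byte-sized code each.
import numpy as np
from sklearn.cluster import KMeans

def quantize_weights(weight, subvec_dim=4, codebook_size=256):
    """Compress a 2-D weight matrix into (codebook, codes)."""
    rows, cols = weight.shape
    assert cols % subvec_dim == 0, "columns must be divisible by subvec_dim"
    subvecs = weight.reshape(-1, subvec_dim)               # one row per sub-vector
    kmeans = KMeans(n_clusters=codebook_size, n_init=4, random_state=0).fit(subvecs)
    codebook = kmeans.cluster_centers_                      # (codebook_size, subvec_dim)
    codes = kmeans.labels_.astype(np.uint8)                 # one byte per sub-vector
    return codebook, codes

def dequantize_weights(codebook, codes, shape):
    """Rebuild an approximate weight matrix from the codebook and codes."""
    return codebook[codes].reshape(shape)

if __name__ == "__main__":
    w = np.random.randn(256, 256).astype(np.float32)
    codebook, codes = quantize_weights(w)
    w_hat = dequantize_weights(codebook, codes, w.shape)
    print("reconstruction MSE:", float(((w - w_hat) ** 2).mean()))
```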

Collaboration of Experts: Achieving 80% Top-1 Accuracy on ImageNet with 100M FLOPs

no code implementations • 8 Jul 2021 • Yikang Zhang, Zhuo Chen, Zhao Zhong

Our method achieves state-of-the-art performance on ImageNet: 80.7% top-1 accuracy with 194M FLOPs.

Image Classification

Learning specialized activation functions with the Piecewise Linear Unit

no code implementations • ICCV 2021 • Yucong Zhou, Zezhou Zhu, Zhao Zhong

It can learn specialized activation functions and achieves SOTA performance on large-scale datasets like ImageNet and COCO.
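As a rough companion to the description above, here is a minimal learnable piecewise-linear activation in PyTorch. It follows the generic PWLU idea (equal intervals over a bounded input range with learnable breakpoint values and boundary slopes); the interval count, range, and initialisation are illustrative assumptions, not the paper's exact parameterisation.

```python
# Minimal learnable piecewise-linear activation sketch (generic PWLU-style
# module). Breakpoint values are learnable; the activation interpolates
# linearly between them and extrapolates linearly outside [left, right].
import torch
import torch.nn as nn

class PiecewiseLinearUnit(nn.Module):
    def __init__(self, num_intervals=16, left=-3.0, right=3.0):
        super().__init__()
        self.left, self.right, self.num_intervals = left, right, num_intervals
        # initialise breakpoint values to ReLU so training starts from a familiar shape
        xs = torch.linspace(left, right, num_intervals + 1)
        self.heights = nn.Parameter(torch.relu(xs))
        # learnable slopes used outside the covered range
        self.left_slope = nn.Parameter(torch.tensor(0.0))
        self.right_slope = nn.Parameter(torch.tensor(1.0))

    def forward(self, x):
        step = (self.right - self.left) / self.num_intervals
        # index of the interval each element falls into, clamped to the valid range
        idx = torch.clamp(((x - self.left) / step).floor().long(), 0, self.num_intervals - 1)
        x0 = self.left + idx.float() * step
        y0, y1 = self.heights[idx], self.heights[idx + 1]
        inside = y0 + (x - x0) * (y1 - y0) / step
        below = self.heights[0] + (x - self.left) * self.left_slope
        above = self.heights[-1] + (x - self.right) * self.right_slope
        return torch.where(x < self.left, below, torch.where(x > self.right, above, inside))

if __name__ == "__main__":
    act = PiecewiseLinearUnit()
    print(act(torch.randn(4, 8)).shape)
```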

FixNorm: Dissecting Weight Decay for Training Deep Neural Networks

no code implementations • 29 Mar 2021 • Yucong Zhou, Yunxiao Sun, Zhao Zhong

Based on this discovery, we propose a new training method called FixNorm, which discards weight decay and directly controls the two mechanisms.
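A minimal sketch of the general idea of replacing weight decay with direct norm control: after every optimizer step, each weight tensor is rescaled to a constant norm. The target norm value and the set of layers touched are assumptions for illustration, not the paper's exact FixNorm recipe.

```python
# Hedged sketch of the "fix the weight norm instead of using weight decay"
# idea; the norm target and layer selection are illustrative assumptions.
import torch
import torch.nn as nn

def fix_weight_norms(model, target_norm=1.0):
    """Rescale each conv/linear weight to a constant L2 norm after an update."""
    with torch.no_grad():
        for module in model.modules():
            if isinstance(module, (nn.Conv2d, nn.Linear)):
                w = module.weight
                w.mul_(target_norm / (w.norm() + 1e-12))

# usage inside a normal training loop, with weight decay disabled in the optimizer
model = nn.Sequential(nn.Linear(10, 10), nn.ReLU(), nn.Linear(10, 2))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, weight_decay=0.0)
x, y = torch.randn(32, 10), torch.randint(0, 2, (32,))
loss = nn.CrossEntropyLoss()(model(x), y)
loss.backward()
optimizer.step()
fix_weight_norms(model)          # directly control the weight-norm mechanism
optimizer.zero_grad()
```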

AutoBSS: An Efficient Algorithm for Block Stacking Style Search

no code implementations • NeurIPS 2020 • Yikang Zhang, Jian Zhang, Zhao Zhong

Neural network architecture design mostly focuses on new convolutional operators or special topological structures of network blocks, while little attention is paid to how the blocks are stacked, a configuration called the Block Stacking Style (BSS) (see the sketch below).

AutoML • Bayesian Optimization +6
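To make the notion of a Block Stacking Style concrete, the sketch below treats a BSS as a per-stage list of (block count, channel width) pairs and builds a network from it; a search procedure such as Bayesian optimisation would then score many such configurations. The block definition, stage count, and search ranges are illustrative assumptions, not the AutoBSS search space.

```python
# Illustrative "Block Stacking Style" sketch: the block type is fixed, only
# how many blocks are stacked per stage (and the stage width) varies.
import random
import torch.nn as nn

def basic_block(channels):
    return nn.Sequential(
        nn.Conv2d(channels, channels, 3, padding=1, bias=False),
        nn.BatchNorm2d(channels),
        nn.ReLU(inplace=True),
    )

def build_from_bss(bss, stem_channels=32):
    """bss is a list of (num_blocks, channels) pairs, one per stage."""
    layers, in_ch = [nn.Conv2d(3, stem_channels, 3, padding=1)], stem_channels
    for num_blocks, channels in bss:
        layers.append(nn.Conv2d(in_ch, channels, 1))          # change width between stages
        layers.extend(basic_block(channels) for _ in range(num_blocks))
        layers.append(nn.MaxPool2d(2))                        # downsample between stages
        in_ch = channels
    return nn.Sequential(*layers)

def random_bss(num_stages=4):
    return [(random.randint(1, 6), random.choice([32, 64, 128, 256])) for _ in range(num_stages)]

# a search procedure would score many configurations and keep the best;
# here we just sample a few candidates
candidates = [random_bss() for _ in range(3)]
models = [build_from_bss(bss) for bss in candidates]
```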

DyNet: Dynamic Convolution for Accelerating Convolutional Neural Networks

no code implementations • 22 Apr 2020 • Yikang Zhang, Jian Zhang, Qiang Wang, Zhao Zhong

On one hand, we can reduce the computation cost remarkably while maintaining the performance.
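For context, the sketch below shows a generic dynamic-convolution layer: a lightweight gate predicts per-sample coefficients that fuse several candidate kernels into a single kernel, so only one convolution is executed per input. This is the common dynamic-convolution pattern and is not guaranteed to match DyNet's exact formulation.

```python
# Generic dynamic-convolution sketch: input-dependent coefficients fuse
# several candidate kernels before a single grouped convolution is applied.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicConv2d(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size=3, num_kernels=4):
        super().__init__()
        self.out_ch, self.in_ch, self.k = out_ch, in_ch, kernel_size
        self.weight = nn.Parameter(torch.randn(num_kernels, out_ch, in_ch, kernel_size, kernel_size) * 0.01)
        self.gate = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(in_ch, num_kernels), nn.Softmax(dim=1))

    def forward(self, x):
        b = x.size(0)
        coeff = self.gate(x)                                      # (B, num_kernels)
        # fuse candidate kernels into one kernel per sample
        fused = torch.einsum("bk,koihw->boihw", coeff, self.weight)
        fused = fused.reshape(b * self.out_ch, self.in_ch, self.k, self.k)
        # grouped convolution applies each sample's fused kernel to that sample only
        out = F.conv2d(x.reshape(1, b * self.in_ch, *x.shape[2:]), fused,
                       padding=self.k // 2, groups=b)
        return out.reshape(b, self.out_ch, *x.shape[2:])

if __name__ == "__main__":
    layer = DynamicConv2d(16, 32)
    print(layer(torch.randn(2, 16, 8, 8)).shape)   # torch.Size([2, 32, 8, 8])
```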

Adversarial AutoAugment

no code implementations • ICLR 2020 • Xin-Yu Zhang, Qiang Wang, Jian Zhang, Zhao Zhong

The augmentation policy network attempts to increase the training loss of a target network by generating adversarial augmentation policies, while the target network learns more robust features from the harder examples, improving generalization (see the sketch below).

Data Augmentation • Image Classification
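A toy version of that adversarial loop, assuming a tiny categorical policy over three augmentation operations and a REINFORCE update whose reward is the target loss; the operation set, policy parameterisation, and update rule are simplified assumptions rather than the paper's full method.

```python
# Minimal adversarial-augmentation loop: the target network minimises its
# loss on the augmented batch, while the policy is pushed (via REINFORCE)
# to pick augmentations that *maximise* that loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

ops = [lambda x: x,                                   # identity
       lambda x: torch.flip(x, dims=[-1]),            # horizontal flip
       lambda x: x + 0.1 * torch.randn_like(x)]       # additive noise

policy_logits = nn.Parameter(torch.zeros(len(ops)))   # the "policy network", reduced to logits
target = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))

policy_opt = torch.optim.SGD([policy_logits], lr=0.1)
target_opt = torch.optim.SGD(target.parameters(), lr=0.01)

for step in range(5):
    x, y = torch.randn(16, 3, 32, 32), torch.randint(0, 10, (16,))
    dist = torch.distributions.Categorical(logits=policy_logits)
    op_idx = dist.sample()
    loss = F.cross_entropy(target(ops[op_idx](x)), y)

    target_opt.zero_grad()
    loss.backward()                                   # target learns from the harder examples
    target_opt.step()

    policy_opt.zero_grad()
    (-dist.log_prob(op_idx) * loss.detach()).backward()   # REINFORCE: raise the target loss
    policy_opt.step()
```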

BETANAS: BalancEd TrAining and selective drop for Neural Architecture Search

no code implementations • 24 Dec 2019 • Muyuan Fang, Qiang Wang, Zhao Zhong

Automatic neural architecture search techniques are becoming increasingly important in machine learning.

Neural Architecture Search

DyNet: Dynamic Convolution for Accelerating Convolution Neural Networks

no code implementations • 25 Sep 2019 • Kane Zhang, Jian Zhang, Qiang Wang, Zhao Zhong

To verify its scalability, we also apply DyNet to a segmentation task; the results show that DyNet can reduce FLOPs by 69.3% while maintaining the mean IoU.

IRLAS: Inverse Reinforcement Learning for Architecture Search

1 code implementation • CVPR 2019 • Minghao Guo, Zhao Zhong, Wei Wu, Dahua Lin, Junjie Yan

Motivated by the fact that human-designed networks are elegant in topology and fast at inference, we propose a mirror stimuli function, inspired by biological cognition theory, to extract the abstract topological knowledge of an expert human-designed network (ResNeXt).

Neural Architecture Search • reinforcement-learning +1

Synaptic Strength For Convolutional Neural Network

no code implementations • NeurIPS 2018 • Chen Lin, Zhao Zhong, Wei Wu, Junjie Yan

Inspired by the relevant concept in neural science literature, we propose Synaptic Pruning: a data-driven method to prune connections between input and output feature maps with a newly proposed class of parameters called Synaptic Strength.
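A rough sketch of the per-connection gating idea: every (input channel, output channel) pair in a convolution gets a learnable strength scalar, and low-strength connections are masked out at pruning time. The parameter names and pruning criterion are assumptions, not the paper's exact definition of Synaptic Strength.

```python
# Illustrative per-connection gating and pruning for a convolution layer.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SynapticConv2d(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size=3):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size, padding=kernel_size // 2, bias=False)
        self.strength = nn.Parameter(torch.ones(out_ch, in_ch))   # one scalar per connection
        self.register_buffer("mask", torch.ones(out_ch, in_ch))

    def forward(self, x):
        gate = (self.strength * self.mask)[:, :, None, None]      # broadcast over the kernel
        return F.conv2d(x, self.conv.weight * gate, padding=self.conv.padding[0])

    def prune(self, keep_ratio=0.5):
        """Keep only the strongest connections; the rest are masked to zero."""
        scores = self.strength.abs().flatten()
        threshold = scores.kthvalue(int((1 - keep_ratio) * scores.numel()) + 1).values
        self.mask.copy_((self.strength.abs() >= threshold).float())

layer = SynapticConv2d(16, 32)
layer.prune(keep_ratio=0.25)
print(layer(torch.randn(1, 16, 8, 8)).shape, int(layer.mask.sum()), "connections kept")
```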

BlockQNN: Efficient Block-wise Neural Network Architecture Generation

2 code implementations • 16 Aug 2018 • Zhao Zhong, Zichen Yang, Boyang Deng, Junjie Yan, Wei Wu, Jing Shao, Cheng-Lin Liu

The block-wise generation brings unique advantages: (1) it yields state-of-the-art results compared with hand-crafted networks on image classification; in particular, the best network generated by BlockQNN achieves a 2.35% top-1 error rate on CIFAR-10 (see the sketch below).

Image Classification • Q-Learning
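A toy illustration of the block-wise Q-learning idea: an agent assembles a block one layer at a time with epsilon-greedy tabular Q-learning, and the finished block's validation accuracy (stubbed out here) is the reward. The action set, block length, and reward stub are simplified assumptions rather than BlockQNN's structure-code encoding.

```python
# Toy block-wise Q-learning sketch: states are partial blocks, actions are
# layer choices, and the reward for a finished block is a placeholder.
import random
from collections import defaultdict

ACTIONS = ["conv3x3", "conv5x5", "maxpool", "identity", "terminal"]
MAX_LAYERS = 4

def evaluate_block(block):
    # placeholder reward: training and validating the candidate network would go here
    return random.random()

Q = defaultdict(float)                      # Q[(state, action)] -> value
epsilon, alpha, gamma = 0.2, 0.1, 1.0

for episode in range(200):
    state, block = (), []
    while len(block) < MAX_LAYERS:
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        if action == "terminal":
            break
        block.append(action)
        next_state = tuple(block)
        # intermediate reward is zero; only the finished block is scored
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (gamma * best_next - Q[(state, action)])
        state = next_state
    reward = evaluate_block(block)
    Q[(state, "terminal")] += alpha * (reward - Q[(state, "terminal")])

print("best first layer according to Q:", max(ACTIONS, key=lambda a: Q[((), a)]))
```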
