Current large-scale diffusion models represent a giant leap forward in conditional image synthesis, capable of interpreting diverse cues like text, human poses, and edges.
It is also effective for self-supervised learning (e.g., MAE).
Our method achieves state-of-the-art performance on ImageNet: 80.7% top-1 accuracy with 194M FLOPs.
Based on this discovery, we propose a new training method called FixNorm, which discards weight decay and directly controls the two mechanisms.
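The core idea of controlling weight norms directly, rather than letting weight decay shrink them implicitly, can be illustrated with a small helper that rescales a weight tensor to a fixed L2 norm after each update step. This is a minimal sketch under assumed details (which tensors are normalized, how the target norm is chosen); it is not the exact FixNorm procedure.

```python
import numpy as np

def fix_norm(weights, target_norm=1.0):
    # Rescale a weight tensor to a fixed L2 norm. With the norm held
    # constant, weight decay is unnecessary and only the direction of
    # the weights is learned. (Illustrative sketch, not the paper's
    # exact FixNorm procedure.)
    norm = np.linalg.norm(weights)
    if norm == 0.0:
        return weights  # leave all-zero tensors untouched
    return weights * (target_norm / norm)
```

In training, such a rescaling would be applied to each layer's weights after every optimizer step, with weight decay disabled.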
Neural network architecture design mostly focuses on new convolutional operators or special topological structures of network blocks; little attention is paid to the configuration in which the blocks are stacked, which we call the Block Stacking Style (BSS).
The augmentation policy network attempts to increase the training loss of a target network by generating adversarial augmentation policies, while the target network learns more robust features from these harder examples, improving generalization.
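This min-max interplay can be sketched with a toy experiment: a policy over augmentation strengths is rewarded for making the target's loss larger, while the target keeps minimizing its loss on the augmented data. The linear-regression setup, the noise-based "augmentations", and the REINFORCE-style logit update below are all illustrative assumptions, not the paper's actual architecture or policy space.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 1-D linear regression, true slope 3.
X = rng.normal(size=(64, 1))
y = 3.0 * X[:, 0]

w = np.zeros(1)                               # target model (linear, no bias)
magnitudes = np.array([0.0, 0.1, 0.3, 0.5])   # candidate augmentation strengths
policy_logits = np.zeros(len(magnitudes))     # adversarial policy over augmentations

def loss(w, X, y):
    return np.mean((X @ w - y) ** 2)

for step in range(200):
    # Policy (adversary) samples an augmentation strength.
    probs = np.exp(policy_logits) / np.exp(policy_logits).sum()
    idx = rng.choice(len(magnitudes), p=probs)
    X_aug = X + rng.normal(scale=magnitudes[idx], size=X.shape)

    # Target: one gradient step on the augmented (harder) batch.
    grad = 2 * X_aug.T @ (X_aug @ w - y) / len(y)
    w -= 0.05 * grad

    # Policy update: reinforce augmentations that raise the target's loss
    # relative to the clean loss (crude REINFORCE-style sketch).
    advantage = loss(w, X_aug, y) - loss(w, X, y)
    policy_logits[idx] += 0.1 * advantage
```

Despite the adversary, the target still recovers a slope near the true value, illustrating that harder examples regularize rather than destroy training.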
Automatic neural architecture search techniques are becoming increasingly important in machine learning.
To verify scalability, we also apply DyNet to a segmentation task; the results show that DyNet reduces FLOPs by 69.3% while maintaining the mean IoU.
Motivated by the fact that human-designed networks are topologically elegant and fast at inference, we propose a mirror stimuli function, inspired by biological cognition theory, to extract the abstract topological knowledge of an expert human-designed network (ResNeXt).
Inspired by the relevant concept in neural science literature, we propose Synaptic Pruning: a data-driven method to prune connections between input and output feature maps with a newly proposed class of parameters called Synaptic Strength.
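One way to picture pruning by Synaptic Strength: each input-to-output feature-map connection carries one strength parameter, and the weakest connections are masked out. The ranking-by-magnitude criterion and the fixed keep ratio below are assumptions for illustration; the paper learns the strengths during training and may prune differently.

```python
import numpy as np

def prune_by_strength(strengths, keep_ratio=0.5):
    # strengths: array of shape (out_channels, in_channels), one value per
    # input->output feature-map connection ("Synaptic Strength").
    # Returns a 0/1 mask keeping the strongest keep_ratio fraction of
    # connections; the mask can be broadcast onto the convolution kernels.
    # (Illustrative sketch, not the paper's exact criterion.)
    flat = np.abs(strengths).ravel()
    k = int(len(flat) * keep_ratio)
    if k <= 0:
        return np.zeros_like(strengths, dtype=np.float32)
    threshold = np.sort(flat)[::-1][k - 1]  # k-th largest magnitude
    return (np.abs(strengths) >= threshold).astype(np.float32)
```

Note that ties at the threshold may keep slightly more than the requested fraction; a production pruner would break ties explicitly.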
The block-wise generation brings unique advantages: (1) it yields state-of-the-art results compared to hand-crafted networks on image classification; in particular, the best network generated by BlockQNN achieves a 2.35% top-1 error rate on CIFAR-10.