Search Results for author: An Xiao

Found 7 papers, 5 papers with code

Transformer in Transformer

12 code implementations NeurIPS 2021 Kai Han, An Xiao, Enhua Wu, Jianyuan Guo, Chunjing Xu, Yunhe Wang

In this paper, we point out that the attention inside these local patches is also essential for building visual transformers with high performance, and we explore a new architecture, namely, Transformer iN Transformer (TNT).

Fine-Grained Image Classification · Sentence
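The inner/outer attention idea behind TNT can be illustrated with a toy sketch in plain Python: an inner transformer attends among the sub-patch (pixel-level) tokens inside each patch, and an outer transformer attends among the resulting patch embeddings. Everything here is a simplification for illustration (single head, no projections, residuals, or layer norm; folding by flattening), not the paper's implementation:

```python
import math

def attention(x):
    """Toy single-head self-attention where queries = keys = values = x.
    x is a list of token vectors (lists of floats)."""
    d = len(x[0])
    out = []
    for q in x:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in x]
        m = max(scores)
        exps = [math.exp(s - m) for s in scores]
        total = sum(exps)
        weights = [e / total for e in exps]
        out.append([sum(w * v[j] for w, v in zip(weights, x)) for j in range(d)])
    return out

# A "patch" holds 4 sub-patch tokens of dim 2; the toy image has 3 patches.
image = [[[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]] for _ in range(3)]

# Inner transformer: attention among the sub-patches inside each patch.
inner = [attention(patch) for patch in image]

# Fold each patch's sub-patch tokens into one patch embedding (here: flatten).
patch_tokens = [[v for sub in patch for v in sub] for patch in inner]

# Outer transformer: attention among patch embeddings, as in a standard ViT.
outer = attention(patch_tokens)
```

The key point the snippet captures is that sub-patch tokens interact before patch-level mixing, so fine local structure is modeled explicitly rather than being flattened away.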

Augmented Shortcuts for Vision Transformers

4 code implementations NeurIPS 2021 Yehui Tang, Kai Han, Chang Xu, An Xiao, Yiping Deng, Chao Xu, Yunhe Wang

Transformer models have achieved great progress on computer vision tasks recently.

Circumventing Outliers of AutoAugment with Knowledge Distillation

1 code implementation ECCV 2020 Longhui Wei, An Xiao, Lingxi Xie, Xin Chen, Xiaopeng Zhang, Qi Tian

AutoAugment has been a powerful algorithm that improves the accuracy of many vision tasks, yet it is sensitive to the operator space as well as hyper-parameters, and an improper setting may degenerate network optimization.

Data Augmentation · General Classification +2
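The role knowledge distillation plays here — softening the training signal so that a heavily augmented "outlier" image is not forced onto a possibly wrong hard label — can be sketched as a blended loss: cross-entropy on the hard label mixed with cross-entropy against a teacher's temperature-softened distribution. The weight `lam` and temperature `T` below are hypothetical placeholders, not the paper's settings (standard KD also scales the soft term by T², omitted here for brevity):

```python
import math

def softmax(logits, temperature=1.0):
    m = max(logits)
    exps = [math.exp((z - m) / temperature) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def kd_loss(student_logits, teacher_logits, hard_label, lam=0.5, T=2.0):
    """Blend cross-entropy on the (possibly outlier) hard label with
    cross-entropy against the teacher's softened prediction."""
    p = softmax(student_logits)
    ce_hard = -math.log(p[hard_label])
    q = softmax(teacher_logits, T)          # teacher soft targets
    p_T = softmax(student_logits, T)        # student at same temperature
    ce_soft = -sum(qi * math.log(pi) for qi, pi in zip(q, p_T))
    return (1 - lam) * ce_hard + lam * ce_soft

loss = kd_loss([2.0, 0.5, -1.0], [1.5, 1.0, -0.5], hard_label=0)
```

With `lam=0.0` this reduces to ordinary cross-entropy; increasing `lam` shifts trust from the augmented hard label toward the teacher.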

A Survey on Visual Transformer

no code implementations 23 Dec 2020 Kai Han, Yunhe Wang, Hanting Chen, Xinghao Chen, Jianyuan Guo, Zhenhua Liu, Yehui Tang, An Xiao, Chunjing Xu, Yixing Xu, Zhaohui Yang, Yiman Zhang, DaCheng Tao

Transformer, first applied to the field of natural language processing, is a type of deep neural network mainly based on the self-attention mechanism.

Image Classification · Inductive Bias
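The self-attention mechanism the survey centers on is scaled dot-product attention, Attention(Q, K, V) = softmax(QKᵀ/√d_k)V. A minimal plain-Python version with explicit query/key/value inputs (single head, toy dimensions, illustration only):

```python
import math

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V, in plain Python.
    Q, K, V are lists of vectors (lists of floats)."""
    d_k = len(K[0])
    out = []
    for q in Q:
        scores = [sum(a * b for a, b in zip(q, k)) / math.sqrt(d_k) for k in K]
        m = max(scores)
        w = [math.exp(s - m) for s in scores]
        z = sum(w)
        w = [x / z for x in w]
        out.append([sum(wi * v[j] for wi, v in zip(w, V)) for j in range(len(V[0]))])
    return out

# One query attending over two key/value pairs: the query matches the
# first key more strongly, so the output leans toward the first value.
Q = [[1.0, 0.0]]
K = [[1.0, 0.0], [0.0, 1.0]]
V = [[10.0, 0.0], [0.0, 10.0]]
result = scaled_dot_product_attention(Q, K, V)
```

Because the softmax weights sum to one, each output is a convex combination of the value vectors, weighted by query–key similarity.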

Greedy Network Enlarging

1 code implementation 31 Jul 2021 Chuanjian Liu, Kai Han, An Xiao, Yiping Deng, Wei Zhang, Chunjing Xu, Yunhe Wang

Recent studies on deep convolutional neural networks present a simple paradigm of architecture design, i.e., models with more MACs typically achieve better accuracy, such as EfficientNet and RegNet.
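A greedy enlarging loop of the kind the title suggests — repeatedly spending a MACs budget on whichever enlargement step currently offers the best estimated gain per MAC — can be sketched as follows. The step names, per-step MAC costs, and the diminishing-returns gain proxy are made-up placeholders for illustration, not the paper's search procedure:

```python
def greedy_enlarge(budget, steps, gain_fn):
    """Greedily pick the enlargement step with the best gain per MAC
    until no affordable step remains within the MACs budget."""
    config = {name: 0 for name, _ in steps}
    spent = 0
    while True:
        best = None
        for name, cost in steps:
            if spent + cost > budget:
                continue  # this step no longer fits in the budget
            score = gain_fn(name, config) / cost
            if best is None or score > best[2]:
                best = (name, cost, score)
        if best is None:
            return config, spent
        name, cost, _ = best
        config[name] += 1
        spent += cost

# Hypothetical per-step MAC costs and a diminishing-returns gain proxy.
steps = [("depth", 3), ("width", 2), ("resolution", 4)]
gain = lambda name, cfg: 1.0 / (1 + cfg[name])
config, spent = greedy_enlarge(10, steps, gain)
```

In a real search the gain estimate would come from training or evaluating candidate models; here it is a fixed formula so the loop's mechanics stay visible.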

Network Amplification With Efficient MACs Allocation

2 code implementations Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops 2022 Chuanjian Liu, Kai Han, An Xiao, Ying Nie, Wei Zhang, Yunhe Wang

In particular, when the proposed method is used to enlarge models sourced from GhostNet, we achieve state-of-the-art 80.9% and 84.3% ImageNet top-1 accuracies under the settings of 600M and 4.4B MACs, respectively.
