Search Results for author: Yehui Tang

Found 21 papers, 14 papers with code

Vision GNN: An Image is Worth Graph of Nodes

4 code implementations · 1 Jun 2022 · Kai Han, Yunhe Wang, Jianyuan Guo, Yehui Tang, Enhua Wu

In this paper, we propose to represent the image as a graph structure and introduce a new Vision GNN (ViG) architecture to extract graph-level features for visual tasks.

Object Detection
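The abstract above treats image patches as graph nodes. A minimal sketch of that idea (not the paper's exact construction; the toy 2-D patch features and `k` are illustrative): connect each patch to its k nearest neighbours in feature space.

```python
# Hypothetical sketch of ViG-style graph construction: image patches become
# nodes, and each node links to its k nearest neighbours in feature space.
# Patch features here are toy 2-D vectors for illustration only.
import math

def knn_graph(features, k):
    """Return an adjacency list linking each patch to its k nearest patches."""
    edges = {}
    for i, fi in enumerate(features):
        dists = []
        for j, fj in enumerate(features):
            if i == j:
                continue
            dists.append((math.dist(fi, fj), j))
        dists.sort()
        edges[i] = [j for _, j in dists[:k]]
    return edges

patches = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.1, 5.0)]
graph = knn_graph(patches, k=1)
# Nearby patches link to each other: node 0 -> 1, node 2 -> 3.
```

A graph neural network would then aggregate features along these edges instead of over a fixed grid.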

Source-Free Domain Adaptation via Distribution Estimation

no code implementations · CVPR 2022 · Ning Ding, Yixing Xu, Yehui Tang, Chao Xu, Yunhe Wang, DaCheng Tao

Domain Adaptation aims to transfer the knowledge learned from a labeled source domain to an unlabeled target domain whose data distributions are different.

Domain Adaptation · Privacy Preserving

From Quantum Graph Computing to Quantum Graph Learning: A Survey

no code implementations · 19 Feb 2022 · Yehui Tang, Junchi Yan, Hancock Edwin

Quantum computing (QC) is a new computational paradigm whose foundations relate to quantum physics.

Graph Learning

PyramidTNT: Improved Transformer-in-Transformer Baselines with Pyramid Architecture

1 code implementation · 4 Jan 2022 · Kai Han, Jianyuan Guo, Yehui Tang, Yunhe Wang

We hope this new baseline will be helpful to further research and applications of vision transformers.

An Image Patch is a Wave: Phase-Aware Vision MLP

4 code implementations · CVPR 2022 · Yehui Tang, Kai Han, Jianyuan Guo, Chang Xu, Yanxi Li, Chao Xu, Yunhe Wang

To dynamically aggregate tokens, we propose to represent each token as a wave function with two parts, amplitude and phase.

Image Classification · Object Detection · +2
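The abstract above represents each token as a wave with an amplitude and a phase. An illustrative sketch of why phase matters when aggregating (this is the physical intuition, not the paper's exact formulation): tokens with aligned phases reinforce each other, while opposite phases cancel.

```python
# Illustrative sketch: each token is a wave a * e^(i*theta). Aggregation sums
# the complex values, so the phase controls how much tokens reinforce or
# cancel. All amplitudes and phases below are toy values.
import cmath
import math

def aggregate(amplitudes, phases):
    """Sum tokens as complex waves and return the resulting amplitude."""
    total = sum(a * cmath.exp(1j * t) for a, t in zip(amplitudes, phases))
    return abs(total)

in_phase = aggregate([1.0, 1.0], [0.0, 0.0])          # constructive: 2.0
out_of_phase = aggregate([1.0, 1.0], [0.0, math.pi])  # destructive: ~0.0
```

In a phase-aware MLP, the phases would be predicted from the input so the network can modulate token mixing dynamically.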

Hire-MLP: Vision MLP via Hierarchical Rearrangement

3 code implementations · CVPR 2022 · Jianyuan Guo, Yehui Tang, Kai Han, Xinghao Chen, Han Wu, Chao Xu, Chang Xu, Yunhe Wang

Previous vision MLPs such as MLP-Mixer and ResMLP accept linearly flattened image patches as input, making them inflexible for different input sizes and hard to capture spatial information.

Image Classification · Object Detection · +2

Homogeneous Architecture Augmentation for Neural Predictor

1 code implementation · ICCV 2021 · Yuqiao Liu, Yehui Tang, Yanan Sun

Specifically, a homogeneous architecture augmentation algorithm is proposed in HAAP to generate sufficient training data by making use of the homogeneous representation.

Neural Architecture Search

CMT: Convolutional Neural Networks Meet Vision Transformers

8 code implementations · CVPR 2022 · Jianyuan Guo, Kai Han, Han Wu, Yehui Tang, Xinghao Chen, Yunhe Wang, Chang Xu

Vision transformers have been successfully applied to image recognition tasks due to their ability to capture long-range dependencies within an image.

Augmented Shortcuts for Vision Transformers

3 code implementations · NeurIPS 2021 · Yehui Tang, Kai Han, Chang Xu, An Xiao, Yiping Deng, Chao Xu, Yunhe Wang

Transformer models have achieved great progress on computer vision tasks recently.

ReNAS: Relativistic Evaluation of Neural Architecture Search

2 code implementations · CVPR 2021 · Yixing Xu, Yunhe Wang, Kai Han, Yehui Tang, Shangling Jui, Chunjing Xu, Chang Xu

An effective and efficient architecture performance evaluation scheme is essential for the success of Neural Architecture Search (NAS).

Neural Architecture Search

Patch Slimming for Efficient Vision Transformers

no code implementations · CVPR 2022 · Yehui Tang, Kai Han, Yunhe Wang, Chang Xu, Jianyuan Guo, Chao Xu, DaCheng Tao

We first identify the effective patches in the last layer and then use them to guide the patch selection process of previous layers.
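The abstract above describes a top-down selection: pick effective patches in the last layer, then let those choices guide earlier layers. A minimal sketch under that assumption (the per-patch scores, `keep` budget, and function name are all illustrative, not the paper's actual criterion):

```python
# Hypothetical top-down patch selection: keep the highest-scoring patches in
# the last layer, then require each earlier layer to retain at least those
# patches, since earlier layers feed the survivors of later ones.
def select_patches(scores_per_layer, keep):
    """scores_per_layer: list of per-patch importance scores, first -> last."""
    kept = None
    plan = []
    for scores in reversed(scores_per_layer):
        ranked = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
        layer_keep = set(ranked[:keep])
        if kept is not None:
            layer_keep |= kept  # earlier layers also keep later survivors
        plan.append(sorted(layer_keep))
        kept = layer_keep
    plan.reverse()
    return plan

scores = [[0.9, 0.1, 0.5, 0.4], [0.2, 0.8, 0.6, 0.1]]
plan = select_patches(scores, keep=2)
# Last layer keeps patches [1, 2]; the first layer keeps those plus its own top-2.
```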

Vision Transformer Pruning

2 code implementations · 17 Apr 2021 · Mingjian Zhu, Yehui Tang, Kai Han

Vision transformer has achieved competitive performance on a variety of computer vision applications.

Manifold Regularized Dynamic Network Pruning

2 code implementations · CVPR 2021 · Yehui Tang, Yunhe Wang, Yixing Xu, Yiping Deng, Chao Xu, DaCheng Tao, Chang Xu

Then, the manifold relationship between instances and the pruned sub-networks will be aligned in the training procedure.

Network Pruning

Learning Frequency Domain Approximation for Binary Neural Networks

1 code implementation · NeurIPS 2021 · Yixing Xu, Kai Han, Chang Xu, Yehui Tang, Chunjing Xu, Yunhe Wang

Binary neural networks (BNNs) represent the original full-precision weights and activations as 1-bit values using the sign function.
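The sign-function binarization mentioned above can be sketched in a few lines. This is the generic BNN recipe (sign plus a per-tensor scaling factor), not this paper's frequency-domain method, and the toy weights are illustrative:

```python
# Minimal sketch of 1-bit weight binarization: keep only the sign of each
# weight, plus a per-tensor scale (mean absolute value) commonly used to
# reduce the quantization error.
def binarize(weights):
    scale = sum(abs(w) for w in weights) / len(weights)
    signs = [1.0 if w >= 0 else -1.0 for w in weights]
    return scale, signs

scale, signs = binarize([0.5, -0.25, 0.75, -1.0])
# scale = 0.625, signs = [1.0, -1.0, 1.0, -1.0]
```

The gradient of sign() is zero almost everywhere, which is why training BNNs needs an approximation; this paper's contribution is a frequency-domain approximation of that gradient.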

A Survey on Vision Transformer

no code implementations · 23 Dec 2020 · Kai Han, Yunhe Wang, Hanting Chen, Xinghao Chen, Jianyuan Guo, Zhenhua Liu, Yehui Tang, An Xiao, Chunjing Xu, Yixing Xu, Zhaohui Yang, Yiman Zhang, DaCheng Tao

Transformer, first applied to the field of natural language processing, is a type of deep neural network mainly based on the self-attention mechanism.

Image Classification · Inductive Bias · +1
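The self-attention mechanism this survey covers reduces to a short computation. A textbook scaled dot-product sketch in plain Python (real models run this over batches and multiple heads; the 2-D toy matrices are illustrative):

```python
# Textbook scaled dot-product self-attention: each query attends to all keys,
# the scores are softmax-normalized, and the output mixes the value rows.
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attention(Q, K, V):
    d = len(Q[0])
    out = []
    for q in Q:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in K]
        w = softmax(scores)
        out.append([sum(wi * v[j] for wi, v in zip(w, V)) for j in range(len(V[0]))])
    return out

Q = K = V = [[1.0, 0.0], [0.0, 1.0]]
out = attention(Q, K, V)
# Each output row is a convex mix of the value rows; query [1, 0] attends
# mostly to the matching key [1, 0].
```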

SCOP: Scientific Control for Reliable Neural Network Pruning

4 code implementations · NeurIPS 2020 · Yehui Tang, Yunhe Wang, Yixing Xu, DaCheng Tao, Chunjing Xu, Chao Xu, Chang Xu

To increase the reliability of the results, we prefer to have a more rigorous research design by including a scientific control group as an essential part to minimize the effect of all factors except the association between the filter and expected network output.

Network Pruning

A Semi-Supervised Assessor of Neural Architectures

no code implementations · CVPR 2020 · Yehui Tang, Yunhe Wang, Yixing Xu, Hanting Chen, Chunjing Xu, Boxin Shi, Chao Xu, Qi Tian, Chang Xu

A graph convolutional neural network is introduced to predict the performance of architectures based on the learned representations and their relation modeled by the graph.

Neural Architecture Search

Beyond Dropout: Feature Map Distortion to Regularize Deep Neural Networks

2 code implementations · 23 Feb 2020 · Yehui Tang, Yunhe Wang, Yixing Xu, Boxin Shi, Chao Xu, Chunjing Xu, Chang Xu

On one hand, massive trainable parameters significantly enhance the performance of these deep networks.

ReNAS: Relativistic Evaluation of Neural Architecture Search

3 code implementations · 30 Sep 2019 · Yixing Xu, Yunhe Wang, Kai Han, Yehui Tang, Shangling Jui, Chunjing Xu, Chang Xu

An effective and efficient architecture performance evaluation scheme is essential for the success of Neural Architecture Search (NAS).

Neural Architecture Search

Bringing Giant Neural Networks Down to Earth with Unlabeled Data

no code implementations · 13 Jul 2019 · Yehui Tang, Shan You, Chang Xu, Boxin Shi, Chao Xu

Specifically, we exploit the unlabeled data to mimic the classification characteristics of giant networks, so that the original capacity can be preserved nicely.
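The abstract above describes mimicking a large network's outputs on unlabeled data. A hedged toy sketch of that teacher-student idea (both "models" here are 1-D linear maps with illustrative names and values, far simpler than the paper's setting):

```python
# Toy sketch of output mimicking on unlabeled data: a small student is fit to
# reproduce a large teacher's outputs, so no labels are required. Both models
# are scalar linear maps; the learning rate and step count are arbitrary.
def distill(teacher_w, inputs, steps=200, lr=0.1):
    student_w = 0.0
    for _ in range(steps):
        for x in inputs:
            err = student_w * x - teacher_w * x  # mismatch with the teacher
            student_w -= lr * err * x            # gradient of squared error
    return student_w

w = distill(teacher_w=2.0, inputs=[0.5, 1.0, -0.8])
# The student converges toward the teacher's weight (~2.0) using inputs only.
```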
