no code implementations • ICML 2020 • Yanxi Li, Minjing Dong, Yunhe Wang, Chang Xu
This paper searches for the optimal neural architecture by minimizing a proxy of validation loss.
2 code implementations • 20 Sep 2023 • Chengcheng Wang, Wei He, Ying Nie, Jianyuan Guo, Chuanjian Liu, Kai Han, Yunhe Wang
In recent years, YOLO-series models have emerged as the leading approaches for real-time object detection.
Ranked #3 on Object Detection on COCO 2017 val
no code implementations • 10 Aug 2023 • Quan Tang, Chuanjian Liu, Fagui Liu, Yifan Liu, Jun Jiang, BoWen Zhang, Kai Han, Yunhe Wang
Aggregation of multi-stage features has been revealed to play a significant role in semantic segmentation.
2 code implementations • 24 Jul 2023 • Dehua Zheng, Wenhui Dong, Hailin Hu, Xinghao Chen, Yunhe Wang
DETR-like models have significantly boosted the performance of detectors and even outperformed classical convolutional models.
no code implementations • 26 Jun 2023 • Kai Han, Yunhe Wang, Jianyuan Guo, Enhua Wu
The proposed ParameterNet scheme enables low-FLOPs networks to benefit from large-scale visual pretraining.
2 code implementations • 14 Jun 2023 • Mingjian Zhu, Hanting Chen, Qiangyu Yan, Xudong Huang, GuanYu Lin, Wei Li, Zhijun Tu, Hailin Hu, Jie Hu, Yunhe Wang
The aforementioned advantages allow the detectors trained on GenImage to undergo a thorough evaluation and demonstrate strong applicability to diverse images.
1 code implementation • 1 Jun 2023 • Ning Ding, Yehui Tang, Zhongqian Fu, Chao Xu, Kai Han, Yunhe Wang
We present a new learning paradigm in which the knowledge extracted from large pre-trained models is utilized to help models such as CNNs and ViTs learn enhanced representations and achieve better performance.
3 code implementations • 29 May 2023 • Yuchuan Tian, Hanting Chen, Xutao Wang, Zheyuan Bai, Qinghua Zhang, Ruifeng Li, Chao Xu, Yunhe Wang
In this PU context, we propose the length-sensitive Multiscale PU Loss, where a recurrent model is used in abstraction to estimate positive priors of scale-variant corpora.
1 code implementation • 25 May 2023 • Zhiwei Hao, Jianyuan Guo, Kai Han, Han Hu, Chang Xu, Yunhe Wang
The tremendous success of large models trained on extensive datasets demonstrates that scale is a key ingredient in achieving superior results.
3 code implementations • 22 May 2023 • Hanting Chen, Yunhe Wang, Jianyuan Guo, DaCheng Tao
In this study, we introduce VanillaNet, a neural network architecture that embraces elegance in design.
1 code implementation • CVPR 2023 • Haoqing Wang, Yehui Tang, Yunhe Wang, Jianyuan Guo, Zhi-Hong Deng, Kai Han
The lower layers are not explicitly guided and the interaction among their patches is only used for calculating new activations.
1 code implementation • CVPR 2023 • Ning Ding, Yehui Tang, Kai Han, Chao Xu, Yunhe Wang
Recently, the sizes of deep neural networks and training datasets both increase drastically to pursue better performance in a practical sense.
1 code implementation • CVPR 2023 • Zhijun Tu, Jie Hu, Hanting Chen, Yunhe Wang
In this paper, we study post-training quantization (PTQ) for image super-resolution using only a few unlabeled calibration images.
no code implementations • CVPR 2023 • Xudong Huang, Wei Li, Jie Hu, Hanting Chen, Yunhe Wang
We present Reference-guided Super-Resolution Neural Radiance Field (RefSR-NeRF) that extends NeRF to super resolution and photorealistic novel view synthesis.
2 code implementations • 29 Dec 2022 • Yixing Xu, Xinghao Chen, Yunhe Wang
This paper studies the problem of designing compact binary architectures for vision multi-layer perceptrons (MLPs).
no code implementations • 20 Dec 2022 • Ying Nie, Kai Han, Haikang Diao, Chuanjian Liu, Enhua Wu, Yunhe Wang
To this end, we first thoroughly analyze the difference between the distributions of weights and activations in AdderNet and then propose a new quantization algorithm that redistributes the weights and the activations.
1 code implementation • 13 Dec 2022 • Jianyuan Guo, Kai Han, Han Wu, Yehui Tang, Yunhe Wang, Chang Xu
This paper presents FastMIM, a simple and generic framework for expediting masked image modeling with the following two steps: (i) pre-training vision backbones with low-resolution input images; and (ii) reconstructing Histograms of Oriented Gradients (HOG) feature instead of original RGB values of the input images.
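For intuition about the second FastMIM ingredient, here is a minimal sketch (not the paper's code) of turning a low-resolution crop into a HOG descriptor that serves as the reconstruction target; the hog() settings and function name are illustrative assumptions.

```python
# A small sketch: compute HOG features of a (low-resolution) image to use as the
# masked-image-modeling target instead of raw RGB. Settings are illustrative and
# may differ from the paper's configuration; requires scikit-image >= 0.19.
import numpy as np
from skimage.feature import hog

def hog_target(image, cell=8):
    # image: (H, W, 3) array -> flat HOG descriptor used as the MIM target
    return hog(image, orientations=9, pixels_per_cell=(cell, cell),
               cells_per_block=(1, 1), channel_axis=-1, feature_vector=True)

img = np.random.rand(128, 128, 3)   # stands in for a low-resolution crop
target = hog_target(img)
print(target.shape)                 # per-cell orientation histograms
```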
11 code implementations • 23 Nov 2022 • Yehui Tang, Kai Han, Jianyuan Guo, Chang Xu, Chao Xu, Yunhe Wang
The convolutional operation can only capture local information in a window region, which prevents performance from being further improved.
2 code implementations • NeurIPS 2022 • Yuqiao Liu, Yehui Tang, Zeqiong Lv, Yunhe Wang, Yanan Sun
To solve this issue, we propose a Cross-Domain Predictor (CDP), which is trained based on the existing NAS benchmark datasets (e.g., NAS-Bench-101), but can be used to find high-performance architectures in large-scale search spaces.
3 code implementations • 17 Aug 2022 • Zhijun Tu, Xinghao Chen, Pengju Ren, Yunhe Wang
Since modern deep neural networks adopt sophisticated, complex architectures in pursuit of accuracy, the distributions of their weights and activations are highly diverse.
1 code implementation • International Conference on Machine Learning 2022 • Yanxi Li, Xinghao Chen, Minjing Dong, Yehui Tang, Yunhe Wang, Chang Xu
Recently, neural architectures with all Multi-layer Perceptrons (MLPs) have attracted great research interest from the computer vision community.
Ranked #450 on Image Classification on ImageNet
1 code implementation • Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops 2022 • Chuanjian Liu, Kai Han, An Xiao, Ying Nie, Wei Zhang, Yunhe Wang
In particular, when the proposed method is used to enlarge models sourced from GhostNet, we achieve state-of-the-art 80.9% and 84.3% ImageNet top-1 accuracies under 600M and 4.4B MACs, respectively.
8 code implementations • 1 Jun 2022 • Kai Han, Yunhe Wang, Jianyuan Guo, Yehui Tang, Enhua Wu
In this paper, we propose to represent the image as a graph structure and introduce a new Vision GNN (ViG) architecture to extract graph-level features for visual tasks.
Ranked #334 on Image Classification on ImageNet
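As a rough illustration of the graph-based representation described above, the following sketch connects patch features with a k-nearest-neighbour graph and aggregates neighbours with a simple max-relative operation; the function name and choice of k are assumptions, not the released ViG code.

```python
# A minimal sketch: treat image patches as graph nodes, connect each to its k
# nearest neighbours in feature space, and aggregate relative neighbour features.
import torch

def knn_graph_aggregate(x, k=9):
    # x: (N, D) patch features -> (N, 2D) features enriched with neighbour info
    dist = torch.cdist(x, x)                                  # pairwise distances
    idx = dist.topk(k + 1, largest=False).indices[:, 1:]      # k neighbours, drop self
    neighbours = x[idx]                                       # (N, k, D)
    relative = neighbours - x.unsqueeze(1)                    # max-relative aggregation
    return torch.cat([x, relative.max(dim=1).values], dim=-1)

patches = torch.randn(196, 192)                               # e.g. 14x14 patches
print(knn_graph_aggregate(patches).shape)                     # torch.Size([196, 384])
```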
no code implementations • CVPR 2022 • Ning Ding, Yixing Xu, Yehui Tang, Chao Xu, Yunhe Wang, DaCheng Tao
Domain Adaptation aims to transfer the knowledge learned from a labeled source domain to an unlabeled target domain whose data distributions are different.
7 code implementations • CVPR 2022 • Yikai Wang, Xinghao Chen, Lele Cao, Wenbing Huang, Fuchun Sun, Yunhe Wang
Many adaptations of transformers have emerged to address the single-modal vision tasks, where self-attention modules are stacked to handle input sources like images.
Ranked #1 on Semantic Segmentation on SUN-RGBD
no code implementations • 28 Mar 2022 • Min Zhong, Xinghao Chen, Xiaokang Chen, Gang Zeng, Yunhe Wang
For instance, our approach achieves 66.4% mAP at the 0.5 IoU threshold on the ScanNetV2 test set, which is 1.9% higher than the state-of-the-art method.
Ranked #4 on 3D Instance Segmentation on S3DIS
2 code implementations • CVPR 2022 • Wenshuo Li, Hanting Chen, Jianyuan Guo, Ziyang Zhang, Yunhe Wang
However, due to the simplicity of their structures, their performance depends heavily on the mechanism for communicating local features.
1 code implementation • 27 Jan 2022 • Weijun Hong, Guilin Li, Weinan Zhang, Ruiming Tang, Yunhe Wang, Zhenguo Li, Yong Yu
Neural architecture search (NAS) has shown encouraging results in automating the architecture design.
4 code implementations • 10 Jan 2022 • Kai Han, Yunhe Wang, Chang Xu, Jianyuan Guo, Chunjing Xu, Enhua Wu, Qi Tian
The proposed C-Ghost module can be taken as a plug-and-play component to upgrade existing convolutional neural networks.
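As a rough sketch of a Ghost-style plug-and-play module (an illustration, not the authors' released implementation): a small primary convolution produces intrinsic feature maps, and cheap depthwise convolutions generate additional "ghost" maps that are concatenated. The layer sizes below are illustrative assumptions.

```python
# A Ghost-style module sketch: half the output channels come from a cheap
# pointwise conv, the other half from depthwise "cheap operations" on them.
import torch
import torch.nn as nn

class GhostModule(nn.Module):
    def __init__(self, in_ch, out_ch, ratio=2, dw_kernel=3):
        super().__init__()
        init_ch = out_ch // ratio                  # intrinsic channels
        new_ch = out_ch - init_ch                  # ghost channels
        self.primary = nn.Sequential(
            nn.Conv2d(in_ch, init_ch, 1, bias=False),
            nn.BatchNorm2d(init_ch), nn.ReLU(inplace=True))
        self.cheap = nn.Sequential(                # depthwise cheap operation
            nn.Conv2d(init_ch, new_ch, dw_kernel, padding=dw_kernel // 2,
                      groups=init_ch, bias=False),
            nn.BatchNorm2d(new_ch), nn.ReLU(inplace=True))

    def forward(self, x):
        y = self.primary(x)
        return torch.cat([y, self.cheap(y)], dim=1)

print(GhostModule(16, 32)(torch.randn(1, 16, 56, 56)).shape)  # (1, 32, 56, 56)
```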
1 code implementation • 4 Jan 2022 • Kai Han, Jianyuan Guo, Yehui Tang, Yunhe Wang
We hope this new baseline will be helpful to the further research and application of vision transformer.
3 code implementations • CVPR 2022 • Zhenhua Liu, Yunhe Wang, Kai Han, Siwei Ma, Wen Gao
However, natural images are highly diverse with abundant content, and using a single universal quantization configuration for all samples is not an optimal strategy.
no code implementations • NeurIPS 2021 • Xinghao Chen, Chang Xu, Minjing Dong, Chunjing Xu, Yunhe Wang
Adder neural networks (AdderNets) have shown impressive performance on image classification with only addition operations, which are more energy efficient than traditional convolutional neural networks built with multiplications.
1 code implementation • NeurIPS 2021 • Han Shu, Jiahao Wang, Hanting Chen, Lin Li, Yujiu Yang, Yunhe Wang
With the new operation, vision transformers constructed using additions can also provide powerful feature representations.
no code implementations • NeurIPS 2021 • Minjing Dong, Yunhe Wang, Xinghao Chen, Chang Xu
Adder neural networks (ANNs) are designed for low energy cost: they replace the expensive multiplications in convolutional neural networks (CNNs) with cheaper additions, yielding energy-efficient networks and hardware accelerators.
no code implementations • NeurIPS 2021 • Minjing Dong, Yunhe Wang, Xinghao Chen, Chang Xu
Adder neural networks (AdderNets) replace the massive multiplications in ordinary convolutions with cheap additions while achieving comparable performance, thus yielding a series of energy-efficient neural networks.
8 code implementations • CVPR 2022 • Yehui Tang, Kai Han, Jianyuan Guo, Chang Xu, Yanxi Li, Chao Xu, Yunhe Wang
To dynamically aggregate tokens, we propose to represent each token as a wave function with two parts, amplitude and phase.
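To make the wave representation concrete, below is a toy sketch in which each token's features act as amplitudes, a learned phase expands them into cosine and sine components, and tokens are then mixed with plain linear layers; the module name and mixing details are simplified assumptions, not the paper's exact formulation.

```python
# A toy phase-aware token-mixing block: tokens are expanded into wave-like
# real/imaginary parts before being mixed across the token dimension.
import torch
import torch.nn as nn

class PhaseAwareTokenMixing(nn.Module):
    def __init__(self, num_tokens, dim):
        super().__init__()
        self.phase = nn.Linear(dim, dim)               # per-token phase estimate
        self.mix_real = nn.Linear(num_tokens, num_tokens)
        self.mix_imag = nn.Linear(num_tokens, num_tokens)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):                              # x: (B, N, D) amplitudes
        theta = self.phase(x)                          # (B, N, D) phases
        real = x * torch.cos(theta)                    # wave real part
        imag = x * torch.sin(theta)                    # wave imaginary part
        # mix across the token dimension, then recombine both parts
        mixed = self.mix_real(real.transpose(1, 2)) + self.mix_imag(imag.transpose(1, 2))
        return self.proj(mixed.transpose(1, 2))

print(PhaseAwareTokenMixing(196, 64)(torch.randn(2, 196, 64)).shape)
```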
1 code implementation • 27 Oct 2021 • Xubin Wang, Yunhe Wang, Ka-Chun Wong, Xiangtao Li
We demonstrate the effectiveness of our algorithm on twelve large-scale datasets.
no code implementations • 29 Sep 2021 • Xinyang Lin, Hanting Chen, Yixing Xu, Chao Xu, Xiaolin Gui, Yiping Deng, Yunhe Wang
We study the problem of learning from positive and unlabeled (PU) data in the federated setting, where each client labels only a small portion of its dataset due to limited resources and time.
no code implementations • 20 Sep 2021 • Kai Han, Yunhe Wang, Chang Xu, Chunjing Xu, Enhua Wu, DaCheng Tao
A series of secondary filters can be derived from a primary filter with the help of binary masks.
7 code implementations • CVPR 2022 • Jianyuan Guo, Yehui Tang, Kai Han, Xinghao Chen, Han Wu, Chao Xu, Chang Xu, Yunhe Wang
Previous vision MLPs such as MLP-Mixer and ResMLP accept linearly flattened image patches as input, making them inflexible for different input sizes and hard to capture spatial information.
no code implementations • NeurIPS 2021 • Yanxi Li, Zhaohui Yang, Yunhe Wang, Chang Xu
With the tremendous advances in the architecture and scale of convolutional neural networks (CNNs) over the past few decades, they can easily reach or even exceed the performance of humans in certain tasks.
1 code implementation • 31 Jul 2021 • Chuanjian Liu, Kai Han, An Xiao, Yiping Deng, Wei Zhang, Chunjing Xu, Yunhe Wang
Recent studies on deep convolutional neural networks present a simple paradigm of architecture design, i.e., models with more MACs typically achieve better accuracy, such as EfficientNet and RegNet.
11 code implementations • CVPR 2022 • Jianyuan Guo, Kai Han, Han Wu, Yehui Tang, Xinghao Chen, Yunhe Wang, Chang Xu
Vision transformers have been successfully applied to image recognition tasks due to their ability to capture long-range dependencies within an image.
1 code implementation • 3 Jul 2021 • Zhiwei Hao, Jianyuan Guo, Ding Jia, Kai Han, Yehui Tang, Chao Zhang, Han Hu, Yunhe Wang
Specifically, we train a tiny student model to match a pre-trained teacher model in the patch-level manifold space.
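A minimal sketch of patch-level relational matching follows, assuming each network emits per-patch features that are turned into normalized pairwise-similarity matrices which the student matches against the teacher's; the actual manifold loss in the paper is more elaborate.

```python
# Relational (manifold-style) distillation sketch: compare cosine-similarity
# structure over patches rather than the raw features, so student and teacher
# may have different feature widths.
import torch
import torch.nn.functional as F

def manifold_matrix(feats):
    # feats: (B, N, D) patch features -> (B, N, N) cosine-similarity structure
    feats = F.normalize(feats, dim=-1)
    return feats @ feats.transpose(1, 2)

def manifold_distillation_loss(student_feats, teacher_feats):
    return F.mse_loss(manifold_matrix(student_feats),
                      manifold_matrix(teacher_feats))

s = torch.randn(4, 196, 192)   # tiny student patch features
t = torch.randn(4, 196, 768)   # teacher patch features
print(manifold_distillation_loss(s, t))
```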
4 code implementations • NeurIPS 2021 • Yehui Tang, Kai Han, Chang Xu, An Xiao, Yiping Deng, Chao Xu, Yunhe Wang
Transformer models have achieved great progress on computer vision tasks recently.
no code implementations • NeurIPS 2021 • Zhenhua Liu, Yunhe Wang, Kai Han, Siwei Ma, Wen Gao
Recently, transformer has achieved remarkable performance on a variety of computer vision applications.
1 code implementation • 21 Jun 2021 • Xinyang Lin, Hanting Chen, Yixing Xu, Chao Xu, Xiaolin Gui, Yiping Deng, Yunhe Wang
We study the problem of learning from positive and unlabeled (PU) data in the federated setting, where each client labels only a small portion of its dataset due to limited resources and time.
no code implementations • CVPR 2021 • Yiman Zhang, Hanting Chen, Xinghao Chen, Yiping Deng, Chunjing Xu, Yunhe Wang
Experiments on various datasets and architectures demonstrate that the proposed method can be utilized to effectively learn portable student networks without the original data, e.g., with only a 0.16 dB PSNR drop on Set5 for x2 super-resolution.
no code implementations • CVPR 2021 • Jianyuan Guo, Kai Han, Han Wu, Chao Zhang, Xinghao Chen, Chunjing Xu, Chang Xu, Yunhe Wang
In this paper, we present a positive-unlabeled learning based scheme to expand training data by purifying valuable images from massive unlabeled ones, where the original training data are viewed as positive data and the images in the wild are treated as unlabeled data.
3 code implementations • CVPR 2021 • Yixing Xu, Yunhe Wang, Kai Han, Yehui Tang, Shangling Jui, Chunjing Xu, Chang Xu
An effective and efficient architecture performance evaluation scheme is essential for the success of Neural Architecture Search (NAS).
1 code implementation • CVPR 2021 • Hanting Chen, Tianyu Guo, Chang Xu, Wenshuo Li, Chunjing Xu, Chao Xu, Yunhe Wang
Experiments on various datasets demonstrate that the student networks learned by the proposed method can achieve comparable performance with those using the original dataset.
3 code implementations • NeurIPS 2021 • Mingjian Zhu, Kai Han, Enhua Wu, Qiulin Zhang, Ying Nie, Zhenzhong Lan, Yunhe Wang
To this end, we propose a novel dynamic-resolution network (DRNet) in which the input resolution is determined dynamically based on each input sample.
no code implementations • CVPR 2022 • Yehui Tang, Kai Han, Yunhe Wang, Chang Xu, Jianyuan Guo, Chao Xu, DaCheng Tao
We first identify the effective patches in the last layer and then use them to guide the patch selection process of previous layers.
no code implementations • 29 May 2021 • Hanting Chen, Yunhe Wang, Chang Xu, Chao Xu, Chunjing Xu, Tong Zhang
The widely-used convolutions in deep neural networks are essentially cross-correlations that measure the similarity between input features and convolution filters, which involves massive multiplications between floating-point values.
no code implementations • 12 May 2021 • Wenshuo Li, Hanting Chen, Mingqiang Huang, Xinghao Chen, Chunjing Xu, Yunhe Wang
Adder neural network (AdderNet) is a new kind of deep model that replaces the original massive multiplications in convolutions by additions while preserving the high performance.
no code implementations • 19 Apr 2021 • Jiahao Wang, Han Shu, Weihao Xia, Yujiu Yang, Yunhe Wang
This paper studies the neural architecture search (NAS) problem for developing efficient generator networks.
1 code implementation • CVPR 2021 • Jianyuan Guo, Kai Han, Yunhe Wang, Han Wu, Xinghao Chen, Chunjing Xu, Chang Xu
To this end, we present a novel distillation algorithm via decoupled features (DeFeat) for learning a better student detector.
no code implementations • ICCV 2021 • Wenbin Xie, Dehua Song, Chang Xu, Chunjing Xu, Hui Zhang, Yunhe Wang
Extensive experiments conducted on benchmark SISR models and datasets show that the frequency-aware dynamic network can be employed for various SISR neural architectures to obtain a better tradeoff between visual quality and computational complexity.
4 code implementations • CVPR 2021 • Yehui Tang, Yunhe Wang, Yixing Xu, Yiping Deng, Chao Xu, DaCheng Tao, Chang Xu
Then, the manifold relationship between instances and the pruned sub-networks will be aligned in the training procedure.
1 code implementation • NeurIPS 2021 • Yixing Xu, Kai Han, Chang Xu, Yehui Tang, Chunjing Xu, Yunhe Wang
Binary neural networks (BNNs) represent the original full-precision weights and activations with 1-bit values via the sign function.
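For reference, a standard sign binarizer with a straight-through estimator looks roughly like the sketch below; this is generic BNN machinery rather than the specific technique proposed in this paper.

```python
# Sign binarization with a straight-through estimator (clipped identity gradient).
import torch

class BinarizeSTE(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.sign(x)          # map full-precision values to {-1, 0, +1}

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        # pass gradients only where |x| <= 1, as in common BNN training recipes
        return grad_output * (x.abs() <= 1).to(grad_output.dtype)

w = torch.randn(4, requires_grad=True)
wb = BinarizeSTE.apply(w)             # binary weights used in the forward pass
wb.sum().backward()
print(wb, w.grad)
```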
11 code implementations • NeurIPS 2021 • Kai Han, An Xiao, Enhua Wu, Jianyuan Guo, Chunjing Xu, Yunhe Wang
In this paper, we point out that the attention inside these local patches are also essential for building visual transformers with high performance and we explore a new architecture, namely, Transformer iN Transformer (TNT).
no code implementations • 25 Jan 2021 • Yunhe Wang, Mingqiang Huang, Kai Han, Hanting Chen, Wei Zhang, Chunjing Xu, DaCheng Tao
Through a comprehensive comparison of performance, power consumption, hardware resource consumption, and network generalization capability, we conclude that AdderNet surpasses all the other competitors, including the classical CNN, the novel memristor network, XNOR-Net, and the shift-kernel based network, indicating its great potential for future high-performance and energy-efficient artificial intelligence applications.
1 code implementation • 21 Jan 2021 • Ying Nie, Kai Han, Zhenhua Liu, Chuanjian Liu, Yunhe Wang
Based on the observation that many features in SISR models are also similar to each other, we propose to use shift operations to generate the redundant features (i.e., ghost features).
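A toy sketch of generating redundant "ghost" features with shift operations follows, under the assumption that intrinsic feature maps are simply rolled spatially and concatenated; the offsets and function name are illustrative, not the paper's exact design.

```python
# Shift-based ghost features: spatially roll intrinsic feature maps to create
# cheap redundant maps, then concatenate them along the channel dimension.
import torch

def shift_ghost_features(x, shifts=((0, 1), (1, 0), (0, -1), (-1, 0))):
    # x: (B, C, H, W) intrinsic features -> (B, C*(1+len(shifts)), H, W)
    ghosts = [torch.roll(x, shifts=s, dims=(2, 3)) for s in shifts]
    return torch.cat([x] + ghosts, dim=1)

print(shift_ghost_features(torch.randn(1, 8, 32, 32)).shape)  # (1, 40, 32, 32)
```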
no code implementations • 23 Dec 2020 • Kai Han, Yunhe Wang, Hanting Chen, Xinghao Chen, Jianyuan Guo, Zhenhua Liu, Yehui Tang, An Xiao, Chunjing Xu, Yixing Xu, Zhaohui Yang, Yiman Zhang, DaCheng Tao
Transformer, first applied to the field of natural language processing, is a type of deep neural network mainly based on the self-attention mechanism.
1 code implementation • NeurIPS 2020 • Guilin Li, Junlei Zhang, Yunhe Wang, Chuanjian Liu, Matthias Tan, Yunfeng Lin, Wei Zhang, Jiashi Feng, Tong Zhang
In particular, we propose a novel joint-training framework to train plain CNN by leveraging the gradients of the ResNet counterpart.
1 code implementation • 1 Dec 2020 • Mingjian Zhu, Kai Han, Changbin Yu, Yunhe Wang
One attempt to enhance the FPN is to enrich the spatial information by expanding the receptive fields, which promises to largely improve detection accuracy.
2 code implementations • NeurIPS 2020 • Kai Han, Yunhe Wang, Qiulin Zhang, Wei Zhang, Chunjing Xu, Tong Zhang
To this end, we summarize a tiny formula for downsizing neural architectures through a series of smaller models derived from the EfficientNet-B0 with the FLOPs constraint.
5 code implementations • CVPR 2021 • Hanting Chen, Yunhe Wang, Tianyu Guo, Chang Xu, Yiping Deng, Zhenhua Liu, Siwei Ma, Chunjing Xu, Chao Xu, Wen Gao
To maximally exploit the capability of transformers, we propose to utilize the well-known ImageNet benchmark to generate a large number of corrupted image pairs.
Ranked #1 on Single Image Deraining on Rain100L (using extra training data)
1 code implementation • NeurIPS 2020 • Yanxi Li, Zhaohui Yang, Yunhe Wang, Chang Xu
The power of deep neural networks is to be unleashed for analyzing large volumes of data (e.g., ImageNet), but architecture search is often executed on another, smaller dataset (e.g., CIFAR-10) to finish in a feasible time.
1 code implementation • 3 Nov 2020 • Bochao Wang, Hang Xu, Jiajin Zhang, Chen Chen, Xiaozhi Fang, Yixing Xu, Ning Kang, Lanqing Hong, Chenhan Jiang, Xinyue Cai, Jiawei Li, Fengwei Zhou, Yong Li, Zhicheng Liu, Xinghao Chen, Kai Han, Han Shu, Dehua Song, Yunhe Wang, Wei Zhang, Chunjing Xu, Zhenguo Li, Wenzhi Liu, Tong Zhang
Automated Machine Learning (AutoML) is an important industrial solution for automatic discovery and deployment of the machine learning models.
9 code implementations • 28 Oct 2020 • Kai Han, Yunhe Wang, Qiulin Zhang, Wei Zhang, Chunjing Xu, Tong Zhang
To this end, we summarize a tiny formula for downsizing neural architectures through a series of smaller models derived from the EfficientNet-B0 with the FLOPs constraint.
Ranked #644 on Image Classification on ImageNet
4 code implementations • NeurIPS 2020 • Yehui Tang, Yunhe Wang, Yixing Xu, DaCheng Tao, Chunjing Xu, Chao Xu, Chang Xu
To increase the reliability of the results, we prefer a more rigorous research design that includes a scientific control group as an essential component to minimize the effect of all factors except the association between the filter and the expected network output.
1 code implementation • ICML 2020 • Kai Han, Yunhe Wang, Yixing Xu, Chunjing Xu, Enhua Wu, Chang Xu
This paper formalizes the binarization operations over neural networks from a learning perspective.
no code implementations • NeurIPS 2020 • Yixing Xu, Chang Xu, Xinghao Chen, Wei Zhang, Chunjing Xu, Yunhe Wang
A convolutional neural network (CNN) with the same architecture is simultaneously initialized and trained as a teacher network; the features and weights of the ANN and the CNN are then transformed into a new space to eliminate the accuracy drop.
no code implementations • CVPR 2021 • Dehua Song, Yunhe Wang, Hanting Chen, Chang Xu, Chunjing Xu, DaCheng Tao
To this end, we thoroughly analyze the relationship between an adder operation and the identity mapping and insert shortcuts to enhance the performance of SR models using adder networks.
1 code implementation • NeurIPS 2020 • Zhaohui Yang, Yunhe Wang, Kai Han, Chunjing Xu, Chao Xu, DaCheng Tao, Chang Xu
Quantized neural networks with low-bit weights and activations are attractive for developing AI accelerators.
no code implementations • 2 Sep 2020 • Minjing Dong, Yanxi Li, Yunhe Wang, Chang Xu
We explore the relationship among adversarial robustness, Lipschitz constant, and architecture parameters and show that an appropriate constraint on architecture parameters could reduce the Lipschitz constant to further improve the robustness.
no code implementations • 16 Jul 2020 • Xinghao Chen, Yiman Zhang, Yunhe Wang
To identify the redundancy in segmentation networks, we present a multi-task channel pruning approach.
no code implementations • ECCV 2020 • Xinghao Chen, Yiman Zhang, Yunhe Wang, Han Shu, Chunjing Xu, Chang Xu
This paper proposes to learn a lightweight video style transfer network via knowledge distillation paradigm.
3 code implementations • CVPR 2021 • Zhaohui Yang, Yunhe Wang, Xinghao Chen, Jianyuan Guo, Wei Zhang, Chao Xu, Chunjing Xu, DaCheng Tao, Chang Xu
To achieve an extremely fast NAS while preserving the high accuracy, we propose to identify the vital blocks and make them the priority in the architecture search.
no code implementations • 29 May 2020 • Yunhe Wang, Yixing Xu, DaCheng Tao
Neural architecture search is a way of automatically exploring for optimal deep neural networks in a given huge search space.
no code implementations • CVPR 2020 • Yehui Tang, Yunhe Wang, Yixing Xu, Hanting Chen, Chunjing Xu, Boxin Shi, Chao Xu, Qi Tian, Chang Xu
A graph convolutional neural network is introduced to predict the performance of architectures based on the learned representations and their relation modeled by the graph.
1 code implementation • CVPR 2020 • Jianyuan Guo, Kai Han, Yunhe Wang, Chao Zhang, Zhaohui Yang, Han Wu, Xinghao Chen, Chang Xu
To this end, we propose a hierarchical trinity search framework to simultaneously discover efficient architectures for all components (i.e., backbone, neck, and head) of an object detector in an end-to-end manner.
no code implementations • 7 Mar 2020 • Hanting Chen, Yunhe Wang, Han Shu, Changyuan Wen, Chunjing Xu, Boxin Shi, Chao Xu, Chang Xu
To promote the capability of the student generator, we include a student discriminator to measure the distances between real images and images generated by the student and teacher generators.
no code implementations • 26 Feb 2020 • Han Shu, Yunhe Wang
Moreover, we transplant the searched network architecture to other datasets which are not involved in the architecture searching procedure.
2 code implementations • 23 Feb 2020 • Yehui Tang, Yunhe Wang, Yixing Xu, Boxin Shi, Chao Xu, Chunjing Xu, Chang Xu
On one hand, massive trainable parameters significantly enhance the performance of these deep networks.
no code implementations • 17 Feb 2020 • Zhaohui Yang, Yunhe Wang, Chang Xu, Peng Du, Chao Xu, Chunjing Xu, Qi Tian
Experiments on benchmarks demonstrate that images compressed by using the proposed method can also be well recognized by subsequent visual recognition and detection models.
1 code implementation • CVPR 2020 • Tianyu Guo, Chang Xu, Jiajun Huang, Yunhe Wang, Boxin Shi, Chao Xu, DaCheng Tao
In contrast, it is more reasonable to treat the generated data as unlabeled, which could be positive or negative according to their quality.
no code implementations • 3 Feb 2020 • Chuanjian Liu, Kai Han, Yunhe Wang, Hanting Chen, Qi Tian, Chunjing Xu
Quantized neural networks (QNNs) are very attractive to industry because of their extremely cheap computation and storage overhead, but their performance is still worse than that of networks with full-precision parameters.
3 code implementations • CVPR 2020 • Hanting Chen, Yunhe Wang, Chunjing Xu, Boxin Shi, Chao Xu, Qi Tian, Chang Xu
The widely-used convolutions in deep neural networks are essentially cross-correlations that measure the similarity between input features and convolution filters, which involves massive multiplications between floating-point values.
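To illustrate the alternative, here is a minimal sketch (not the authors' optimized implementation) of an adder layer in which each output is the negative L1 distance between an input patch and a filter, so only additions, subtractions, and absolute values are used.

```python
# Adder-layer sketch: replace the multiply-accumulate of cross-correlation with
# the negative L1 distance between input patches and filters.
import torch
import torch.nn.functional as F

def adder2d(x, weight, stride=1, padding=0):
    # x: (N, C_in, H, W), weight: (C_out, C_in, k, k)
    c_out, c_in, k, _ = weight.shape
    patches = F.unfold(x, kernel_size=k, stride=stride, padding=padding)  # (N, C_in*k*k, L)
    w = weight.view(c_out, -1)                                            # (C_out, C_in*k*k)
    # negative L1 distance between every patch and every filter
    out = -(patches.unsqueeze(1) - w.unsqueeze(0).unsqueeze(-1)).abs().sum(dim=2)
    n = out.shape[0]
    h_out = (x.shape[2] + 2 * padding - k) // stride + 1
    w_out = (x.shape[3] + 2 * padding - k) // stride + 1
    return out.view(n, c_out, h_out, w_out)

x = torch.randn(2, 3, 8, 8)
w = torch.randn(4, 3, 3, 3)
print(adder2d(x, w, padding=1).shape)  # torch.Size([2, 4, 8, 8])
```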
27 code implementations • CVPR 2020 • Kai Han, Yunhe Wang, Qi Tian, Jianyuan Guo, Chunjing Xu, Chang Xu
Deploying convolutional neural networks (CNNs) on embedded devices is difficult due to the limited memory and computation resources.
Ranked #812 on Image Classification on ImageNet
3 code implementations • 30 Sep 2019 • Yixing Xu, Yunhe Wang, Kai Han, Yehui Tang, Shangling Jui, Chunjing Xu, Chang Xu
An effective and efficient architecture performance evaluation scheme is essential for the success of Neural Architecture Search (NAS).
1 code implementation • 25 Sep 2019 • Dehua Song, Chang Xu, Xu Jia, Yiyi Chen, Chunjing Xu, Yunhe Wang
Focusing on this issue, we propose an efficient residual dense block search algorithm with multiple objectives to hunt for fast, lightweight and accurate networks for image super-resolution.
2 code implementations • NeurIPS 2019 • Yixing Xu, Yunhe Wang, Hanting Chen, Kai Han, Chunjing Xu, DaCheng Tao, Chang Xu
In practice, only a small portion of the original training set is required as positive examples, and more useful training examples can be obtained from the massive unlabeled data on the cloud through a PU classifier with an attention-based multi-scale feature extractor.
no code implementations • 16 Sep 2019 • Mingzhu Shen, Kai Han, Chunjing Xu, Yunhe Wang
Binary neural networks have attracted tremendous attention due to the efficiency for deploying them on mobile devices.
1 code implementation • CVPR 2020 • Zhaohui Yang, Yunhe Wang, Xinghao Chen, Boxin Shi, Chao Xu, Chunjing Xu, Qi Tian, Chang Xu
Architectures in the population that share parameters within one SuperNet in the latest generation will be tuned over the training dataset with a few epochs.
1 code implementation • 6 Aug 2019 • Kai Han, Yunhe Wang, Yixing Xu, Chunjing Xu, DaCheng Tao, Chang Xu
Existing works typically decrease the number or size of the required convolution filters to obtain a minimum viable CNN on edge devices.
no code implementations • 27 Jul 2019 • Chuanjian Liu, Yunhe Wang, Kai Han, Chunjing Xu, Chang Xu
Exploring deep convolutional neural networks of high efficiency and low memory usage is essential for a wide variety of machine learning tasks.
no code implementations • 27 Jul 2019 • Kai Han, Yunhe Wang, Han Shu, Chuanjian Liu, Chunjing Xu, Chang Xu
This paper expands the strength of deep convolutional neural networks (CNNs) to the pedestrian attribute recognition problem by devising a novel attribute aware pooling algorithm.
2 code implementations • ICCV 2019 • Han Shu, Yunhe Wang, Xu Jia, Kai Han, Hanting Chen, Chunjing Xu, Qi Tian, Chang Xu
Generative adversarial networks (GANs) have been successfully used for considerable computer vision tasks, especially the image-to-image translation.
3 code implementations • ICCV 2019 • Hanting Chen, Yunhe Wang, Chang Xu, Zhaohui Yang, Chuanjian Liu, Boxin Shi, Chunjing Xu, Chao Xu, Qi Tian
Learning portable neural networks is essential for computer vision so that pre-trained heavy deep models can be deployed on edge devices such as mobile phones and micro sensors.
no code implementations • 17 Dec 2018 • Hanting Chen, Yunhe Wang, Chang Xu, Chao Xu, DaCheng Tao
Experiments on benchmark datasets and well-trained networks suggest that the proposed algorithm is superior to state-of-the-art teacher-student learning methods in terms of computational and storage complexity.
no code implementations • NeurIPS 2018 • Yunhe Wang, Chang Xu, Chunjing Xu, Chao Xu, DaCheng Tao
A series of secondary filters can be derived from a primary filter.
1 code implementation • 23 Oct 2017 • Kai Han, Yunhe Wang, Chao Zhang, Chao Li, Chao Xu
High-dimensional data in many areas, such as computer vision and machine learning tasks, brings computational and analytical difficulties.
1 code implementation • ICML 2017 • Yunhe Wang, Chang Xu, Chao Xu, DaCheng Tao
The filter is then re-configured to establish the mapping from original input to the new compact feature map, and the resulting network can preserve intrinsic information of the original network with significantly fewer parameters, which not only decreases the online memory for launching CNN but also accelerates the computation speed.
no code implementations • 25 Jul 2017 • Yunhe Wang, Chang Xu, Jiayan Qiu, Chao Xu, DaCheng Tao
In contrast to directly recognizing subtle weights or filters as redundant in a given CNN, this paper presents an evolutionary method to automatically eliminate redundant convolution filters.
no code implementations • 25 Jan 2017 • Shan You, Chang Xu, Yunhe Wang, Chao Xu, DaCheng Tao
This paper presents privileged multi-label learning (PrML) to explore and exploit the relationship between labels in multi-label learning problems.
no code implementations • NeurIPS 2016 • Yunhe Wang, Chang Xu, Shan You, DaCheng Tao, Chao Xu
Deep convolutional neural networks (CNNs) are successfully used in a number of applications.
no code implementations • 19 Apr 2016 • Shan You, Chang Xu, Yunhe Wang, Chao Xu, DaCheng Tao
The core of SLL is to explore and exploit the relationships between new labels and past labels and then inherit the relationship into hypotheses of labels to boost the performance of new classifiers.
no code implementations • 19 Apr 2016 • Yunhe Wang, Chang Xu, Shan You, DaCheng Tao, Chao Xu
Here we study the extreme visual recovery problem, in which over 90% of the pixel values in a given image are missing.