Search Results for author: Chunjing Xu

Found 64 papers, 28 papers with code

Task-Customized Self-Supervised Pre-training with Scalable Dynamic Routing

no code implementations 26 May 2022 Zhili Liu, Jianhua Han, Lanqing Hong, Hang Xu, Kai Chen, Chunjing Xu, Zhenguo Li

On the other hand, for existing SSL methods, it is burdensome and infeasible to use different downstream-task-customized datasets in pre-training for different tasks.

Self-Supervised Learning

ManiTrans: Entity-Level Text-Guided Image Manipulation via Token-wise Semantic Alignment and Generation

no code implementations CVPR 2022 Jianan Wang, Guansong Lu, Hang Xu, Zhenguo Li, Chunjing Xu, Yanwei Fu

Existing text-guided image manipulation methods aim to modify the appearance of the image or to edit a few objects in a virtual or simple scenario, which is far from practical application.

Image Generation Image Manipulation

Laneformer: Object-aware Row-Column Transformers for Lane Detection

no code implementations 18 Mar 2022 Jianhua Han, Xiajun Deng, Xinyue Cai, Zhen Yang, Hang Xu, Chunjing Xu, Xiaodan Liang

We present Laneformer, a conceptually simple yet powerful transformer-based architecture tailored for lane detection, a long-standing research topic in visual perception for autonomous driving.

Autonomous Driving Lane Detection

CODA: A Real-World Road Corner Case Dataset for Object Detection in Autonomous Driving

no code implementations 15 Mar 2022 Kaican Li, Kai Chen, Haoyu Wang, Lanqing Hong, Chaoqiang Ye, Jianhua Han, Yukuai Chen, Wei Zhang, Chunjing Xu, Dit-Yan Yeung, Xiaodan Liang, Zhenguo Li, Hang Xu

One main reason that impedes the development of truly reliable self-driving systems is the lack of public datasets for evaluating the performance of object detectors on corner cases.

Autonomous Driving object-detection +1

Wukong: A 100 Million Large-scale Chinese Cross-modal Pre-training Benchmark

no code implementations 14 Feb 2022 Jiaxi Gu, Xiaojun Meng, Guansong Lu, Lu Hou, Minzhe Niu, Xiaodan Liang, Lewei Yao, Runhui Huang, Wei Zhang, Xin Jiang, Chunjing Xu, Hang Xu

Furthermore, we release a group of models pre-trained with various image encoders (ViT-B/ViT-L/SwinT) and also apply advanced pre-training techniques to VLP, such as locked-image text tuning, token-wise similarity in contrastive learning, and reduced-token interaction.

Contrastive Learning Image Classification +1

GhostNets on Heterogeneous Devices via Cheap Operations

2 code implementations 10 Jan 2022 Kai Han, Yunhe Wang, Chang Xu, Jianyuan Guo, Chunjing Xu, Enhua Wu, Qi Tian

The proposed C-Ghost module can be taken as a plug-and-play component to upgrade existing convolutional neural networks.

An Empirical Study of Adder Neural Networks for Object Detection

no code implementations NeurIPS 2021 Xinghao Chen, Chang Xu, Minjing Dong, Chunjing Xu, Yunhe Wang

Adder neural networks (AdderNets) have shown impressive performance on image classification with only addition operations, which are more energy efficient than traditional convolutional neural networks built with multiplications.

Autonomous Driving Face Detection +3

FILIP: Fine-grained Interactive Language-Image Pre-Training

no code implementations ICLR 2022 Lewei Yao, Runhui Huang, Lu Hou, Guansong Lu, Minzhe Niu, Hang Xu, Xiaodan Liang, Zhenguo Li, Xin Jiang, Chunjing Xu

In this paper, we introduce a large-scale Fine-grained Interactive Language-Image Pre-training (FILIP) to achieve finer-level alignment through a cross-modal late interaction mechanism, which uses a token-wise maximum similarity between visual and textual tokens to guide the contrastive objective.

Image Classification Zero-Shot Image Classification
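
The token-wise maximum similarity described in the excerpt is simple enough to sketch. Below is a minimal, hedged PyTorch illustration of the late-interaction score; the tensor names and shapes are assumptions for exposition, not the authors' implementation.

```python
# FILIP-style token-wise late interaction, assuming `img_tokens` and
# `txt_tokens` are already L2-normalized embeddings.
import torch

def filip_similarity(img_tokens, txt_tokens):
    """img_tokens: (B, Ni, D); txt_tokens: (B, Nt, D), both L2-normalized."""
    # Pairwise cosine similarities between every image token and every
    # text token across the batch: (B, B, Ni, Nt).
    sim = torch.einsum('bid,cjd->bcij', img_tokens, txt_tokens)
    # For each image token, take its best-matching text token, then average.
    i2t = sim.max(dim=-1).values.mean(dim=-1)   # (B, B)
    # For each text token, take its best-matching image token, then average.
    t2i = sim.max(dim=-2).values.mean(dim=-1)   # (B, B)
    return i2t, t2i  # plug into a standard InfoNCE contrastive loss
```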

SOFT: Softmax-free Transformer with Linear Complexity

no code implementations NeurIPS 2021 Jiachen Lu, Jinghan Yao, Junge Zhang, Xiatian Zhu, Hang Xu, Weiguo Gao, Chunjing Xu, Tao Xiang, Li Zhang

Crucially, with a linear complexity, much longer token sequences are permitted in SOFT, resulting in a superior trade-off between accuracy and complexity.

Natural Language Processing
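
The excerpt notes SOFT's linear complexity but not its softmax-free mechanism. As a generic, hedged illustration of why removing the softmax enables linearity: without the row-wise softmax, attention can be reassociated from (QK^T)V, quadratic in sequence length N, into Q(K^T V), linear in N. The sketch below demonstrates only this associativity, not SOFT's actual kernel decomposition.

```python
import torch

def attn_no_softmax_quadratic(Q, K, V):
    return (Q @ K.transpose(-2, -1)) @ V   # O(N^2 d): materializes an N x N matrix

def attn_no_softmax_linear(Q, K, V):
    return Q @ (K.transpose(-2, -1) @ V)   # O(N d^2): never forms the N x N matrix

Q, K, V = (torch.randn(2, 1024, 64, dtype=torch.float64) for _ in range(3))
assert torch.allclose(attn_no_softmax_quadratic(Q, K, V),
                      attn_no_softmax_linear(Q, K, V))
```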

Learning Versatile Convolution Filters for Efficient Visual Recognition

no code implementations 20 Sep 2021 Kai Han, Yunhe Wang, Chang Xu, Chunjing Xu, Enhua Wu, Dacheng Tao

A series of secondary filters can be derived from a primary filter with the help of binary masks.
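
A minimal sketch of the primary-filter/binary-mask idea above; the nested-window masking scheme here is an illustrative assumption, not the paper's exact construction.

```python
import torch
import torch.nn.functional as F

def versatile_conv(x, primary, num_secondary=2):
    """x: (B, Cin, H, W); primary: (Cout, Cin, k, k), k odd and k > 2*num_secondary."""
    k = primary.shape[-1]
    outs = []
    for s in range(num_secondary + 1):
        # Binary mask keeps a centered (k-2s) x (k-2s) window of the filter.
        mask = torch.zeros_like(primary)
        mask[..., s:k - s, s:k - s] = 1.0
        outs.append(F.conv2d(x, primary * mask, padding=k // 2))
    # Each masked view of the same weights yields a distinct feature map.
    return torch.cat(outs, dim=1)  # (B, Cout * (num_secondary + 1), H, W)

x = torch.randn(1, 3, 16, 16)
w = torch.randn(8, 3, 5, 5)
print(versatile_conv(x, w).shape)  # torch.Size([1, 24, 16, 16])
```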

Pyramid R-CNN: Towards Better Performance and Adaptability for 3D Object Detection

1 code implementation ICCV 2021 Jiageng Mao, Minzhe Niu, Haoyue Bai, Xiaodan Liang, Hang Xu, Chunjing Xu

To resolve the problems, we propose a novel second-stage module, named pyramid RoI head, to adaptively learn the features from the sparse points of interest.

3D Object Detection object-detection

Voxel Transformer for 3D Object Detection

1 code implementation ICCV 2021 Jiageng Mao, Yujing Xue, Minzhe Niu, Haoyue Bai, Jiashi Feng, Xiaodan Liang, Hang Xu, Chunjing Xu

We present Voxel Transformer (VoTr), a novel and effective voxel-based Transformer backbone for 3D object detection from point clouds.

Ranked #2 on 3D Object Detection on waymo vehicle (L1 mAP metric)

3D Object Detection object-detection +1

Greedy Network Enlarging

no code implementations 31 Jul 2021 Chuanjian Liu, Kai Han, An Xiao, Yiping Deng, Wei Zhang, Chunjing Xu, Yunhe Wang

Recent studies on deep convolutional neural networks present a simple paradigm of architecture design, i.e., models with more MACs typically achieve better accuracy, such as EfficientNet and RegNet.

$S^3$: Sign-Sparse-Shift Reparametrization for Effective Training of Low-bit Shift Networks

no code implementations NeurIPS 2021 Xinlin Li, Bang Liu, Yaoliang Yu, Wulong Liu, Chunjing Xu, Vahid Partovi Nia

Shift neural networks reduce computation complexity by removing expensive multiplication operations and quantizing continuous weights into low-bit discrete values, which are fast and energy efficient compared to conventional neural networks.
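
To make the shift-network premise concrete, here is a hedged sketch of the weight parametrization such networks rely on: each weight is a sign times a power of two, so multiplication reduces to a bit shift. The round-to-nearest quantizer below is a plain baseline assumption, not the paper's sign-sparse-shift training scheme.

```python
import torch

def quantize_to_shift(w, min_exp=-7, max_exp=0):
    """Map full-precision weights to sign * 2^p with integer p (or zero)."""
    sign = torch.sign(w)
    # Nearest power-of-two exponent, clamped to the representable range.
    p = torch.clamp(torch.round(torch.log2(w.abs().clamp_min(1e-12))),
                    min_exp, max_exp)
    q = sign * torch.pow(2.0, p)
    q[w.abs() < 2.0 ** (min_exp - 1)] = 0.0   # underflow -> sparse (zero) weight
    return q

w = torch.randn(8) * 0.5
print(quantize_to_shift(w))   # entries like ±0.5, ±0.25, ... or 0
```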

One Million Scenes for Autonomous Driving: ONCE Dataset

1 code implementation 21 Jun 2021 Jiageng Mao, Minzhe Niu, Chenhan Jiang, Hanxue Liang, Jingheng Chen, Xiaodan Liang, Yamin Li, Chaoqiang Ye, Wei Zhang, Zhenguo Li, Jie Yu, Hang Xu, Chunjing Xu

To facilitate future research on exploiting unlabeled data for 3D detection, we additionally provide a benchmark in which we reproduce and evaluate a variety of self-supervised and semi-supervised methods on the ONCE dataset.

3D Object Detection Autonomous Driving +1

SODA10M: A Large-Scale 2D Self/Semi-Supervised Object Detection Dataset for Autonomous Driving

no code implementations 21 Jun 2021 Jianhua Han, Xiwen Liang, Hang Xu, Kai Chen, Lanqing Hong, Jiageng Mao, Chaoqiang Ye, Wei Zhang, Zhenguo Li, Xiaodan Liang, Chunjing Xu

Experiments show that SODA10M can serve as a promising pre-training dataset for different self-supervised learning methods, which gives superior performance when fine-tuning with different downstream tasks (i.e., detection, semantic/instance segmentation) in the autonomous driving domain.

Autonomous Driving Instance Segmentation +5

Positive-Unlabeled Data Purification in the Wild for Object Detection

no code implementations CVPR 2021 Jianyuan Guo, Kai Han, Han Wu, Chao Zhang, Xinghao Chen, Chunjing Xu, Chang Xu, Yunhe Wang

In this paper, we present a positive-unlabeled learning based scheme to expand training data by purifying valuable images from massive unlabeled ones, where the original training data are viewed as positive data and the unlabeled images in the wild are unlabeled data.

Knowledge Distillation object-detection +1

Learning Student Networks in the Wild

1 code implementation CVPR 2021 Hanting Chen, Tianyu Guo, Chang Xu, Wenshuo Li, Chunjing Xu, Chao Xu, Yunhe Wang

Experiments on various datasets demonstrate that the student networks learned by the proposed method can achieve comparable performance with those using the original dataset.

Knowledge Distillation Model Compression

ReNAS: Relativistic Evaluation of Neural Architecture Search

2 code implementations CVPR 2021 Yixing Xu, Yunhe Wang, Kai Han, Yehui Tang, Shangling Jui, Chunjing Xu, Chang Xu

An effective and efficient architecture performance evaluation scheme is essential for the success of Neural Architecture Search (NAS).

Neural Architecture Search

Data-Free Knowledge Distillation for Image Super-Resolution

no code implementations CVPR 2021 Yiman Zhang, Hanting Chen, Xinghao Chen, Yiping Deng, Chunjing Xu, Yunhe Wang

Experiments on various datasets and architectures demonstrate that the proposed method is able to learn portable student networks effectively without the original data, e.g., with a 0.16dB PSNR drop on Set5 for x2 super-resolution.

Image Super-Resolution Knowledge Distillation +2

Universal Adder Neural Networks

no code implementations 29 May 2021 Hanting Chen, Yunhe Wang, Chang Xu, Chao Xu, Chunjing Xu, Tong Zhang

The widely-used convolutions in deep neural networks are exactly cross-correlations that measure the similarity between the input features and convolution filters, which involves massive multiplications between float values.

Winograd Algorithm for AdderNet

no code implementations 12 May 2021 Wenshuo Li, Hanting Chen, Mingqiang Huang, Xinghao Chen, Chunjing Xu, Yunhe Wang

Adder neural network (AdderNet) is a new kind of deep model that replaces the massive multiplications in convolutions with additions while preserving high performance.

Distilling Object Detectors via Decoupled Features

1 code implementation CVPR 2021 Jianyuan Guo, Kai Han, Yunhe Wang, Han Wu, Xinghao Chen, Chunjing Xu, Chang Xu

To this end, we present a novel distillation algorithm via decoupled features (DeFeat) for learning a better student detector.

Knowledge Distillation object-detection +1

Learning Frequency-aware Dynamic Network for Efficient Super-Resolution

no code implementations ICCV 2021 Wenbin Xie, Dehua Song, Chang Xu, Chunjing Xu, Hui Zhang, Yunhe Wang

Extensive experiments conducted on benchmark SISR models and datasets show that the frequency-aware dynamic network can be employed in various SISR architectures to obtain a better tradeoff between visual quality and computational complexity.

Image Super-Resolution Single Image Super Resolution

Learning Frequency Domain Approximation for Binary Neural Networks

1 code implementation NeurIPS 2021 Yixing Xu, Kai Han, Chang Xu, Yehui Tang, Chunjing Xu, Yunhe Wang

Binary neural networks (BNNs) represent original full-precision weights and activations into 1-bit with sign function.
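
A minimal sketch of the 1-bit sign binarization mentioned above, with the standard straight-through estimator for gradients; the paper's frequency-domain approximation of the sign function is not reproduced here.

```python
import torch

class BinarizeSTE(torch.autograd.Function):
    @staticmethod
    def forward(ctx, w):
        ctx.save_for_backward(w)
        return torch.sign(w)                 # weights/activations -> {-1, +1}

    @staticmethod
    def backward(ctx, grad_out):
        (w,) = ctx.saved_tensors
        # Straight-through estimator: pass gradients only where |w| <= 1.
        return grad_out * (w.abs() <= 1.0).to(grad_out.dtype)

w = torch.randn(4, requires_grad=True)
BinarizeSTE.apply(w).sum().backward()
print(w.grad)   # 1 where |w| <= 1, else 0
```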

Transformer in Transformer

10 code implementations NeurIPS 2021 Kai Han, An Xiao, Enhua Wu, Jianyuan Guo, Chunjing Xu, Yunhe Wang

In this paper, we point out that the attention inside these local patches is also essential for building visual transformers with high performance, and we explore a new architecture, namely, Transformer iN Transformer (TNT).

Fine-Grained Image Classification
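
A hedged sketch of the inner/outer attention structure named above, assuming each patch is further split into sub-patch "word" tokens; the dimensions and the projection from inner to outer tokens are illustrative choices, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class TNTBlock(nn.Module):
    def __init__(self, outer_dim=192, inner_dim=24, words_per_patch=16):
        super().__init__()
        self.inner = nn.TransformerEncoderLayer(inner_dim, nhead=4, batch_first=True)
        self.outer = nn.TransformerEncoderLayer(outer_dim, nhead=4, batch_first=True)
        self.proj = nn.Linear(inner_dim * words_per_patch, outer_dim)

    def forward(self, patch_tokens, word_tokens):
        """patch_tokens: (B, P, outer_dim); word_tokens: (B*P, W, inner_dim)."""
        B, P, _ = patch_tokens.shape
        word_tokens = self.inner(word_tokens)          # attention inside each patch
        # Fold the refined word tokens back into their patch embedding.
        patch_tokens = patch_tokens + self.proj(word_tokens.reshape(B, P, -1))
        return self.outer(patch_tokens), word_tokens   # attention across patches

blk = TNTBlock()
p, w = blk(torch.randn(2, 196, 192), torch.randn(2 * 196, 16, 24))
print(p.shape, w.shape)   # torch.Size([2, 196, 192]) torch.Size([392, 16, 24])
```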

AdderNet and its Minimalist Hardware Design for Energy-Efficient Artificial Intelligence

no code implementations 25 Jan 2021 Yunhe Wang, Mingqiang Huang, Kai Han, Hanting Chen, Wei Zhang, Chunjing Xu, Dacheng Tao

With a comprehensive comparison of performance, power consumption, hardware resource consumption, and network generalization capability, we conclude that AdderNet is able to surpass all the other competitors, including the classical CNN, the novel memristor network, XNOR-Net, and the shift-kernel-based network, indicating its great potential in future high-performance and energy-efficient artificial intelligence applications.

Quantization

GhostSR: Learning Ghost Features for Efficient Image Super-Resolution

no code implementations 21 Jan 2021 Ying Nie, Kai Han, Zhenhua Liu, An Xiao, Yiping Deng, Chunjing Xu, Yunhe Wang

Based on the observation that many features in SISR models are also similar to each other, we propose to use a shift operation to generate the redundant features (i.e., Ghost features).

Image Super-Resolution Single Image Super Resolution
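
A minimal sketch of generating Ghost features with a shift operation as described above: a few intrinsic feature maps come from a convolution, and the rest are cheap spatial shifts of them. The offsets and split ratio are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ShiftGhost(nn.Module):
    def __init__(self, cin, cout, intrinsic_ratio=0.5):
        super().__init__()
        self.intrinsic = int(cout * intrinsic_ratio)
        self.conv = nn.Conv2d(cin, self.intrinsic, 3, padding=1)
        self.ghost = cout - self.intrinsic

    def forward(self, x):
        base = self.conv(x)                              # intrinsic features
        shifts = []
        for i in range(self.ghost):
            c = base[:, i % self.intrinsic : i % self.intrinsic + 1]
            dx, dy = (i % 3) - 1, (i // 3 % 3) - 1       # offsets in {-1, 0, 1}
            shifts.append(torch.roll(c, shifts=(dy, dx), dims=(2, 3)))
        return torch.cat([base] + shifts, dim=1)         # (B, cout, H, W)

print(ShiftGhost(3, 8)(torch.randn(1, 3, 16, 16)).shape)  # (1, 8, 16, 16)
```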

A Survey on Vision Transformer

no code implementations23 Dec 2020 Kai Han, Yunhe Wang, Hanting Chen, Xinghao Chen, Jianyuan Guo, Zhenhua Liu, Yehui Tang, An Xiao, Chunjing Xu, Yixing Xu, Zhaohui Yang, Yiman Zhang, DaCheng Tao

Transformer, first applied to the field of natural language processing, is a type of deep neural network mainly based on the self-attention mechanism.

Image Classification Inductive Bias +1

Model Rubik’s Cube: Twisting Resolution, Depth and Width for TinyNets

2 code implementations NeurIPS 2020 Kai Han, Yunhe Wang, Qiulin Zhang, Wei Zhang, Chunjing Xu, Tong Zhang

To this end, we summarize a tiny formula for downsizing neural architectures through a series of smaller models derived from EfficientNet-B0 under a FLOPs constraint.

Image Classification
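
The excerpt does not spell the "tiny formula" out; as a hedged illustration of the underlying arithmetic, conv-net FLOPs scale roughly as (width)^2 x depth x (resolution)^2, so downsizing to a budget means choosing multipliers w, d, r with w^2 * d * r^2 approximately equal to budget / baseline FLOPs. The numbers below are assumptions for illustration.

```python
def approx_flops(baseline_flops, w, d, r):
    """Estimated FLOPs after scaling width by w, depth by d, resolution by r."""
    return baseline_flops * (w ** 2) * d * (r ** 2)

b0 = 0.39e9                        # EfficientNet-B0 is roughly 0.39 GFLOPs
print(approx_flops(b0, w=0.68, d=0.54, r=0.85) / 1e9)  # ~0.07 GFLOPs tiny model
```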

Pre-Trained Image Processing Transformer

3 code implementations CVPR 2021 Hanting Chen, Yunhe Wang, Tianyu Guo, Chang Xu, Yiping Deng, Zhenhua Liu, Siwei Ma, Chunjing Xu, Chao Xu, Wen Gao

To maximally exploit the capability of the transformer, we propose to utilize the well-known ImageNet benchmark for generating a large number of corrupted image pairs.

Ranked #1 on Single Image Deraining on Rain100L (using extra training data)

Color Image Denoising Contrastive Learning +2
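
A minimal sketch of producing corrupted/clean training pairs from clean images, in the spirit of the excerpt above; the corruptions shown (bicubic downsampling, additive Gaussian noise) are common choices assumed here, not the paper's exact pipeline.

```python
import torch
import torch.nn.functional as F

def make_pairs(clean, scale=2, noise_sigma=0.1):
    """clean: (B, 3, H, W) in [0, 1]; returns (corrupted, clean) pairs per task."""
    lr = F.interpolate(clean, scale_factor=1 / scale, mode='bicubic',
                       align_corners=False)                    # super-resolution input
    noisy = (clean + noise_sigma * torch.randn_like(clean)).clamp(0, 1)  # denoising input
    return {'sr': (lr, clean), 'denoise': (noisy, clean)}

pairs = make_pairs(torch.rand(4, 3, 64, 64))
print(pairs['sr'][0].shape, pairs['denoise'][0].shape)  # (4,3,32,32) (4,3,64,64)
```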

Model Rubik's Cube: Twisting Resolution, Depth and Width for TinyNets

6 code implementations 28 Oct 2020 Kai Han, Yunhe Wang, Qiulin Zhang, Wei Zhang, Chunjing Xu, Tong Zhang

To this end, we summarize a tiny formula for downsizing neural architectures through a series of smaller models derived from EfficientNet-B0 under a FLOPs constraint.

Image Classification

SCOP: Scientific Control for Reliable Neural Network Pruning

4 code implementations NeurIPS 2020 Yehui Tang, Yunhe Wang, Yixing Xu, Dacheng Tao, Chunjing Xu, Chao Xu, Chang Xu

To increase the reliability of the results, we prefer to have a more rigorous research design by including a scientific control group as an essential part to minimize the effect of all factors except the association between the filter and expected network output.

Network Pruning

Kernel Based Progressive Distillation for Adder Neural Networks

no code implementations NeurIPS 2020 Yixing Xu, Chang Xu, Xinghao Chen, Wei Zhang, Chunjing Xu, Yunhe Wang

A convolutional neural network (CNN) with the same architecture is simultaneously initialized and trained as a teacher network; features and weights of the ANN and the CNN are transformed into a new space to eliminate the accuracy drop.

Knowledge Distillation

AdderSR: Towards Energy Efficient Image Super-Resolution

no code implementations CVPR 2021 Dehua Song, Yunhe Wang, Hanting Chen, Chang Xu, Chunjing Xu, Dacheng Tao

To this end, we thoroughly analyze the relationship between an adder operation and the identity mapping and insert shortcuts to enhance the performance of SR models using adder networks.

Image Classification Image Super-Resolution +1

HourNAS: Extremely Fast Neural Architecture Search Through an Hourglass Lens

1 code implementation CVPR 2021 Zhaohui Yang, Yunhe Wang, Xinghao Chen, Jianyuan Guo, Wei Zhang, Chao Xu, Chunjing Xu, Dacheng Tao, Chang Xu

To achieve an extremely fast NAS while preserving the high accuracy, we propose to identify the vital blocks and make them the priority in the architecture search.

Neural Architecture Search

A Semi-Supervised Assessor of Neural Architectures

no code implementations CVPR 2020 Yehui Tang, Yunhe Wang, Yixing Xu, Hanting Chen, Chunjing Xu, Boxin Shi, Chao Xu, Qi Tian, Chang Xu

A graph convolutional neural network is introduced to predict the performance of architectures based on the learned representations and their relation modeled by the graph.

Neural Architecture Search

Distilling portable Generative Adversarial Networks for Image Translation

no code implementations 7 Mar 2020 Hanting Chen, Yunhe Wang, Han Shu, Changyuan Wen, Chunjing Xu, Boxin Shi, Chao Xu, Chang Xu

To promote the capability of the student generator, we include a student discriminator to measure the distances between real images and the images generated by the student and teacher generators.

Image-to-Image Translation Knowledge Distillation +1

Beyond Dropout: Feature Map Distortion to Regularize Deep Neural Networks

2 code implementations 23 Feb 2020 Yehui Tang, Yunhe Wang, Yixing Xu, Boxin Shi, Chao Xu, Chunjing Xu, Chang Xu

On one hand, massive trainable parameters significantly enhance the performance of these deep networks.

Discernible Image Compression

no code implementations 17 Feb 2020 Zhaohui Yang, Yunhe Wang, Chang Xu, Peng Du, Chao Xu, Chunjing Xu, Qi Tian

Experiments on benchmarks demonstrate that images compressed by using the proposed method can also be well recognized by subsequent visual recognition and detection models.

Image Compression object-detection +1

Widening and Squeezing: Towards Accurate and Efficient QNNs

no code implementations 3 Feb 2020 Chuanjian Liu, Kai Han, Yunhe Wang, Hanting Chen, Qi Tian, Chunjing Xu

Quantization neural networks (QNNs) are very attractive to the industry because of their extremely cheap calculation and storage overhead, but their performance is still worse than that of networks with full-precision parameters.

Quantization

AdderNet: Do We Really Need Multiplications in Deep Learning?

2 code implementations CVPR 2020 Hanting Chen, Yunhe Wang, Chunjing Xu, Boxin Shi, Chao Xu, Qi Tian, Chang Xu

The widely-used convolutions in deep neural networks are exactly cross-correlations that measure the similarity between the input features and convolution filters, which involves massive multiplications between float values.
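
A hedged sketch of an adder layer in the sense described above: patch-filter similarity is measured with a (negative) L1 distance, which needs only additions and subtractions, instead of a multiply-accumulate cross-correlation. The unfold-based implementation is unoptimized and purely illustrative.

```python
import torch
import torch.nn.functional as F

def adder_conv2d(x, weight, stride=1, padding=1):
    """x: (B, Cin, H, W); weight: (Cout, Cin, k, k)."""
    B, _, H, W = x.shape
    cout, _, k, _ = weight.shape
    cols = F.unfold(x, k, stride=stride, padding=padding)     # (B, Cin*k*k, L)
    w = weight.view(cout, -1)                                 # (Cout, Cin*k*k)
    # Negative L1 distance between every patch and every filter.
    out = -(cols.unsqueeze(1) - w[None, :, :, None]).abs().sum(dim=2)  # (B, Cout, L)
    Ho = (H + 2 * padding - k) // stride + 1
    return out.view(B, cout, Ho, -1)

x = torch.randn(1, 3, 8, 8)
w = torch.randn(4, 3, 3, 3)
print(adder_conv2d(x, w).shape)   # torch.Size([1, 4, 8, 8])
```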

GhostNet: More Features from Cheap Operations

13 code implementations CVPR 2020 Kai Han, Yunhe Wang, Qi Tian, Jianyuan Guo, Chunjing Xu, Chang Xu

Deploying convolutional neural networks (CNNs) on embedded devices is difficult due to the limited memory and computation resources.

Image Classification
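
A minimal sketch of a Ghost module as described above: a few intrinsic feature maps from an ordinary convolution plus "ghost" maps from a cheap depthwise operation; the ratio and kernel sizes are common defaults assumed here.

```python
import torch
import torch.nn as nn

class GhostModule(nn.Module):
    def __init__(self, cin, cout, ratio=2):
        super().__init__()
        init = -(-cout // ratio)                        # ceil(cout / ratio)
        self.primary = nn.Sequential(
            nn.Conv2d(cin, init, 1, bias=False), nn.BatchNorm2d(init), nn.ReLU())
        self.cheap = nn.Sequential(                     # depthwise conv = cheap op
            nn.Conv2d(init, init * (ratio - 1), 3, padding=1, groups=init,
                      bias=False), nn.BatchNorm2d(init * (ratio - 1)), nn.ReLU())
        self.cout = cout

    def forward(self, x):
        y = self.primary(x)                             # intrinsic features
        return torch.cat([y, self.cheap(y)], dim=1)[:, :self.cout]

print(GhostModule(16, 32)(torch.randn(1, 16, 8, 8)).shape)  # (1, 32, 8, 8)
```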

Unsupervised Image Super-Resolution with an Indirect Supervised Path

no code implementations 7 Oct 2019 Zhen Han, Enyan Dai, Xu Jia, Xiaoying Ren, Shuaijun Chen, Chunjing Xu, Jianzhuang Liu, Qi Tian

The task of single image super-resolution (SISR) aims at reconstructing a high-resolution (HR) image from a low-resolution (LR) image.

Image Super-Resolution Single Image Super Resolution +1

ReNAS: Relativistic Evaluation of Neural Architecture Search

3 code implementations 30 Sep 2019 Yixing Xu, Yunhe Wang, Kai Han, Yehui Tang, Shangling Jui, Chunjing Xu, Chang Xu

An effective and efficient architecture performance evaluation scheme is essential for the success of Neural Architecture Search (NAS).

Neural Architecture Search

Efficient Residual Dense Block Search for Image Super-Resolution

1 code implementation 25 Sep 2019 Dehua Song, Chang Xu, Xu Jia, Yiyi Chen, Chunjing Xu, Yunhe Wang

Focusing on this issue, we propose an efficient residual dense block search algorithm with multiple objectives to hunt for fast, lightweight and accurate networks for image super-resolution.

Image Super-Resolution Single Image Super Resolution

Positive-Unlabeled Compression on the Cloud

2 code implementations NeurIPS 2019 Yixing Xu, Yunhe Wang, Hanting Chen, Kai Han, Chunjing Xu, Dacheng Tao, Chang Xu

In practice, only a small portion of the original training set is required as positive examples and more useful training examples can be obtained from the massive unlabeled data on the cloud through a PU classifier with an attention based multi-scale feature extractor.

Knowledge Distillation
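
A hedged sketch of the positive-unlabeled selection step the excerpt describes: score unlabeled data with a PU classifier and keep the highest-scoring samples as extra positives. The scorer and the keep ratio are stand-in assumptions, not the paper's attention-based multi-scale model.

```python
import torch

def select_positives(pu_scores, unlabeled_data, keep_ratio=0.1):
    """pu_scores: (N,) classifier confidence that a sample is 'positive'."""
    k = max(1, int(keep_ratio * len(pu_scores)))
    top = torch.topk(pu_scores, k).indices
    return unlabeled_data[top]          # augmented training set for distillation

scores = torch.rand(1000)               # stand-in PU classifier outputs
data = torch.randn(1000, 3, 32, 32)
print(select_positives(scores, data).shape)   # torch.Size([100, 3, 32, 32])
```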

Searching for Accurate Binary Neural Architectures

no code implementations 16 Sep 2019 Mingzhu Shen, Kai Han, Chunjing Xu, Yunhe Wang

Binary neural networks have attracted tremendous attention due to the efficiency for deploying them on mobile devices.

CARS: Continuous Evolution for Efficient Neural Architecture Search

1 code implementation CVPR 2020 Zhaohui Yang, Yunhe Wang, Xinghao Chen, Boxin Shi, Chao Xu, Chunjing Xu, Qi Tian, Chang Xu

Architectures in the population that share parameters within one SuperNet in the latest generation will be tuned over the training dataset with a few epochs.

Neural Architecture Search

Full-Stack Filters to Build Minimum Viable CNNs

1 code implementation 6 Aug 2019 Kai Han, Yunhe Wang, Yixing Xu, Chunjing Xu, Dacheng Tao, Chang Xu

Existing works typically decrease the number or size of the convolution filters required for a minimum viable CNN on edge devices.

Attribute Aware Pooling for Pedestrian Attribute Recognition

no code implementations 27 Jul 2019 Kai Han, Yunhe Wang, Han Shu, Chuanjian Liu, Chunjing Xu, Chang Xu

This paper expands the strength of deep convolutional neural networks (CNNs) to the pedestrian attribute recognition problem by devising a novel attribute aware pooling algorithm.

Pedestrian Attribute Recognition

Learning Instance-wise Sparsity for Accelerating Deep Models

no code implementations 27 Jul 2019 Chuanjian Liu, Yunhe Wang, Kai Han, Chunjing Xu, Chang Xu

Exploring deep convolutional neural networks of high efficiency and low memory usage is essential for a wide variety of machine learning tasks.

Co-Evolutionary Compression for Unpaired Image Translation

2 code implementations ICCV 2019 Han Shu, Yunhe Wang, Xu Jia, Kai Han, Hanting Chen, Chunjing Xu, Qi Tian, Chang Xu

Generative adversarial networks (GANs) have been successfully used for numerous computer vision tasks, especially image-to-image translation.

Image-to-Image Translation Translation

Deep Fitting Degree Scoring Network for Monocular 3D Object Detection

no code implementations CVPR 2019 Lijie Liu, Jiwen Lu, Chunjing Xu, Qi Tian, Jie Zhou

In this paper, we propose to learn a deep fitting degree scoring network for monocular 3D object detection, which aims to conclusively score the fitting degree between proposals and objects.

Monocular 3D Object Detection object-detection +1

BridgeNet: A Continuity-Aware Probabilistic Network for Age Estimation

no code implementations CVPR 2019 Wanhua Li, Jiwen Lu, Jianjiang Feng, Chunjing Xu, Jie Zhou, Qi Tian

Existing methods for age estimation usually apply a divide-and-conquer strategy to deal with heterogeneous data caused by the non-stationary aging process.

Age Estimation

Data-Free Learning of Student Networks

3 code implementations ICCV 2019 Hanting Chen, Yunhe Wang, Chang Xu, Zhaohui Yang, Chuanjian Liu, Boxin Shi, Chunjing Xu, Chao Xu, Qi Tian

Learning portable neural networks is essential in computer vision so that pre-trained heavy deep models can be applied on edge devices such as mobile phones and micro sensors.

Neural Network Compression
