Search Results for author: Zechun Liu

Found 22 papers, 13 papers with code

Stereo Neural Vernier Caliper

1 code implementation · 21 Mar 2022 · Shichao Li, Zechun Liu, Zhiqiang Shen, Kwang-Ting Cheng

We propose a new object-centric framework for learning-based stereo 3D object detection.

3D Object Detection

Vision Transformer Slimming: Multi-Dimension Searching in Continuous Optimization Space

1 code implementation · 3 Jan 2022 · Arnav Chavan, Zhiqiang Shen, Zhuang Liu, Zechun Liu, Kwang-Ting Cheng, Eric Xing

This paper explores the feasibility of finding an optimal sub-model from a vision transformer and introduces a pure vision transformer slimming (ViT-Slim) framework.

Nonuniform-to-Uniform Quantization: Towards Accurate Quantization via Generalized Straight-Through Estimation

1 code implementation · 29 Nov 2021 · Zechun Liu, Kwang-Ting Cheng, Dong Huang, Eric Xing, Zhiqiang Shen

The nonuniform quantization strategy for compressing neural networks usually achieves better performance than its counterpart, i.e., the uniform strategy, due to its superior representational capacity.

Quantization
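
For context on the contrast drawn above, here is a minimal PyTorch sketch of the uniform baseline with straight-through estimation; the function is illustrative, not the paper's N2UQ implementation, which learns nonuniform input thresholds rather than the equal spacing used here.

    import torch

    def uniform_quantize_ste(x, bits=2):
        # Illustrative uniform quantizer: clip to [0, 1], round to 2^bits - 1
        # equally spaced levels, and pass gradients straight through the
        # non-differentiable rounding step (straight-through estimation).
        levels = 2 ** bits - 1
        x_clipped = torch.clamp(x, 0.0, 1.0)
        x_quant = torch.round(x_clipped * levels) / levels
        # Forward returns x_quant; backward sees the identity w.r.t. x_clipped.
        return x_clipped + (x_quant - x_clipped).detach()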

Sliced Recursive Transformer

1 code implementation · 9 Nov 2021 · Zhiqiang Shen, Zechun Liu, Eric Xing

The proposed sliced recursive operation allows us to build a transformer with more than 100 or even 1000 layers effortlessly while keeping the model size small (13–15M parameters), avoiding the optimization difficulties that arise when the model grows too large.

Image Classification
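
A minimal PyTorch sketch of the recursion idea: one physical layer is reused across loops, so effective depth grows while the parameter count stays that of a single layer. This is illustrative only; the paper's sliced recursive operation additionally slices the recursion with group self-attention to keep FLOPs in check.

    import torch.nn as nn

    class RecursiveEncoder(nn.Module):
        # One transformer layer applied `loops` times with shared weights:
        # effective depth = loops, parameters = one layer.
        def __init__(self, dim=384, heads=6, loops=4):
            super().__init__()
            self.layer = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
            self.loops = loops

        def forward(self, x):
            for _ in range(self.loops):
                x = self.layer(x)
            return x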

How Do Adam and Training Strategies Help BNNs Optimization?

no code implementations · 21 Jun 2021 · Zechun Liu, Zhiqiang Shen, Shichao Li, Koen Helwegen, Dong Huang, Kwang-Ting Cheng

We show the regularization effect of second-order momentum in Adam is crucial to revitalize the weights that are dead due to the activation saturation in BNNs.
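
A toy numerical illustration of that claim (plain Python, not the paper's code): for a weight receiving a tiny but consistent gradient, an SGD step stays negligible, while Adam's second-order momentum rescales the update toward the full learning rate, reviving the "dead" weight.

    g = 1e-4                    # tiny, consistent gradient (saturated activation)
    lr = 1e-2
    sgd_step = lr * g           # ~1e-6: the weight barely moves
    m = v = 0.0
    beta1, beta2, eps = 0.9, 0.999, 1e-8
    for t in range(1, 101):     # 100 identical gradient steps
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g * g
    m_hat = m / (1 - beta1 ** 100)
    v_hat = v / (1 - beta2 ** 100)
    adam_step = lr * m_hat / (v_hat ** 0.5 + eps)   # ~1e-2: near the full lr
    print(sgd_step, adam_step)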

"BNN - BN = ?": Training Binary Neural Networks without Batch Normalization

1 code implementation · 16 Apr 2021 · Tianlong Chen, Zhenyu Zhang, Xu Ouyang, Zechun Liu, Zhiqiang Shen, Zhangyang Wang

However, the BN layer is costly to calculate and is typically implemented with non-binary parameters, posing a hurdle to the efficient implementation of BNN training.

Image Classification

S2-BNN: Bridging the Gap Between Self-Supervised Real and 1-bit Neural Networks via Guided Distribution Calibration

1 code implementation · CVPR 2021 · Zhiqiang Shen, Zechun Liu, Jie Qin, Lei Huang, Kwang-Ting Cheng, Marios Savvides

In this paper, we focus on this more difficult scenario: learning networks where both weights and activations are binary, without any human-annotated labels.

Contrastive Learning · Self-Supervised Learning
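
A hedged sketch of the distillation-style objective suggested by "guided distribution calibration": the 1-bit student matches the output distribution of a real-valued self-supervised teacher, with no labels involved. The temperature and the plain KL form are illustrative assumptions, not the paper's exact recipe.

    import torch.nn.functional as F

    def calibration_loss(student_logits, teacher_logits, tau=0.2):
        # Align the binary student's predicted distribution with the
        # real-valued teacher's (both unsupervised); tau is assumed.
        p_teacher = F.softmax(teacher_logits / tau, dim=-1)
        log_p_student = F.log_softmax(student_logits / tau, dim=-1)
        return F.kl_div(log_p_student, p_teacher, reduction='batchmean')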

Partial Is Better Than All: Revisiting Fine-tuning Strategy for Few-shot Learning

no code implementations · 8 Feb 2021 · Zhiqiang Shen, Zechun Liu, Jie Qin, Marios Savvides, Kwang-Ting Cheng

A common practice for this task is to train a model on the base set first and then transfer to novel classes through fine-tuning. (Here, the fine-tuning procedure is defined as transferring knowledge from base to novel data, i.e., learning to transfer in the few-shot scenario.)

Few-Shot Learning
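
A minimal PyTorch sketch of the base-then-novel recipe, fine-tuning only part of the network; which parts to unfreeze (here, the last stage plus a new head) is an illustrative choice, not the paper's searched strategy.

    import torch.nn as nn
    from torchvision.models import resnet18

    model = resnet18()                    # assume weights pre-trained on the base set
    for p in model.parameters():
        p.requires_grad = False           # freeze the whole backbone...
    for p in model.layer4.parameters():
        p.requires_grad = True            # ...except the last stage
    model.fc = nn.Linear(model.fc.in_features, 5)   # new 5-way head for novel classes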

Conditional Link Prediction of Category-Implicit Keypoint Detection

no code implementations · 29 Nov 2020 · Ellen Yi-Ge, Rui Fan, Zechun Liu, Zhiqiang Shen

Keypoints of objects reflect their concise abstractions, while the corresponding connection links (CL) build the skeleton by detecting the intrinsic relations between keypoints.

Keypoint Detection · Link Prediction

Weight-dependent Gates for Network Pruning

no code implementations · 4 Jul 2020 · Yun Li, Zechun Liu, Weiqun Wu, Haotian Yao, Xiangyu Zhang, Chi Zhang, Baoqun Yin

In this paper, a simple yet effective network pruning framework is proposed to simultaneously address the problems of pruning indicator, pruning ratio, and efficiency constraint.

Network Pruning
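
A hedged sketch of what a weight-dependent gate could look like in PyTorch: the channel on/off decision is computed from the convolution's own weights rather than from a free indicator variable. The per-channel statistic and the tiny mapping function are assumptions for illustration, not the paper's exact gate design.

    import torch
    import torch.nn as nn

    class WeightDependentGate(nn.Module):
        def __init__(self):
            super().__init__()
            self.fc = nn.Linear(1, 1)   # tiny learned map: weight statistic -> gate

        def forward(self, conv_weight):
            # Summarize each output channel's filter by its L1 norm, then map
            # it to a soft gate in (0, 1) that scales that channel's output.
            stats = conv_weight.abs().flatten(1).sum(dim=1, keepdim=True)  # (C_out, 1)
            return torch.sigmoid(self.fc(stats)).view(1, -1, 1, 1)

    # usage: y = conv(x) * gate(conv.weight)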

Joint Multi-Dimension Pruning via Numerical Gradient Update

no code implementations · 18 May 2020 · Zechun Liu, Xiangyu Zhang, Zhiqiang Shen, Zhe Li, Yichen Wei, Kwang-Ting Cheng, Jian Sun

To tackle these three naturally different dimensions, we propose a general framework by defining pruning as seeking the best pruning vector (i.e., the numerical values of layer-wise channel number, spatial size, and depth) and constructing a unique mapping from the pruning vector to the pruned network structures.
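
A small illustration of the pruning-vector idea (plain Python; names and layout are assumptions, not the paper's code): one vector jointly encodes per-layer channel counts, spatial size, and depth, and maps deterministically to a pruned architecture.

    def build_pruned_config(pruning_vector, num_layers=4):
        channels = pruning_vector[:num_layers]     # kept channels per layer
        spatial = pruning_vector[num_layers]       # input spatial size
        depth = pruning_vector[num_layers + 1]     # number of layers kept
        return {"channels": [int(c) for c in channels[:int(depth)]],
                "input_size": int(spatial)}

    print(build_pruned_config([64, 128, 256, 512, 192, 3]))
    # -> {'channels': [64, 128, 256], 'input_size': 192}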

Binarizing MobileNet via Evolution-based Searching

no code implementations · CVPR 2020 · Hai Phan, Zechun Liu, Dang Huynh, Marios Savvides, Kwang-Ting Cheng, Zhiqiang Shen

Inspired by one-shot architecture search frameworks, we manipulate the idea of group convolution to design efficient 1-Bit Convolutional Neural Networks (CNNs), assuming an approximately optimal trade-off between computational cost and model accuracy.
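
A minimal PyTorch sketch of the search space hinted at above: each candidate varies the number of groups in a (to-be-binarized) convolution, trading multiply-accumulates against accuracy. The evolutionary search itself and the binarization are omitted, and the settings are illustrative.

    import torch.nn as nn

    def make_candidate(in_ch=64, out_ch=64, groups=4):
        # More groups -> fewer connections and cheaper 1-bit compute;
        # the search picks `groups` per layer for the best trade-off.
        return nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1,
                         groups=groups, bias=False)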

ReActNet: Towards Precise Binary Neural Network with Generalized Activation Functions

1 code implementation · ECCV 2020 · Zechun Liu, Zhiqiang Shen, Marios Savvides, Kwang-Ting Cheng

In this paper, we propose several ideas for enhancing a binary network to close its accuracy gap from real-valued networks without incurring any additional computational cost.
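
One of the paper's generalized activation functions, RPReLU, is simple enough to sketch in PyTorch: a PReLU with learnable per-channel shifts before and after the nonlinearity, letting the binary network reshape and shift activation distributions at negligible extra cost.

    import torch
    import torch.nn as nn

    class RPReLU(nn.Module):
        def __init__(self, channels):
            super().__init__()
            self.gamma = nn.Parameter(torch.zeros(1, channels, 1, 1))  # pre-shift
            self.zeta = nn.Parameter(torch.zeros(1, channels, 1, 1))   # post-shift
            self.prelu = nn.PReLU(channels)

        def forward(self, x):                  # x: (N, C, H, W)
            return self.prelu(x - self.gamma) + self.zeta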

Latent Weights Do Not Exist: Rethinking Binarized Neural Network Optimization

3 code implementations · NeurIPS 2019 · Koen Helwegen, James Widdicombe, Lukas Geiger, Zechun Liu, Kwang-Ting Cheng, Roeland Nusselder

Together, the redefinition of latent weights as inertia and the introduction of Bop enable a better understanding of BNN optimization and open up the way for further improvements in training methodologies for BNNs.
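
A PyTorch sketch of the Bop update rule described in the paper: binary weights carry no latent real values; an exponential moving average of the gradient acts as inertia, and a weight flips only when the accumulated evidence exceeds a threshold and points against its current sign. The hyperparameter values here are placeholders.

    import torch

    def bop_step(w, grad, m, gamma=1e-4, tau=1e-6):
        # w: binary weights in {-1, +1}; m: gradient moving average ("inertia")
        m = (1 - gamma) * m + gamma * grad
        flip = (m.abs() > tau) & (torch.sign(m) == torch.sign(w))
        return torch.where(flip, -w, w), m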

Bi-Real Net: Binarizing Deep Network Towards Real-Network Performance

1 code implementation · 4 Nov 2018 · Zechun Liu, Wenhan Luo, Baoyuan Wu, Xin Yang, Wei Liu, Kwang-Ting Cheng

To address the training difficulty, we propose a training algorithm using a tighter approximation to the derivative of the sign function, a magnitude-aware gradient for weight updating, a better initialization method, and a two-step scheme for training a deep network.

Depth Estimation
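
The "tighter approximation to the derivative of the sign function" can be sketched as a custom autograd function in PyTorch: the forward pass is the usual sign, while the backward pass uses the derivative of Bi-Real Net's piecewise polynomial (2 + 2x on [-1, 0), 2 - 2x on [0, 1), 0 elsewhere) instead of the vanilla straight-through estimator's clipped identity.

    import torch

    class ApproxSign(torch.autograd.Function):
        @staticmethod
        def forward(ctx, x):
            ctx.save_for_backward(x)
            return torch.sign(x)

        @staticmethod
        def backward(ctx, grad_out):
            (x,) = ctx.saved_tensors
            # Piecewise-polynomial derivative; zero outside [-1, 1].
            grad = torch.where(x < 0, 2 + 2 * x, 2 - 2 * x).clamp(min=0)
            return grad_out * grad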
