Search Results for author: Yanjing Li

Found 17 papers, 8 papers with code

SySMOL: A Hardware-software Co-design Framework for Ultra-Low and Fine-Grained Mixed-Precision Neural Networks

no code implementations • 23 Nov 2023 • Cyrus Zhou, Vaughn Richard, Pedro Savarese, Zachary Hassman, Michael Maire, Michael DiBrino, Yanjing Li

When coupled with system-aware training and inference optimization, the mixed-precision design that achieves the best tradeoffs corresponds to an architecture supporting 1-, 2-, and 4-bit fixed-point operations with four configurable precision patterns: networks trained for this design achieve accuracies that closely match full-precision accuracies, while drastically compressing the networks and improving their run-time efficiency by 10-20x compared to full-precision networks.

Inference Optimization • Quantization
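The 1-, 2-, and 4-bit fixed-point quantization the abstract refers to can be sketched as below. This is an illustrative sketch with a hypothetical `quantize_fixed_point` helper, not SySMOL's actual per-region precision assignment or training procedure:

```python
import numpy as np

def quantize_fixed_point(x, bits):
    """Uniform symmetric fixed-point quantization to `bits` bits.

    Illustrative only: real mixed-precision systems assign different
    bitwidths to different regions of the network and fine-tune.
    """
    if bits == 1:
        # Binary case: sign times a scaling factor (mean magnitude).
        scale = np.abs(x).mean()
        return np.where(x >= 0.0, scale, -scale)
    qmax = 2 ** (bits - 1) - 1            # 1 for 2-bit, 7 for 4-bit
    scale = np.abs(x).max() / qmax
    q = np.clip(np.round(x / scale), -qmax, qmax)
    return q * scale                      # dequantized values

w = np.random.default_rng(0).standard_normal(64)
for b in (1, 2, 4):
    err = np.abs(w - quantize_fixed_point(w, b)).mean()
    print(f"{b}-bit mean abs error: {err:.3f}")
```

The reconstruction error shrinks as the bitwidth grows, which is the tradeoff the paper's co-design framework navigates.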

YFlows: Systematic Dataflow Exploration and Code Generation for Efficient Neural Network Inference using SIMD Architectures on CPUs

no code implementations • 1 Oct 2023 • Cyrus Zhou, Zack Hassman, Ruize Xu, Dhirpal Shah, Vaughn Richard, Yanjing Li

Our results demonstrate that the dataflow that keeps outputs in SIMD registers while maximizing both input and weight reuse consistently yields the best performance across a wide variety of inference workloads, achieving speedups of up to 3x for 8-bit neural networks and up to 4.8x for binary neural networks over today's optimized neural network implementations.

Code Generation • Efficient Neural Network
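The output-stationary dataflow described in the abstract can be modeled in scalar form as follows. This is a sketch of the loop ordering only, with a hypothetical `output_stationary_matvec` name; the paper generates vectorized SIMD code, not Python:

```python
import numpy as np

def output_stationary_matvec(W, x):
    """Output-stationary dataflow sketch for y = W @ x.

    Each output accumulator stays "in register" across the entire
    reduction (inner k-loop), and each loaded input element is reused
    before moving on -- a scalar model of the SIMD dataflow idea.
    """
    M, K = W.shape
    y = np.zeros(M)
    for m in range(M):       # one output held for the whole reduction
        acc = 0.0            # models a SIMD accumulator register
        for k in range(K):
            acc += W[m, k] * x[k]
        y[m] = acc           # written back only once per output
    return y
```

Keeping the accumulator live across the reduction avoids repeated loads and stores of partial sums, which is where the reported speedups come from.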

DCP-NAS: Discrepant Child-Parent Neural Architecture Search for 1-bit CNNs

no code implementations • 27 Jun 2023 • Yanjing Li, Sheng Xu, Xianbin Cao, Li'an Zhuo, Baochang Zhang, Tian Wang, Guodong Guo

One natural approach is to use 1-bit CNNs to reduce the computation and memory cost of NAS, taking advantage of the strengths of each in a unified framework; however, searching over 1-bit CNNs is more challenging due to the more complicated processes involved.

Neural Architecture Search • Object Detection +2

Bi-ViT: Pushing the Limit of Vision Transformer Quantization

no code implementations • 21 May 2023 • Yanjing Li, Sheng Xu, Mingbao Lin, Xianbin Cao, Chuanjian Liu, Xiao Sun, Baochang Zhang

Quantization of vision transformers (ViTs) offers a promising route to deploying large pre-trained networks on resource-limited devices.

Binarization • Quantization

Q-DETR: An Efficient Low-Bit Quantized Detection Transformer

1 code implementation • CVPR 2023 • Sheng Xu, Yanjing Li, Mingbao Lin, Peng Gao, Guodong Guo, Jinhu Lu, Baochang Zhang

At the upper level, we introduce a new foreground-aware query matching scheme that effectively transfers teacher information to distillation-desired features, minimizing the conditional information entropy.

Object Detection +1

Implicit Diffusion Models for Continuous Super-Resolution

1 code implementation • CVPR 2023 • Sicheng Gao, Xuhui Liu, Bohan Zeng, Sheng Xu, Yanjing Li, Xiaoyan Luo, Jianzhuang Liu, XianTong Zhen, Baochang Zhang

IDM integrates an implicit neural representation and a denoising diffusion model in a unified end-to-end framework, where the implicit neural representation is adopted in the decoding process to learn continuous-resolution representations.

Denoising • Image Super-Resolution
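The implicit-neural-representation idea in the decoder can be sketched as a small coordinate MLP queried at any continuous resolution. This is an illustrative toy with random weights and a hypothetical `mlp_inr` name; IDM's real decoder conditions on diffusion features:

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_inr(coords, params):
    """Tiny MLP mapping 2-D coordinates to RGB values.

    An implicit neural representation: the "image" is a function of
    continuous coordinates, so any output resolution can be sampled.
    """
    W1, b1, W2, b2 = params
    h = np.maximum(coords @ W1 + b1, 0.0)   # ReLU hidden layer
    return h @ W2 + b2

params = (rng.standard_normal((2, 32)), np.zeros(32),
          rng.standard_normal((32, 3)), np.zeros(3))

# Query at an arbitrary resolution: here a 5x5 grid over [0, 1]^2.
xs, ys = np.meshgrid(np.linspace(0, 1, 5), np.linspace(0, 1, 5))
coords = np.stack([xs.ravel(), ys.ravel()], axis=1)
rgb = mlp_inr(coords, params)               # shape (25, 3)
```

Because the representation is a continuous function, sampling a denser coordinate grid yields a higher-resolution output from the same model, which is what enables continuous super-resolution.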

Resilient Binary Neural Network

1 code implementation • 2 Feb 2023 • Sheng Xu, Yanjing Li, Teli Ma, Mingbao Lin, Hao Dong, Baochang Zhang, Peng Gao, Jinhu Lv

In this paper, we introduce a Resilient Binary Neural Network (ReBNN) that mitigates frequent weight oscillation and thereby improves BNN training.
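The binarization that underlies BNN training can be sketched as follows. This shows only standard sign binarization with a straight-through estimator, using hypothetical `binarize` and `ste_grad` helpers; ReBNN's contribution, a learned balanced parameter that damps sign flips, is omitted:

```python
import numpy as np

def binarize(w):
    """Sign binarization with a mean-magnitude scale, as in standard
    BNNs. A weight "oscillates" when small updates repeatedly flip
    its sign between iterations -- the problem ReBNN targets.
    """
    scale = np.abs(w).mean()
    return scale * np.sign(w)

def ste_grad(grad_out, w, clip=1.0):
    """Straight-through estimator: sign() has zero gradient almost
    everywhere, so pass gradients through only where |w| <= clip."""
    return grad_out * (np.abs(w) <= clip)
```

A weight near zero with a large pass-through gradient is exactly the case that flips sign every step; ReBNN reweights the reconstruction loss to suppress such flips.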

Q-ViT: Accurate and Fully Quantized Low-bit Vision Transformer

1 code implementation • 13 Oct 2022 • Yanjing Li, Sheng Xu, Baochang Zhang, Xianbin Cao, Peng Gao, Guodong Guo

Large pre-trained vision transformers (ViTs) have demonstrated remarkable performance on various visual tasks, but incur high computational and memory costs when deployed on resource-constrained devices.

Quantization

IDa-Det: An Information Discrepancy-aware Distillation for 1-bit Detectors

1 code implementation • 7 Oct 2022 • Sheng Xu, Yanjing Li, Bohan Zeng, Teli Ma, Baochang Zhang, Xianbin Cao, Peng Gao, Jinhu Lv

This explains why existing KD methods are less effective for 1-bit detectors: there is a significant information discrepancy between the real-valued teacher and the 1-bit student.

Knowledge Distillation • Object Detection +1

Recurrent Bilinear Optimization for Binary Neural Networks

2 code implementations • 4 Sep 2022 • Sheng Xu, Yanjing Li, Tiancheng Wang, Teli Ma, Baochang Zhang, Peng Gao, Yu Qiao, Jinhu Lv, Guodong Guo

To address this issue, Recurrent Bilinear Optimization is proposed to improve the learning process of BNNs (RBONNs) by associating the intrinsic bilinear variables in the backpropagation process.

Object Detection

TerViT: An Efficient Ternary Vision Transformer

no code implementations • 20 Jan 2022 • Sheng Xu, Yanjing Li, Teli Ma, Bohan Zeng, Baochang Zhang, Peng Gao, Jinhu Lv

Vision transformers (ViTs) have demonstrated great potential in various visual tasks, but incur high computational and memory costs when deployed on resource-constrained devices.

POEM: 1-bit Point-wise Operations based on Expectation-Maximization for Efficient Point Cloud Processing

no code implementations • 26 Nov 2021 • Sheng Xu, Yanjing Li, Junhe Zhao, Baochang Zhang, Guodong Guo

Real-time point cloud processing is fundamental to many computer vision tasks, but remains challenging due to computational constraints on resource-limited edge devices.

Graph Neural Based End-to-end Data Association Framework for Online Multiple-Object Tracking

1 code implementation • 11 Jul 2019 • Xiaolong Jiang, Peizhao Li, Yanjing Li, Xian-Tong Zhen

In this work, we present an end-to-end framework to settle data association in online Multiple-Object Tracking (MOT).

Multiple Object Tracking
