Search Results for author: Kaiqi Zhang

Found 10 papers, 3 papers with code

Nonparametric Classification on Low Dimensional Manifolds using Overparameterized Convolutional Residual Networks

no code implementations • 4 Jul 2023 • Kaiqi Zhang, Zixuan Zhang, Minshuo Chen, Yuma Takeda, Mengdi Wang, Tuo Zhao, Yu-Xiang Wang

Convolutional residual neural networks (ConvResNets), though overparameterized, can achieve remarkable prediction performance in practice, which cannot be well explained by conventional wisdom.

Why Quantization Improves Generalization: NTK of Binary Weight Neural Networks

no code implementations • 13 Jun 2022 • Kaiqi Zhang, Ming Yin, Yu-Xiang Wang

We propose a quasi neural network, a network with continuous parameters and a smooth activation function, to approximate the distribution propagation.

Quantization
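The distribution-propagation idea can be illustrated with a mean-field-style moment calculation (a hypothetical sketch under independence assumptions, not the paper's exact construction): instead of sampling binary weights, propagate the mean and variance of each pre-activation through the layer.

```python
import numpy as np

def quasi_layer(mu, var, W_mean, W_var):
    """Propagate per-unit means and variances of activations through one
    layer with stochastic (e.g. binarized) weights, assuming inputs and
    weights are independent (a common mean-field approximation)."""
    # E[W @ x] when W and x are independent
    mu_out = W_mean @ mu
    # Var(sum_j W_ij x_j) = sum_j [ Var(W)(mu^2 + var) + E[W]^2 var ]
    var_out = W_var @ (mu ** 2 + var) + (W_mean ** 2) @ var
    return mu_out, var_out

rng = np.random.default_rng(0)
d = 4
mu, var = rng.normal(size=d), np.ones(d)
# Binary weights in {-1, +1} scaled by 1/sqrt(d): mean 0, variance 1/d
W_mean = np.zeros((d, d))
W_var = np.full((d, d), 1.0 / d)
mu2, var2 = quasi_layer(mu, var, W_mean, W_var)
```

With zero-mean binary weights the output means vanish while the variances stay positive, which is exactly the kind of distributional information the quasi network carries in place of discrete samples.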

Deep Learning meets Nonparametric Regression: Are Weight-Decayed DNNs Locally Adaptive?

no code implementations • 20 Apr 2022 • Kaiqi Zhang, Yu-Xiang Wang

We consider a "Parallel NN" variant of deep ReLU networks and show that the standard weight decay is equivalent to promoting the $\ell_p$-sparsity ($0<p<1$) of the coefficient vector of end-to-end learned function bases, i.e., a dictionary.

regression
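The connection between weight decay and sparsity rests on a classic factorization identity, which a quick numerical check illustrates (an illustrative sketch only; the paper's parallel-NN analysis generalizes this to $\ell_p$ quasi-norms with $0<p<1$ for deeper factorizations):

```python
import numpy as np

# For a scalar weight factored as w = u * v, minimizing the l2 penalty
# (u^2 + v^2) / 2 over all factorizations yields |w|: weight decay on
# the factors acts as a sparsity-promoting l1 penalty on the product.

def min_decay(w, grid=np.linspace(0.01, 10, 100_000)):
    # Parameterize u = t, v = w / t and minimize over t > 0 on a grid.
    return np.min((grid ** 2 + (w / grid) ** 2) / 2)
```

For instance, `min_decay(3.0)` is numerically indistinguishable from `3.0`, matching the closed-form minimizer $t = \sqrt{|w|}$.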

3U-EdgeAI: Ultra-Low Memory Training, Ultra-Low Bitwidth Quantization, and Ultra-Low Latency Acceleration

no code implementations • 11 May 2021 • Yao Chen, Cole Hawkins, Kaiqi Zhang, Zheng Zhang, Cong Hao

This paper emphasizes the importance and efficacy of training, quantization and accelerator design, and calls for more research breakthroughs in the area for AI on the edge.

Model Compression · Quantization

Active Subspace of Neural Networks: Structural Analysis and Universal Attacks

1 code implementation • 29 Oct 2019 • Chunfeng Cui, Kaiqi Zhang, Talgat Daulbaev, Julia Gusak, Ivan Oseledets, Zheng Zhang

Secondly, we propose analyzing the vulnerability of a neural network using active subspace and finding an additive universal adversarial attack vector that can misclassify a dataset with a high probability.

Adversarial Attack · Uncertainty Quantification
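The active-subspace recipe can be sketched on synthetic data (a hypothetical setup with made-up gradients, not the paper's code): estimate the gradient covariance $C = \mathbb{E}[g g^\top]$ from per-sample loss gradients and take its top eigenvector as a single perturbation direction shared across the dataset.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 10
# Synthetic per-sample gradients concentrated along a hidden direction v
v = np.zeros(d)
v[0] = 1.0
grads = rng.normal(size=(n, d)) * 0.1 + rng.normal(size=(n, 1)) * v

C = grads.T @ grads / n            # empirical gradient covariance
eigvals, eigvecs = np.linalg.eigh(C)
u = eigvecs[:, -1]                 # top active-subspace direction
universal_delta = 0.1 * u          # epsilon-scaled universal perturbation
```

Because the synthetic gradients share one dominant direction, the top eigenvector recovers it; in the paper's setting this shared direction is what makes a single additive perturbation effective across many inputs.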

Tucker Tensor Decomposition on FPGA

no code implementations • 28 Jun 2019 • Kaiqi Zhang, Xiyuan Zhang, Zheng Zhang

This paper presents a hardware accelerator for a classical tensor computation framework, Tucker decomposition.

Signal Processing · Hardware Architecture
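The computation the accelerator targets can be sketched in NumPy via higher-order SVD (HOSVD), one standard way to compute a Tucker decomposition (illustrative only; the paper implements this on an FPGA, not in Python):

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: move axis `mode` to the front, flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd(T, ranks):
    """Tucker decomposition via HOSVD: factor matrices from the leading
    left singular vectors of each unfolding, core by projection."""
    U = [np.linalg.svd(unfold(T, n), full_matrices=False)[0][:, :r]
         for n, r in enumerate(ranks)]
    G = T
    for n, Un in enumerate(U):
        # Contract mode n of the core-in-progress with Un^T
        G = np.moveaxis(np.tensordot(Un.T, np.moveaxis(G, n, 0), axes=1), 0, n)
    return G, U

rng = np.random.default_rng(0)
T = rng.normal(size=(6, 7, 8))
G, U = hosvd(T, (6, 7, 8))   # full ranks: reconstruction is exact
R = G
for n, Un in enumerate(U):   # multiply the core by each factor matrix
    R = np.moveaxis(np.tensordot(Un, np.moveaxis(R, n, 0), axes=1), 0, n)
```

With full ranks the factors are square orthogonal matrices, so `R` reproduces `T` exactly; truncating the ranks gives the compressed core-plus-factors form that a hardware pipeline would stream through.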

StructADMM: A Systematic, High-Efficiency Framework of Structured Weight Pruning for DNNs

1 code implementation • 29 Jul 2018 • Tianyun Zhang, Shaokai Ye, Kaiqi Zhang, Xiaolong Ma, Ning Liu, Linfeng Zhang, Jian Tang, Kaisheng Ma, Xue Lin, Makan Fardad, Yanzhi Wang

Without loss of accuracy on the AlexNet model, we achieve 2.58X and 3.65X average measured speedup on two GPUs, clearly outperforming the prior work.

Model Compression

A Systematic DNN Weight Pruning Framework using Alternating Direction Method of Multipliers

3 code implementations • ECCV 2018 • Tianyun Zhang, Shaokai Ye, Kaiqi Zhang, Jian Tang, Wujie Wen, Makan Fardad, Yanzhi Wang

We first formulate the weight pruning problem of DNNs as a nonconvex optimization problem with combinatorial constraints specifying the sparsity requirements, and then adopt the ADMM framework for systematic weight pruning.

Image Classification · Network Pruning
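The ADMM splitting described in the abstract can be sketched on a toy quadratic loss standing in for the DNN loss (variable names and setup are illustrative, not the paper's code): minimize $f(W)$ subject to $\|W\|_0 \le k$ by splitting into $f(W) + g(Z)$ with $W = Z$, where $g$ is the indicator of the sparsity set and the $Z$-update is a Euclidean projection that keeps the $k$ largest-magnitude entries.

```python
import numpy as np

def project_topk(W, k):
    """Euclidean projection onto {Z : ||Z||_0 <= k}: keep top-k magnitudes."""
    Z = np.zeros_like(W)
    idx = np.argsort(np.abs(W).ravel())[-k:]
    Z.ravel()[idx] = W.ravel()[idx]
    return Z

rng = np.random.default_rng(0)
d, k, rho = 20, 5, 1.0
A = rng.normal(size=(50, d))
w_true = np.zeros(d)
w_true[:k] = [2.0, -3.0, 1.5, 2.5, -2.0]   # planted k-sparse weights
b = A @ w_true                             # toy "training data"

W = np.zeros(d); Z = np.zeros(d); U = np.zeros(d)
H = A.T @ A + rho * np.eye(d)              # Hessian of the W-subproblem
for _ in range(100):
    # W-update: minimize ||A W - b||^2/2 + (rho/2)||W - Z + U||^2
    W = np.linalg.solve(H, A.T @ b + rho * (Z - U))
    Z = project_topk(W + U, k)             # Z-update: projection step
    U = U + W - Z                          # dual (scaled multiplier) update
```

The final `Z` is guaranteed k-sparse by construction, and the alternation pulls it toward a sparse weight vector that still fits the data, which is the systematic pruning behavior the framework exploits at DNN scale.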
