Search Results for author: Jinjie Zhang

Found 4 papers, 2 papers with code

SPFQ: A Stochastic Algorithm and Its Error Analysis for Neural Network Quantization

no code implementations • 20 Sep 2023 • Jinjie Zhang, Rayan Saab

Quantization is a widely used compression method that effectively reduces redundancies in over-parameterized neural networks.

Quantization

Post-training Quantization for Neural Networks with Provable Guarantees

2 code implementations • 26 Jan 2022 • Jinjie Zhang, Yixuan Zhou, Rayan Saab

Additionally, our error analysis expands the results of previous work on GPFQ to handle general quantization alphabets, showing that for quantizing a single-layer network, the relative square error essentially decays linearly in the number of weights -- i.e., the level of over-parametrization.

Quantization
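
For orientation, here is a minimal NumPy sketch of a GPFQ-style greedy quantizer for a single neuron. The function name, the single-neuron setup, and the nearest-element rounding rule are illustrative reconstructions of the idea, not the paper's released implementation.

```python
import numpy as np

def gpfq_quantize(w, X, alphabet):
    """Greedy path-following (GPFQ-style) quantization of one neuron.

    w        : (N,) real weights of a single-layer neuron (illustrative setup)
    X        : (m, N) data matrix; column t holds feature t across m samples
    alphabet : 1-D array of allowed quantized weight values
    """
    q = np.zeros_like(w)
    u = np.zeros(X.shape[0])                  # residual of X @ (w[:t] - q[:t])
    for t in range(len(w)):
        x_t = X[:, t]
        # coefficient that would best cancel the accumulated residual
        c = x_t @ (u + w[t] * x_t) / (x_t @ x_t)
        q[t] = alphabet[np.argmin(np.abs(alphabet - c))]  # round to the alphabet
        u += (w[t] - q[t]) * x_t              # carry the error forward
    return q
```

Tracking the relative error $\|X(w-q)\|^2 / \|Xw\|^2$ while growing $N$ is one way to observe empirically the linear decay the abstract describes.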

Sigma-Delta and Distributed Noise-Shaping Quantization Methods for Random Fourier Features

no code implementations • 4 Jun 2021 • Jinjie Zhang, Harish Kannan, Alexander Cloninger, Rayan Saab

We propose the use of low bit-depth Sigma-Delta and distributed noise-shaping methods for quantizing the Random Fourier features (RFFs) associated with shift-invariant kernels.

Quantization
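
As a rough illustration of the proposed pipeline, the sketch below computes Gaussian-kernel RFFs and passes them through a first-order Sigma-Delta quantizer. The unit bandwidth, the one-bit alphabet, and the first-order recursion are assumptions made for the example; the paper also covers higher-order and distributed noise-shaping variants.

```python
import numpy as np

def rff(x, W, b):
    """Random Fourier features for a shift-invariant (Gaussian) kernel."""
    m = W.shape[0]
    return np.sqrt(2.0 / m) * np.cos(W @ x + b)

def sigma_delta(z, alphabet):
    """First-order Sigma-Delta quantization of the sequence z."""
    q = np.empty_like(z)
    u = 0.0                                   # state: accumulated quantization error
    for i, z_i in enumerate(z):
        q[i] = alphabet[np.argmin(np.abs(alphabet - (z_i + u)))]
        u += z_i - q[i]                       # recursion: u_i = u_{i-1} + z_i - q_i
    return q

rng = np.random.default_rng(0)
d, m = 8, 512
W = rng.normal(size=(m, d))                   # unit-bandwidth Gaussian kernel (assumed)
b = rng.uniform(0.0, 2.0 * np.pi, size=m)
x = rng.normal(size=d)
q_bits = sigma_delta(rff(x, W, b), alphabet=np.array([-1.0, 1.0]))
```

Since the features are bounded by $\sqrt{2/m} \le 1$ in magnitude, the one-bit state $u$ stays bounded and the recursion is stable.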

Faster Binary Embeddings for Preserving Euclidean Distances

1 code implementation • ICLR 2021 • Jinjie Zhang, Rayan Saab

When $\mathcal{T}$ consists of well-spread (i.e., non-sparse) vectors, our embedding method applies a stable noise-shaping quantization scheme to $Ax$ where $A\in\mathbb{R}^{m\times n}$ is a sparse Gaussian random matrix.

Quantization
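
Below is a compact sketch of the two ingredients named in the snippet: a sparse Gaussian random matrix and a stable one-bit noise-shaping ($\beta$-recursion) quantizer applied to $Ax$. The sparsity level, scaling, and $\beta$ value are illustrative assumptions, not the paper's prescribed parameters.

```python
import numpy as np

def sparse_gaussian(m, n, density, rng):
    """Sparse Gaussian matrix: each entry is N(0, 1) with prob. `density`, else 0."""
    mask = rng.random((m, n)) < density
    return np.where(mask, rng.normal(size=(m, n)), 0.0)

def binary_embed(x, A, beta=1.1):
    """One-bit noise-shaping (beta-recursion) quantization of A @ x.

    Stability needs |(A @ x)_i| <= 2 - beta, so x is assumed scaled accordingly.
    """
    y = A @ x
    q = np.empty_like(y)
    u = 0.0                                    # internal state of the quantizer
    for i, y_i in enumerate(y):
        q[i] = 1.0 if beta * u + y_i >= 0 else -1.0   # sign quantizer
        u = beta * u + y_i - q[i]              # u_i = beta * u_{i-1} + y_i - q_i
    return q
```

Euclidean distances between vectors in $\mathcal{T}$ are then estimated from the binary codes by a decoder that weights the bits geometrically in $\beta$; that step is omitted here.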
