2 code implementations • 26 Jan 2022 • Jinjie Zhang, Yixuan Zhou, Rayan Saab
Additionally, our error analysis extends previous results on GPFQ to handle general quantization alphabets, showing that for quantizing a single-layer network, the relative square error essentially decays linearly in the number of weights, i.e., in the level of over-parametrization.
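For intuition, here is a minimal NumPy sketch of a GPFQ-style greedy recursion for a single neuron, not the authors' released implementation; the data model, ternary alphabet, and scaling below are illustrative assumptions.

```python
import numpy as np

def gpfq_quantize(w, X, alphabet):
    """Greedy path-following quantization of one neuron's weights.

    w: analog weights, shape (N,); X: data matrix, shape (m, N), whose
    column X[:, t] is the feature seen by weight w[t]; alphabet: 1-D
    array of allowed quantized values.
    """
    m, N = X.shape
    q = np.zeros(N)
    u = np.zeros(m)  # running residual between analog and quantized pre-activations
    for t in range(N):
        Xt = X[:, t]
        # Coefficient that would best cancel the residual along X[:, t]
        c = (u + w[t] * Xt) @ Xt / (Xt @ Xt)
        # Greedily round it to the nearest alphabet element
        q[t] = alphabet[np.argmin(np.abs(alphabet - c))]
        u = u + (w[t] - q[t]) * Xt
    return q

rng = np.random.default_rng(0)
m, N = 512, 256
X = rng.standard_normal((m, N))
w = rng.standard_normal(N) / np.sqrt(N)
# Illustrative ternary alphabet scaled to the weight range (an assumption)
alphabet = np.array([-1.0, 0.0, 1.0]) * np.abs(w).max()
q = gpfq_quantize(w, X, alphabet)
rel_err = np.linalg.norm(X @ (w - q))**2 / np.linalg.norm(X @ w)**2
print(f"relative square error: {rel_err:.4f}")
```

Under the stated error bound, increasing N while keeping the data distribution fixed should shrink the printed relative square error roughly linearly.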
1 code implementation • ICLR 2021 • Jinjie Zhang, Rayan Saab
When $\mathcal{T}$ consists of well-spread (i.e., non-sparse) vectors, our embedding method applies a stable noise-shaping quantization scheme to $Ax$, where $A\in\mathbb{R}^{m\times n}$ is a sparse Gaussian random matrix.
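A rough sketch of this pipeline, under illustrative assumptions (a first-order Sigma-Delta quantizer and a Bernoulli-Gaussian sparsity model for $A$; the paper's exact noise-shaping scheme and normalization may differ):

```python
import numpy as np

def sigma_delta_binary(y):
    """First-order Sigma-Delta quantizer mapping y in R^m to {-1, +1}^m.

    The state u carries the accumulated quantization error forward, so the
    error is "noise-shaped" and largely cancels under a smoothing decoder.
    """
    q = np.empty_like(y)
    u = 0.0
    for i, yi in enumerate(y):
        q[i] = 1.0 if yi + u >= 0.0 else -1.0
        u += yi - q[i]
    return q

rng = np.random.default_rng(1)
m, n, density = 1024, 256, 0.1
# Bernoulli-Gaussian model of a sparse Gaussian random matrix (an assumption)
mask = rng.random((m, n)) < density
A = np.where(mask, rng.standard_normal((m, n)), 0.0) / np.sqrt(density * m)
x = rng.standard_normal(n)
x /= np.linalg.norm(x)             # a generic well-spread unit vector
bits = sigma_delta_binary(A @ x)   # the m-bit embedding of x
```

In the paper, Euclidean distances between points are then approximately recovered from such bit strings via a suitable decoding map; the quantizer order and matrix model here are placeholders.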
no code implementations • 4 Jun 2021 • Jinjie Zhang, Harish Kannan, Alexander Cloninger, Rayan Saab
We propose the use of low bit-depth Sigma-Delta and distributed noise-shaping methods for quantizing the Random Fourier features (RFFs) associated with shift-invariant kernels.
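The following sketch illustrates the idea on a Gaussian kernel: compute standard random Fourier features, pass them through a first-order Sigma-Delta quantizer with a three-level alphabet, and compare the inner product of the quantized features against the true kernel value. The alphabet, the decoder-free inner-product estimate, and all parameter choices are assumptions for illustration, not the paper's exact construction.

```python
import numpy as np

def rff(x, W, b):
    """Random Fourier features for the Gaussian kernel (Rahimi-Recht style)."""
    D = W.shape[0]
    return np.sqrt(2.0 / D) * np.cos(W @ x + b)

def sigma_delta(y, levels):
    """First-order Sigma-Delta quantization of y onto a finite alphabet."""
    q = np.empty_like(y)
    u = 0.0
    for i, yi in enumerate(y):
        q[i] = levels[np.argmin(np.abs(levels - (yi + u)))]
        u += yi - q[i]
    return q

rng = np.random.default_rng(2)
d, D, sigma = 8, 2048, 1.0
W = rng.standard_normal((D, d)) / sigma      # rows ~ N(0, sigma^{-2} I)
b = rng.uniform(0.0, 2.0 * np.pi, size=D)
x = rng.standard_normal(d)
y = x + 0.1 * rng.standard_normal(d)         # a nearby point
# Three-level (low bit-depth) alphabet covering the feature range (an assumption)
levels = np.linspace(-1.0, 1.0, 3) * np.sqrt(2.0 / D)
qx = sigma_delta(rff(x, W, b), levels)
qy = sigma_delta(rff(y, W, b), levels)
true_k = np.exp(-np.linalg.norm(x - y)**2 / (2.0 * sigma**2))
print(f"true kernel {true_k:.3f} vs quantized-RFF estimate {qx @ qy:.3f}")
```

The point of the noise-shaping quantizers is that, at a fixed bit budget, the kernel approximation error decays faster in the number of features D than with naive per-coordinate rounding.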
no code implementations • 20 Sep 2023 • Jinjie Zhang, Rayan Saab
Quantization is a widely used compression method that effectively reduces redundancies in over-parameterized neural networks.