
no code implementations • 7 Sep 2022 • Johannes Maly, Rayan Saab

In this short note, we propose a new method for quantizing the weights of a fully trained neural network.

no code implementations • 9 Feb 2022 • Mahta Mousavi, Eric Lybrand, Shuangquan Feng, Shuai Tang, Rayan Saab, Virginia de Sa

In this work, we propose a novel algorithm called Spectrally Adaptive Common Spatial Patterns (SACSP) that improves CSP by learning a temporal/spectral filter for each spatial filter so that the spatial filters are concentrated on the most relevant temporal frequencies for each user.
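SACSP extends classical Common Spatial Patterns (CSP). As background, here is a minimal numpy sketch of plain CSP — whitening the composite covariance and then eigendecomposing one whitened class covariance — on a toy two-class example. This illustrates only the CSP baseline, not the SACSP algorithm; the covariances and sizes are illustrative assumptions.

```python
import numpy as np

def csp_filters(C1, C2):
    """Classical Common Spatial Patterns via whitening + eigendecomposition.

    C1, C2: class-conditional channel covariance matrices.
    Returns spatial filters (rows), ordered by decreasing variance ratio
    for class 1 relative to class 2.
    """
    # Whiten the composite covariance C1 + C2.
    evals, evecs = np.linalg.eigh(C1 + C2)
    P = np.diag(evals ** -0.5) @ evecs.T
    # Eigendecompose the whitened class-1 covariance.
    d, B = np.linalg.eigh(P @ C1 @ P.T)
    order = np.argsort(d)[::-1]           # largest class-1 variance ratio first
    return B[:, order].T @ P

# Toy example: class 1 has high variance on channel 0, class 2 on channel 1.
C1 = np.diag([4.0, 1.0, 1.0])
C2 = np.diag([1.0, 4.0, 1.0])
W = csp_filters(C1, C2)
w = W[0]                                  # top spatial filter
ratio = (w @ C1 @ w) / (w @ C2 @ w)       # variance ratio between the classes
```

The top filter concentrates on channel 0, where class 1 carries the most variance; SACSP additionally learns a temporal/spectral filter per such spatial filter.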

1 code implementation • 26 Jan 2022 • Jinjie Zhang, Yixuan Zhou, Rayan Saab

Additionally, our error analysis expands the results of previous work on GPFQ to handle general quantization alphabets, showing that for quantizing a single-layer network, the relative square error essentially decays linearly in the number of weights -- i.e., the level of over-parametrization.
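A minimal numpy sketch of the greedy, data-driven idea behind GPFQ for a single neuron: each weight is replaced by the alphabet element that best corrects the running residual on the training data. The sizes, alphabet, and Gaussian data here are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

rng = np.random.default_rng(0)
m, d = 512, 128                     # samples, weights (illustrative sizes)
X = rng.standard_normal((m, d))     # Gaussian "training" data
w = rng.uniform(-1, 1, d)           # pretrained neuron weights
alphabet = np.linspace(-1, 1, 16)   # a general (here 4-bit) quantization alphabet

# Greedily pick q[t] so that X @ q tracks X @ w; u is the running residual.
q = np.zeros(d)
u = np.zeros(m)
for t in range(d):
    v = u + w[t] * X[:, t]
    # best alphabet element for the projection of v onto the t-th data column
    c = X[:, t] @ v / (X[:, t] @ X[:, t])
    q[t] = alphabet[np.argmin(np.abs(alphabet - c))]
    u = v - q[t] * X[:, t]

rel_err = np.linalg.norm(X @ (w - q)) / np.linalg.norm(X @ w)
```

Because each step corrects the accumulated error, the relative error on the data is far smaller than naive per-weight rounding would give.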

no code implementations • 4 Jun 2021 • Jinjie Zhang, Harish Kannan, Alexander Cloninger, Rayan Saab

We propose the use of low bit-depth Sigma-Delta and distributed noise-shaping methods for quantizing the Random Fourier features (RFFs) associated with shift-invariant kernels.
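A hedged sketch of the simplest instance of this pipeline: compute RFFs for a Gaussian kernel, then quantize them to one bit each with a first-order Sigma-Delta scheme. The dimensions are illustrative, and the paper's distributed noise-shaping variants and reconstruction operators are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
d, m = 8, 1024                            # input dim, number of features (illustrative)
x = rng.standard_normal(d)
W = rng.standard_normal((m, d))           # random frequencies for a Gaussian kernel
b = rng.uniform(0.0, 2 * np.pi, m)
y = np.cos(W @ x + b)                     # unnormalized RFFs, each in [-1, 1]

# First-order Sigma-Delta quantization to one bit per feature.
q = np.empty(m)
u = 0.0                                   # internal state
u_max = 0.0
for i in range(m):
    q[i] = 1.0 if u + y[i] >= 0 else -1.0
    u = u + y[i] - q[i]                   # quantization error is noise-shaped
    u_max = max(u_max, abs(u))
```

Since every input lies in [-1, 1], the state satisfies |u| <= 1 at every step — the standard stability property that noise-shaping error bounds rely on.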

1 code implementation • 29 Oct 2020 • Eric Lybrand, Rayan Saab

This simple algorithm is equivalent to running a dynamical system, which we prove is stable for quantizing a single-layer neural network (or, alternatively, for quantizing the first layer of a multi-layer network) when the training data are Gaussian.

1 code implementation • ICLR 2021 • Jinjie Zhang, Rayan Saab

When $\mathcal{T}$ consists of well-spread (i.e., non-sparse) vectors, our embedding method applies a stable noise-shaping quantization scheme to $A x$ where $A\in\mathbb{R}^{m\times n}$ is a sparse Gaussian random matrix.

no code implementations • 30 Jul 2020 • Deanna Needell, Aaron A. Nelson, Rayan Saab, Palina Salanevich

We provide a (corrected) rigorous proof that the Igelnik and Pao construction is a universal approximator for continuous functions on compact domains, with approximation error decaying asymptotically like $O(1/\sqrt{n})$ for the number $n$ of network nodes.
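In the Igelnik and Pao construction, the hidden-layer weights are drawn at random and never trained; only the output layer is fit. A minimal sketch of that idea on a one-dimensional target — the tanh activation, uniform weight distribution, and sizes are my illustrative assumptions, not the construction's exact parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200                                   # number of random hidden nodes
xs = np.linspace(-1.0, 1.0, 400).reshape(-1, 1)
f = np.sin(np.pi * xs).ravel()            # a continuous target on a compact domain

# Hidden-layer weights and biases are random and fixed, never trained.
W = rng.uniform(-4.0, 4.0, (1, n))
b = rng.uniform(-4.0, 4.0, n)
H = np.tanh(xs @ W + b)                   # random basis functions

# Only the output layer is fit, by ordinary least squares.
coef, *_ = np.linalg.lstsq(H, f, rcond=None)
max_err = np.max(np.abs(H @ coef - f))
```

Even with random, untrained hidden nodes, the least-squares output layer drives the uniform error down — the paper makes this quantitative with an $O(1/\sqrt{n})$ rate.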

no code implementations • 26 Jan 2018 • Thang Huynh, Rayan Saab

Our methods rely on quantizing fast Johnson-Lindenstrauss embeddings based on bounded orthonormal systems and partial circulant ensembles, both of which admit fast transforms.
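A hedged sketch of one such fast Johnson-Lindenstrauss embedding before quantization: random sign flips, a fast unitary transform from a bounded orthonormal system (here the DFT), then random row subsampling. The sizes are illustrative, and the quantization step the papers apply to the output is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 1024, 128                          # ambient and embedding dimensions
x = rng.standard_normal(n)

# Fast JL embedding: random signs, unitary DFT in O(n log n), row subsampling.
signs = rng.choice([-1.0, 1.0], n)
Fx = np.fft.fft(signs * x) / np.sqrt(n)   # normalized (unitary) DFT
rows = rng.choice(n, size=m, replace=False)
y = np.sqrt(n / m) * Fx[rows]             # rescaled so the norm is preserved in expectation

ratio = np.linalg.norm(y) / np.linalg.norm(x)
```

The whole map costs $O(n \log n)$ rather than the $O(mn)$ of a dense Gaussian matrix, which is what makes quantizing such embeddings attractive in practice.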

no code implementations • 6 Jul 2017 • Deanna Needell, Rayan Saab, Tina Woolf

Binary, or one-bit, representations of data arise naturally in many applications, and are appealing in both hardware implementations and algorithm design.

no code implementations • 28 Apr 2014 • Karin Knudson, Rayan Saab, Rachel Ward

Consider the recovery of an unknown signal ${x}$ from quantized linear measurements.
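For concreteness, here is the most extreme version of that setting — one-bit measurements $y = \mathrm{sign}(Ax)$ — together with a simple linear correlation estimator of the direction of $x$. This estimator is an illustration of the measurement model only, not the paper's recovery algorithm, and the sizes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 20, 2000                           # signal dimension, number of measurements
x = rng.standard_normal(n)
x /= np.linalg.norm(x)                    # sign measurements lose the norm; fix it to 1
A = rng.standard_normal((m, n))

y = np.sign(A @ x)                        # one bit per linear measurement

# Simple linear estimator: correlate the bits back through A.
x_hat = A.T @ y
x_hat /= np.linalg.norm(x_hat)
cos_sim = x_hat @ x                       # alignment with the true direction
```

With Gaussian measurements, $\mathbb{E}[A^\top y / m] = \sqrt{2/\pi}\, x$, so even this crude estimator aligns well with $x$ when $m \gg n$; the norm itself is unrecoverable from signs alone, which motivates the thresholded/dithered schemes studied in this line of work.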

Papers With Code is a free resource with all data licensed under CC-BY-SA.