Search Results for author: Rayan Saab

Found 11 papers, 3 papers with code

SPFQ: A Stochastic Algorithm and Its Error Analysis for Neural Network Quantization

no code implementations • 20 Sep 2023 • Jinjie Zhang, Rayan Saab

Quantization is a widely used compression method that effectively reduces redundancies in over-parameterized neural networks.

Quantization

A simple approach for quantizing neural networks

no code implementations • 7 Sep 2022 • Johannes Maly, Rayan Saab

In this short note, we propose a new method for quantizing the weights of a fully trained neural network.

Quantization

Spectrally Adaptive Common Spatial Patterns

no code implementations • 9 Feb 2022 • Mahta Mousavi, Eric Lybrand, Shuangquan Feng, Shuai Tang, Rayan Saab, Virginia de Sa

In this work, we propose a novel algorithm called Spectrally Adaptive Common Spatial Patterns (SACSP) that improves CSP by learning a temporal/spectral filter for each spatial filter so that the spatial filters are concentrated on the most relevant temporal frequencies for each user.

EEG • Motor Imagery
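
For background, here is a minimal sketch of standard two-class CSP, computed via a generalized eigendecomposition of the class covariance matrices. It omits the per-filter spectral adaptation that SACSP adds, and the data shapes, normalization, and filter count are illustrative assumptions rather than details from the paper.

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_filters=3):
    """Standard two-class CSP; trials_* have shape (n_trials, n_channels, n_samples)."""
    def avg_cov(trials):
        covs = [x @ x.T / np.trace(x @ x.T) for x in trials]  # power-normalized covariances
        return np.mean(covs, axis=0)

    ca, cb = avg_cov(trials_a), avg_cov(trials_b)
    # Generalized eigenproblem: ca w = lambda (ca + cb) w
    eigvals, eigvecs = eigh(ca, ca + cb)
    order = np.argsort(eigvals)
    # Filters at both ends of the eigenvalue spectrum are the most discriminative
    picks = np.concatenate([order[:n_filters], order[-n_filters:]])
    return eigvecs[:, picks]  # (n_channels, 2 * n_filters) spatial filters

# Toy usage with random stand-ins for EEG trials
rng = np.random.default_rng(0)
W = csp_filters(rng.standard_normal((20, 8, 256)), rng.standard_normal((20, 8, 256)))
print(W.shape)  # (8, 6)
```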

Post-training Quantization for Neural Networks with Provable Guarantees

2 code implementations • 26 Jan 2022 • Jinjie Zhang, Yixuan Zhou, Rayan Saab

Additionally, our error analysis expands the results of previous work on GPFQ to handle general quantization alphabets, showing that for quantizing a single-layer network, the relative square error essentially decays linearly in the number of weights, i.e., the level of over-parameterization.

Quantization
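
Read schematically (this is a paraphrase in my own notation, not the paper's exact bound), the quoted claim says that for a single-layer network the relative quantization error satisfies, up to constants and logarithmic factors, $\frac{\|Xw - Xq\|_2^2}{\|Xw\|_2^2} \lesssim \frac{1}{N}$, where $X$ holds the training data, $w$ is a neuron's trained weight vector, $q$ its quantized counterpart, and $N$ the number of weights, i.e., the level of over-parameterization.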

Sigma-Delta and Distributed Noise-Shaping Quantization Methods for Random Fourier Features

no code implementations • 4 Jun 2021 • Jinjie Zhang, Harish Kannan, Alexander Cloninger, Rayan Saab

We propose the use of low bit-depth Sigma-Delta and distributed noise-shaping methods for quantizing the Random Fourier features (RFFs) associated with shift-invariant kernels.

Quantization
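
A minimal sketch of the pipeline described above, assuming random Fourier features for a Gaussian (shift-invariant) kernel and a plain first-order Sigma-Delta quantizer with a one-bit alphabet; the distributed noise-shaping variant and the kernel estimator used in the paper are not reproduced here.

```python
import numpy as np

def rff(x, W, b):
    """Random Fourier (cosine) features for a shift-invariant kernel."""
    return np.cos(W @ x + b)

def sigma_delta_1bit(z):
    """First-order Sigma-Delta quantization of the sequence z to the alphabet {-1, +1}."""
    q, u = np.empty_like(z), 0.0
    for i, zi in enumerate(z):
        q[i] = 1.0 if zi + u >= 0 else -1.0
        u += zi - q[i]  # noise-shaping state update: errors are pushed to high frequencies
    return q

rng = np.random.default_rng(0)
d, m = 16, 512
W = rng.standard_normal((m, d))        # random frequencies (Gaussian kernel)
b = rng.uniform(0, 2 * np.pi, size=m)  # random phases

x = rng.standard_normal(d)
features = rff(x, W, b)                # real-valued RFFs in [-1, 1]
bits = sigma_delta_1bit(features)      # one-bit representation of the features
# Turning these bits into kernel estimates requires the weighting scheme from the
# paper (not shown); the point here is only the feature-then-quantize pipeline.
print(bits[:16])
```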

A Greedy Algorithm for Quantizing Neural Networks

1 code implementation • 29 Oct 2020 • Eric Lybrand, Rayan Saab

This simple algorithm is equivalent to running a dynamical system, which we prove is stable for quantizing a single-layer neural network (or, alternatively, for quantizing the first layer of a multi-layer network) when the training data are Gaussian.

Quantization
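
A minimal sketch of a greedy, data-driven quantizer for a single layer in the spirit of the entry above: each weight is replaced, in sequence, by the alphabet element that best compensates the error accumulated on the training data. The ternary alphabet, scaling, and other details below are simplified assumptions rather than the paper's exact algorithm.

```python
import numpy as np

def quantize_neuron(w, X, alphabet):
    """Greedily quantize one neuron's weights w (length N) using data X of shape (m, N)."""
    N = len(w)
    q = np.zeros(N)
    u = np.zeros(X.shape[0])  # running error on the neuron's pre-activations
    for t in range(N):
        Xt = X[:, t]
        target = u + w[t] * Xt
        # The alphabet element nearest to the least-squares coefficient of Xt
        # minimizes || target - p * Xt || over the alphabet.
        c = Xt @ target / (Xt @ Xt + 1e-12)
        q[t] = alphabet[np.argmin(np.abs(alphabet - c))]
        u = target - q[t] * Xt
    return q

rng = np.random.default_rng(0)
m, N = 200, 50
X = rng.standard_normal((m, N))                 # Gaussian training data, as in the analysis
w = rng.standard_normal(N) / np.sqrt(N)
alphabet = np.array([-1.0, 0.0, 1.0]) * np.max(np.abs(w))  # ternary alphabet (an assumption)

q = quantize_neuron(w, X, alphabet)
rel_err = np.linalg.norm(X @ w - X @ q) / np.linalg.norm(X @ w)
print(f"relative error on pre-activations: {rel_err:.3f}")
```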

Faster Binary Embeddings for Preserving Euclidean Distances

1 code implementation • ICLR 2021 • Jinjie Zhang, Rayan Saab

When $\mathcal{T}$ consists of well-spread (i.e., non-sparse) vectors, our embedding method applies a stable noise-shaping quantization scheme to $Ax$ where $A\in\mathbb{R}^{m\times n}$ is a sparse Gaussian random matrix.

Quantization
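
A minimal sketch of the front end described above: a sparse Gaussian random matrix applied to a well-spread vector, followed by a simple first-order Sigma-Delta one-bit quantizer standing in for the paper's noise-shaping scheme. The density and scaling are assumptions, and the post-processing that turns the bits into Euclidean-distance estimates is not reproduced here.

```python
import numpy as np

def sparse_gaussian(m, n, density=0.1, rng=None):
    """Random matrix whose entries are Gaussian with probability `density`, else zero."""
    rng = rng or np.random.default_rng()
    mask = rng.random((m, n)) < density
    return mask * rng.standard_normal((m, n)) / np.sqrt(m * density)

def sigma_delta_1bit(z):
    """First-order Sigma-Delta quantization to {-1, +1}."""
    q, u = np.empty_like(z), 0.0
    for i, zi in enumerate(z):
        q[i] = 1.0 if zi + u >= 0 else -1.0
        u += zi - q[i]
    return q

rng = np.random.default_rng(0)
n, m = 128, 1024
A = sparse_gaussian(m, n, rng=rng)

x = rng.standard_normal(n)
x /= np.linalg.norm(x)           # toy stand-in for a well-spread, unit-norm vector
bits = sigma_delta_1bit(A @ x)   # m-bit binary embedding of x
print(bits[:16])
```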

Random Vector Functional Link Networks for Function Approximation on Manifolds

no code implementations • 30 Jul 2020 • Deanna Needell, Aaron A. Nelson, Rayan Saab, Palina Salanevich, Olov Schavemaker

We provide a (corrected) rigorous proof that the Igelnik and Pao construction is a universal approximator for continuous functions on compact domains, with approximation error decaying asymptotically like $O(1/\sqrt{n})$, where $n$ is the number of network nodes.
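
For reference, here is a minimal sketch of a random vector functional link network in the style of Igelnik and Pao: the inner weights and biases are drawn at random and fixed, and only the outer coefficients are fit by least squares. The activation, sampling ranges, and node count below are illustrative assumptions, not the construction analyzed in the paper.

```python
import numpy as np

def fit_rvfl(X, y, n_nodes=200, scale=2.0, rng=None):
    """Fit an RVFL-style network: random hidden layer, least-squares output layer."""
    rng = rng or np.random.default_rng()
    W = rng.uniform(-scale, scale, size=(n_nodes, X.shape[1]))  # random inner weights (fixed)
    b = rng.uniform(-scale, scale, size=n_nodes)                # random biases (fixed)
    H = np.tanh(X @ W.T + b)                                    # random hidden features
    coef, *_ = np.linalg.lstsq(H, y, rcond=None)                # only the outer weights are trained
    return W, b, coef

def predict_rvfl(X, W, b, coef):
    return np.tanh(X @ W.T + b) @ coef

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(500, 2))
y = np.sin(np.pi * X[:, 0]) * np.cos(np.pi * X[:, 1])  # smooth target on a compact domain

W, b, coef = fit_rvfl(X, y, rng=rng)
print(np.mean((predict_rvfl(X, W, b, coef) - y) ** 2))  # training mean-squared error
```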

Fast binary embeddings, and quantized compressed sensing with structured matrices

no code implementations • 26 Jan 2018 • Thang Huynh, Rayan Saab

Our methods rely on quantizing fast Johnson-Lindenstrauss embeddings based on bounded orthonormal systems and partial circulant ensembles, both of which admit fast transforms.

Quantization
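
A minimal sketch of a fast embedding built from a partial circulant ensemble with random signs, applied via the FFT; the quantizer here is a naive sign map standing in for the noise-shaping quantizers the paper analyzes, and the normalization is an assumption.

```python
import numpy as np

def partial_circulant_embed(x, g, eps, idx):
    """Apply rows idx of the circulant matrix generated by g to the sign-flipped vector x."""
    z = eps * x                                              # random sign flip (diagonal matrix)
    conv = np.fft.ifft(np.fft.fft(g) * np.fft.fft(z)).real  # circular convolution via the FFT
    return conv[idx] / np.sqrt(len(idx))                     # keep m rows and rescale

rng = np.random.default_rng(0)
n, m = 256, 64
g = rng.standard_normal(n)                  # generator of the circulant matrix
eps = rng.choice([-1.0, 1.0], size=n)       # random signs
idx = rng.choice(n, size=m, replace=False)  # rows retained by the partial ensemble

x = rng.standard_normal(n)
y = partial_circulant_embed(x, g, eps, idx)
bits = np.sign(y)  # naive one-bit quantization, a placeholder for the paper's quantizers
print(bits[:16])
```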

Simple Classification using Binary Data

no code implementations • 6 Jul 2017 • Deanna Needell, Rayan Saab, Tina Woolf

Binary, or one-bit, representations of data arise naturally in many applications, and are appealing in both hardware implementations and algorithm design.

Classification • General Classification

One-bit compressive sensing with norm estimation

no code implementations • 28 Apr 2014 • Karin Knudson, Rayan Saab, Rachel Ward

Consider the recovery of an unknown signal ${x}$ from quantized linear measurements.

Compressive Sensing
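
A minimal sketch of the one-bit measurement model and a simple linear estimate of the signal's direction (not the paper's algorithm); as the title suggests, sign measurements alone determine $x$ only up to scale, and the modified measurements the paper uses to recover the norm are not shown here.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, s = 200, 1000, 5

# s-sparse signal with unknown norm
x = np.zeros(n)
support = rng.choice(n, size=s, replace=False)
x[support] = rng.standard_normal(s)

A = rng.standard_normal((m, n))
y = np.sign(A @ x)               # one-bit (sign) measurements

# Crude direction estimate: keep the s largest entries of A^T y and normalize.
z = A.T @ y
estimate = np.zeros(n)
top = np.argsort(np.abs(z))[-s:]
estimate[top] = z[top]
estimate /= np.linalg.norm(estimate)

print(np.abs(estimate @ x) / np.linalg.norm(x))  # cosine similarity with the true direction
```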
