Simple Analysis of Johnson-Lindenstrauss Transform under Neuroscience Constraints

20 Aug 2020 · Maciej Skorski

The paper re-analyzes a variant of the celebrated Johnson-Lindenstrauss Lemma in which the projection matrices are subject to two constraints that arise naturally in neuroscience applications: a) sparsity and b) sign-consistency. This variant was first studied by Allen-Zhu, Gelashvili, Micali, and Shavit, and more recently by Jagadeesan (RANDOM'19). The contribution of this work is a novel proof which, in contrast to previous work, a) uses the modern probability toolkit, in particular basic sub-gaussian and sub-gamma estimates, b) is self-contained, with no dependencies on subtle third-party results, and c) offers explicit constants. At the heart of the proof is a novel variant of the Hanson-Wright Lemma (on the concentration of quadratic forms). Also of independent interest are auxiliary facts about sub-gaussian random variables.
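To make the constraints concrete, the sketch below builds a random projection matrix satisfying both properties from the abstract: sparsity (each column has a fixed small number of nonzero entries) and sign-consistency (all entries in a row share one sign, modeling neurons that are either purely excitatory or purely inhibitory). The construction and parameter choices here are illustrative assumptions, not the paper's exact construction or bounds; it only checks empirically that a single vector's norm is roughly preserved.

```python
import numpy as np

def sparse_sign_consistent_jl(m, n, s, rng):
    """Build an m x n random projection matrix that is:
      - sparse: each column has exactly s nonzero entries of magnitude 1/sqrt(s),
      - sign-consistent: every row carries a single fixed sign.
    This is an illustrative construction, not the paper's exact one.
    """
    A = np.zeros((m, n))
    row_signs = rng.choice([-1.0, 1.0], size=m)          # one sign per row ("neuron type")
    for j in range(n):
        rows = rng.choice(m, size=s, replace=False)      # s random nonzero positions
        A[rows, j] = row_signs[rows] / np.sqrt(s)        # scaling keeps E||Ax||^2 near ||x||^2
    return A

rng = np.random.default_rng(0)
m, n, s = 256, 1000, 8                                   # hypothetical parameters for illustration
A = sparse_sign_consistent_jl(m, n, s, rng)

x = rng.standard_normal(n)
ratio = np.linalg.norm(A @ x) / np.linalg.norm(x)
print(f"norm ratio: {ratio:.3f}")
```

For a typical Gaussian vector the printed ratio lands near 1, illustrating the distance-preservation property that the paper's sub-gaussian and sub-gamma estimates quantify rigorously.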

