no code implementations • 9 Jun 2023 • Kameron Decker Harris, Oscar López, Angus Read, Yizhe Zhu
Numerical experiments illustrate how the reconstruction error depends on the spectral gap for the practical max-quasinorm, ridge-penalty, and Poisson-loss minimization algorithms.
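Of the three estimators named above, the ridge penalty is the simplest to sketch. Below is a minimal, illustrative version in the matrix (order-2) case, using alternating ridge regressions on a uniformly sampled mask; the paper treats tensors and ties the error to the spectral gap of the sampling pattern, so the sizes, rank, penalty, and sampling scheme here are all invented for the example.

```python
# Illustrative alternating ridge regression for low-rank matrix completion.
# Assumptions: uniform random sampling, known rank r, synthetic data.
import numpy as np

rng = np.random.default_rng(0)
n, r, lam = 50, 3, 0.1
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))  # low-rank truth
mask = rng.random((n, n)) < 0.3                                # observed entries

U, V = rng.standard_normal((n, r)), rng.standard_normal((n, r))
for _ in range(50):
    for i in range(n):  # ridge solve for row i of U over its observed columns
        Vi = V[mask[i]]
        U[i] = np.linalg.solve(Vi.T @ Vi + lam * np.eye(r), Vi.T @ M[i, mask[i]])
    for j in range(n):  # ridge solve for row j of V over its observed rows
        Uj = U[mask[:, j]]
        V[j] = np.linalg.solve(Uj.T @ Uj + lam * np.eye(r), Uj.T @ M[mask[:, j], j])

print("relative error:", np.linalg.norm(U @ V.T - M) / np.linalg.norm(M))
```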
1 code implementation • 4 May 2022 • Seth Daetwiler, Angus Read, Jessica Stillwell, Kameron Decker Harris
Scientists construct connectomes, comprehensive descriptions of neuronal connections across a brain, in order to better understand and model brain function.
2 code implementations • 23 Oct 2019 • Kameron Decker Harris, Yizhe Zhu
We provide a novel analysis of low-rank tensor completion based on hypergraph expanders.
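In the paper the revealed entries form the edge set of a sparse hypergraph expander. As a toy illustration of the completion task itself (not the paper's estimator), the sketch below recovers a rank-one tensor by alternating least squares from a uniformly random sample standing in for the expander construction; all sizes and the sampling rate are made up.

```python
# Illustrative rank-one tensor completion by alternating least squares.
# Assumptions: uniform random sampling (the paper uses hypergraph expanders).
import numpy as np

rng = np.random.default_rng(1)
n = 30
a, b, c = (rng.standard_normal(n) for _ in range(3))
T = np.einsum("i,j,k->ijk", a, b, c)                 # ground-truth rank-one tensor
mask = (rng.random((n, n, n)) < 0.05).astype(float)  # observed entries

def als_update(y, z, axis):
    """Closed-form rank-one ALS update for one factor with missing entries."""
    Tm = np.moveaxis(T * mask, axis, 0)              # target factor's axis first
    Mm = np.moveaxis(mask, axis, 0)
    outer = np.outer(y, z)
    num = np.einsum("ijk,jk->i", Tm, outer)
    den = np.einsum("ijk,jk->i", Mm, outer ** 2) + 1e-12
    return num / den

u, v, w = (rng.standard_normal(n) for _ in range(3))
for _ in range(30):
    u = als_update(v, w, 0)
    v = als_update(u, w, 1)
    w = als_update(u, v, 2)

est = np.einsum("i,j,k->ijk", u, v, w)
print("relative error:", np.linalg.norm(est - T) / np.linalg.norm(T))
```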
1 code implementation • NeurIPS Workshop Neuro_AI 2019 • Kameron Decker Harris
We identify three specific advantages of sparsity: additive function approximation is a powerful inductive bias that limits the curse of dimensionality; sparse networks are stable to outlier noise in the inputs; and sparse random features are scalable.
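A minimal sketch of the third point: random Fourier-style features whose weight vectors each touch only s of the d inputs, fit by ridge regression. The target function, sizes, and penalty are invented for illustration and are not taken from the paper.

```python
# Illustrative sparse random-feature regression: each of m random features
# reads only s of the d inputs. Target function and sizes are made up.
import numpy as np

rng = np.random.default_rng(2)
n, d, m, s, lam = 200, 20, 500, 2, 1e-3

X = rng.standard_normal((n, d))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2       # an additive target function

W = np.zeros((m, d))                           # sparse random weights
for i in range(m):
    idx = rng.choice(d, size=s, replace=False)
    W[i, idx] = rng.standard_normal(s)
bias = rng.uniform(0, 2 * np.pi, m)

def features(X):
    return np.cos(X @ W.T + bias)              # random Fourier-style features

Phi = features(X)
coef = np.linalg.solve(Phi.T @ Phi + lam * np.eye(m), Phi.T @ y)

X_te = rng.standard_normal((50, d))
y_te = np.sin(X_te[:, 0]) + 0.5 * X_te[:, 1] ** 2
print("test MSE:", np.mean((features(X_te) @ coef - y_te) ** 2))
```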
2 code implementations • 21 May 2019 • Kameron Decker Harris, Aleksandr Aravkin, Rajesh Rao, Bingni Wen Brunton
In each time window, we assume the data follow a linear model parameterized by a system matrix, and we model the stack of these potentially different system matrices as a low-rank tensor.
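A minimal sketch of this modeling idea, not the paper's algorithm: fit an ordinary least-squares system matrix per window, stack the matrices into a 3-way array, and compress the stack with a truncated SVD of its unfolding (the actual method fits the low-rank tensor factors jointly). The toy series, window length, and rank are illustrative.

```python
# Illustrative windowed linear fits compressed by a truncated SVD of the
# stacked system matrices; the paper fits the tensor factors jointly.
import numpy as np

rng = np.random.default_rng(3)
d, T, win = 5, 400, 40
X = np.cumsum(rng.standard_normal((T, d)), axis=0)   # toy multivariate series

A_stack = []
for t0 in range(0, T - win, win):
    Xw, Yw = X[t0:t0 + win - 1], X[t0 + 1:t0 + win]
    A, *_ = np.linalg.lstsq(Xw, Yw, rcond=None)      # x_{t+1} ~= A.T @ x_t
    A_stack.append(A.T)
A_stack = np.stack(A_stack)                          # (windows, d, d)

U, S, Vt = np.linalg.svd(A_stack.reshape(len(A_stack), -1), full_matrices=False)
r = 2                                                # illustrative tensor rank
A_low = ((U[:, :r] * S[:r]) @ Vt[:r]).reshape(A_stack.shape)
print("relative compression error:",
      np.linalg.norm(A_low - A_stack) / np.linalg.norm(A_stack))
```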
1 code implementation • NeurIPS 2016 • Kameron Decker Harris, Stefan Mihalas, Eric Shea-Brown
We demonstrate the efficacy of a low-rank version on visual cortex data and discuss the possibility of extending this to a whole-brain connectivity matrix at the voxel scale.
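A hedged sketch of the low-rank idea on synthetic data: regress projection patterns on injection patterns, then truncate the SVD of the estimate to a small rank. The paper's method additionally imposes nonnegativity and a spatial smoothing penalty, neither of which appears here; all sizes and data are invented.

```python
# Illustrative low-rank connectivity fit: least squares then SVD truncation.
# The paper adds nonnegativity and smoothing; all data here are synthetic.
import numpy as np

rng = np.random.default_rng(4)
n, k, r = 60, 40, 4
W_true = rng.random((n, r)) @ rng.random((r, n))     # low-rank ground truth
X = rng.random((n, k))                               # injection patterns
Y = W_true @ X + 0.01 * rng.standard_normal((n, k))  # observed projections

W_ls = Y @ np.linalg.pinv(X)                         # least-squares estimate
U, S, Vt = np.linalg.svd(W_ls, full_matrices=False)
W_r = (U[:, :r] * S[:r]) @ Vt[:r]                    # rank-r truncation

print("prediction error:", np.linalg.norm(W_r @ X - Y) / np.linalg.norm(Y))
```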
no code implementations • 15 Jun 2014 • Peter Sheridan Dodds, Eric M. Clark, Suma Desu, Morgan R. Frank, Andrew J. Reagan, Jake Ryland Williams, Lewis Mitchell, Kameron Decker Harris, Isabel M. Kloumann, James P. Bagrow, Karine Megerdoomian, Matthew T. McMahon, Brian F. Tivnan, Christopher M. Danforth
Using human evaluation of 100,000 words spread across 24 corpora in 10 languages diverse in origin and culture, we present evidence of a deep imprint of human sociality in language, observing that (1) the words of natural human language possess a universal positivity bias; (2) the estimated emotional content of words is consistent between languages under translation; and (3) this positivity bias is strongly independent of frequency of word usage.