Search Results for author: Kameron Decker Harris

Found 6 papers, 5 papers with code

BrainViewer: interacting with spatial connectome data at the mesoscale

1 code implementation • 4 May 2022 • Seth Daetwiler, Angus Read, Jessica Stillwell, Kameron Decker Harris

Scientists construct connectomes, comprehensive descriptions of neuronal connections across a brain, in order to better understand and model brain function.

Deterministic tensor completion with hypergraph expanders

2 code implementations • 23 Oct 2019 • Kameron Decker Harris, Yizhe Zhu

We provide a novel analysis of low-rank tensor completion based on hypergraph expanders.

Additive function approximation in the brain

1 code implementation • NeurIPS Workshop Neuro_AI 2019 • Kameron Decker Harris

We identify three specific advantages of sparsity: additive function approximation is a powerful inductive bias that limits the curse of dimensionality, sparse networks are stable to outlier noise in the inputs, and sparse random features are scalable.
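The sparse random features described here can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: each random ReLU feature reads only a small random subset of input coordinates (the `degree` parameter and function names below are assumptions for illustration), which yields an additive, low-order-interaction approximation of the target function.

```python
import numpy as np

rng = np.random.default_rng(0)

def sparse_random_features(X, n_features=200, degree=2, scale=1.0, rng=rng):
    """Random ReLU features, each supported on `degree` input coordinates.

    Restricting each feature to a small random subset of inputs is the
    sparsity discussed above: the resulting model is a sum of functions
    of few variables, an additive approximation that tempers the curse
    of dimensionality.
    """
    n, d = X.shape
    Z = np.zeros((n, n_features))
    for j in range(n_features):
        idx = rng.choice(d, size=degree, replace=False)  # sparse support
        w = rng.normal(scale=scale, size=degree)         # random weights
        b = rng.normal(scale=scale)                      # random bias
        Z[:, j] = np.maximum(X[:, idx] @ w + b, 0.0)     # ReLU feature
    return Z

X = rng.normal(size=(50, 10))
Z = sparse_random_features(X)
print(Z.shape)  # (50, 200)
```

A downstream model (e.g. ridge regression) is then fit on `Z` instead of `X`; only the linear readout is trained, which is what makes random features scalable.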

Hippocampus • Inductive Bias

Time-varying Autoregression with Low Rank Tensors

2 code implementations • 21 May 2019 • Kameron Decker Harris, Aleksandr Aravkin, Rajesh Rao, Bingni Wen Brunton

In each time window, we assume the data follow a linear model parameterized by a system matrix, and we model this stack of potentially different system matrices as a low rank tensor.
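The windowed model described here can be sketched with plain NumPy. This is an illustrative toy, not the paper's method: it fits a least-squares AR(1) system matrix per window and then applies a crude low-rank approximation (truncated SVD of the unfolded stack) to show what "low rank tensor of system matrices" means; the window size, rank, and function names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def windowed_ar_matrices(x, window):
    """Per-window least-squares AR(1) fit: x[t+1] ≈ A @ x[t]."""
    T, d = x.shape
    mats = []
    for s in range(0, T - window, window):
        X0 = x[s:s + window - 1]        # inputs within the window
        X1 = x[s + 1:s + window]        # one-step-ahead targets
        A, *_ = np.linalg.lstsq(X0, X1, rcond=None)
        mats.append(A.T)                # system matrix for this window
    return np.stack(mats)               # tensor of shape (n_windows, d, d)

def low_rank_stack(A_stack, rank):
    """Truncated SVD of the mode-1 unfolding: shares structure across
    windows. (The paper uses a proper low-rank tensor model; this only
    illustrates the idea.)"""
    k, d, _ = A_stack.shape
    M = A_stack.reshape(k, d * d)
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    M_r = (U[:, :rank] * s[:rank]) @ Vt[:rank]
    return M_r.reshape(k, d, d)

x = rng.normal(size=(400, 5)).cumsum(axis=0)  # toy nonstationary series
A_stack = windowed_ar_matrices(x, window=50)  # (7, 5, 5)
A_low = low_rank_stack(A_stack, rank=2)
print(A_stack.shape, A_low.shape)
```

Constraining the stacked system matrices to be low rank couples the windows, so the model captures smoothly varying dynamics instead of fitting each window independently.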

High resolution neural connectivity from incomplete tracing data using nonnegative spline regression

1 code implementation • NeurIPS 2016 • Kameron Decker Harris, Stefan Mihalas, Eric Shea-Brown

We demonstrate the efficacy of a low rank version on visual cortex data and discuss the possibility of extending this to a whole-brain connectivity matrix at the voxel scale.

Matrix Completion • regression

Human language reveals a universal positivity bias

no code implementations • 15 Jun 2014 • Peter Sheridan Dodds, Eric M. Clark, Suma Desu, Morgan R. Frank, Andrew J. Reagan, Jake Ryland Williams, Lewis Mitchell, Kameron Decker Harris, Isabel M. Kloumann, James P. Bagrow, Karine Megerdoomian, Matthew T. McMahon, Brian F. Tivnan, Christopher M. Danforth

Using human evaluation of 100,000 words spread across 24 corpora in 10 languages diverse in origin and culture, we present evidence of a deep imprint of human sociality in language, observing that (1) the words of natural human language possess a universal positivity bias; (2) the estimated emotional content of words is consistent between languages under translation; and (3) this positivity bias is strongly independent of frequency of word usage.
