no code implementations • 21 Feb 2024 • Brian Wheatman, Meghana Madhyastha, Randal Burns

Artificial intelligence workloads, especially transformer models, exhibit emergent sparsity in which computations perform selective sparse access to dense data.
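The pattern of "selective sparse access to dense data" can be illustrated with a toy top-k attention mask: the score matrix is dense, but only a sparse subset of its entries is ever read. This is a generic sketch, not the paper's kernel; the function name and the choice k=3 are made up for the example.

```python
import numpy as np

def topk_sparse_attention(scores, k):
    """Keep only the k largest attention scores per row: the score matrix
    is dense, but only a sparse, data-dependent subset of entries is used."""
    n, m = scores.shape
    out = np.zeros_like(scores)
    for i in range(n):
        idx = np.argpartition(scores[i], -k)[-k:]    # indices of the top-k entries
        w = np.exp(scores[i, idx] - scores[i, idx].max())
        out[i, idx] = w / w.sum()                    # softmax over the kept entries
    return out

rng = np.random.default_rng(0)
A = topk_sparse_attention(rng.normal(size=(4, 16)), k=3)
```

Each output row has exactly k nonzero weights summing to one, so downstream matrix products only need the selected columns.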

1 code implementation • 6 Feb 2024 • Ariel Lubonja, Cencheng Shen, Carey Priebe, Randal Burns

New algorithms for embedding graphs have reduced the asymptotic complexity of finding low-dimensional representations.
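For context, the baseline these faster algorithms improve on is adjacency spectral embedding via a full eigendecomposition. The sketch below is that standard dense approach, not the paper's method; the two-triangle example graph is illustrative.

```python
import numpy as np

def adjacency_spectral_embedding(A, d):
    """Embed each vertex of an undirected graph into R^d using the d
    largest-magnitude eigenpairs of its adjacency matrix."""
    vals, vecs = np.linalg.eigh(A)              # eigenvalues in ascending order
    top = np.argsort(np.abs(vals))[::-1][:d]    # d largest in magnitude
    return vecs[:, top] * np.sqrt(np.abs(vals[top]))

# Two disjoint triangles: the embedding separates the components.
A = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5)]:
    A[i, j] = A[j, i] = 1
X = adjacency_spectral_embedding(A, d=2)
```

The full eigendecomposition costs O(n^3); reducing that asymptotic complexity while preserving the representation is exactly the kind of improvement the abstract refers to.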

no code implementations • 22 Sep 2023 • Robert Underwood, Meghana Madhyastha, Randal Burns, Bogdan Nicolae

We describe how evolutionary patterns appear in distributed settings and opportunities for caching and improved scheduling.

no code implementations • 19 Jan 2022 • Ashwin De Silva, Rahul Ramesh, Lyle Ungar, Marshall Hussain Shuler, Noah J. Cowan, Michael Platt, Chen Li, Leyla Isik, Seung-Eon Roh, Adam Charles, Archana Venkataraman, Brian Caffo, Javier J. How, Justus M Kebschull, John W. Krakauer, Maxim Bichuch, Kaleab Alemayehu Kinfu, Eva Yezerets, Dinesh Jayaraman, Jong M. Shin, Soledad Villar, Ian Phillips, Carey E. Priebe, Thomas Hartung, Michael I. Miller, Jayanta Dey, Ningyuan Huang, Eric Eaton, Ralph Etienne-Cummings, Elizabeth L. Ogburn, Randal Burns, Onyema Osuagwu, Brett Mensh, Alysson R. Muotri, Julia Brown, Chris White, Weiwei Yang, Andrei A. Rusu, Timothy Verstynen, Konrad P. Kording, Pratik Chaudhari, Joshua T. Vogelstein

We conjecture that certain sequences of tasks are not retrospectively learnable (in which the data distribution is fixed), but are prospectively learnable (in which distributions may be dynamic), suggesting that prospective learning is more difficult in kind than retrospective learning.

no code implementations • 10 Nov 2020 • Meghana Madhyastha, Kunal Lillaney, James Browne, Joshua Vogelstein, Randal Burns

We present methods to serialize and deserialize tree ensembles that optimize inference latency when models are not already loaded into memory.
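One way such optimization works is to store the ensemble as flat, contiguous arrays that can be written to disk and memory-mapped, then traversed without rebuilding pointer-based node objects at load time. The layout below is hypothetical, for illustration only, not the paper's actual format.

```python
import numpy as np

# A flat layout for one binary decision tree: node i stores its split
# feature, threshold, left/right child indices (-1 marks a leaf), and a
# leaf prediction. Contiguous arrays deserialize as a single read.
feature   = np.array([0,  -1,  1,  -1, -1])
threshold = np.array([0.5, 0., 0.3, 0., 0.])
left      = np.array([1,  -1,  3,  -1, -1])
right     = np.array([2,  -1,  4,  -1, -1])
value     = np.array([0., 1., 0., 2., 3.])

def predict(x):
    """Walk the flat tree for a single sample x, starting at the root."""
    i = 0
    while left[i] != -1:                      # internal node: follow a child
        i = left[i] if x[feature[i]] <= threshold[i] else right[i]
    return value[i]
```

Because inference touches the arrays directly, a memory-mapped model can begin serving predictions before the whole file is paged in.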

no code implementations • 7 Jul 2019 • Disa Mhembere, Da Zheng, Carey E. Priebe, Joshua T. Vogelstein, Randal Burns

Emerging frameworks avoid the network bottleneck of distributed data with Semi-External Memory (SEM) that uses a single multicore node and operates on graphs larger than memory.

Distributed, Parallel, and Cluster Computing • Databases
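The SEM idea can be sketched in a few lines: per-vertex state fits in RAM while the edge list streams from storage. Here `np.memmap` stands in for SSD-resident data; the file name, graph, and chunk size are illustrative, not the framework's API.

```python
import os
import tempfile
import numpy as np

# Semi-external-memory sketch: vertex state (here, degrees) stays in RAM
# while the edge list lives on "disk", streamed in small chunks.
path = os.path.join(tempfile.gettempdir(), "sem_edges.bin")
np.array([[0, 1], [1, 2], [2, 0], [2, 3]], dtype=np.int64).tofile(path)
disk_edges = np.memmap(path, dtype=np.int64).reshape(-1, 2)

degree = np.zeros(4, dtype=np.int64)          # in-memory vertex state
for start in range(0, len(disk_edges), 2):    # stream edges chunk by chunk
    chunk = np.asarray(disk_edges[start:start + 2])
    np.add.at(degree, chunk.ravel(), 1)       # count both endpoints
```

Only the O(V) vertex state must fit in memory; the O(E) edge data never does, which is what lets a single multicore node process graphs larger than RAM.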

no code implementations • 5 Jul 2019 • Meghana Madhyastha, Percy Li, James Browne, Veronika Strnadova-Neeley, Carey E. Priebe, Randal Burns, Joshua T. Vogelstein

Empirical results on simulated and real data demonstrate that URerF is robust to high-dimensional noise, whereas other methods, such as Isomap, UMAP, and FLANN, quickly deteriorate in such settings.

no code implementations • 9 Apr 2018 • Randal Burns, Eric Perlman, Alex Baden, William Gray Roncal, Ben Falk, Vikram Chandrashekhar, Forrest Collman, Sharmishtaa Seshamani, Jesse Patsolic, Kunal Lillaney, Michael Kazhdan, Robert Hider Jr., Derek Pryor, Jordan Matelsky, Timothy Gion, Priya Manavalan, Brock Wester, Mark Chevillet, Eric T. Trautman, Khaled Khairy, Eric Bridgeford, Dean M. Kleissas, Daniel J. Tward, Ailey K. Crow, Matthew A. Wright, Michael I. Miller, Stephen J. Smith, R. Jacob Vogelstein, Karl Deisseroth, Joshua T. Vogelstein

Big imaging data is becoming more prominent in brain sciences across spatiotemporal scales and phylogenies.

Neurons and Cognition • Quantitative Methods

no code implementations • 9 Mar 2018 • Gregory Kiar, Robert J. Anderson, Alex Baden, Alexandra Badea, Eric W. Bridgeford, Andrew Champion, Vikram Chandrashekhar, Forrest Collman, Brandon Duderstadt, Alan C. Evans, Florian Engert, Benjamin Falk, Tristan Glatard, William R. Gray Roncal, David N. Kennedy, Jeremy Maitin-Shepard, Ryan A. Marren, Onyeka Nnaemeka, Eric Perlman, Sharmishtaa Seshamani, Eric T. Trautman, Daniel J. Tward, Pedro Antonio Valdés-Sosa, Qing Wang, Michael I. Miller, Randal Burns, Joshua T. Vogelstein

Neuroscientists are now able to acquire data at staggering rates across spatiotemporal scales.

1 code implementation • 5 Sep 2017 • Joshua T. Vogelstein, Eric Bridgeford, Minh Tang, Da Zheng, Christopher Douville, Randal Burns, Mauro Maggioni

To solve key biomedical problems, experimentalists now routinely measure millions or billions of features (dimensions) per sample, with the hope that data science techniques will be able to build accurate data-driven inferences.

1 code implementation • 28 Jun 2016 • Disa Mhembere, Da Zheng, Carey E. Priebe, Joshua T. Vogelstein, Randal Burns

The k-means NUMA Optimized Routine (knor) library has (i) in-memory (knori), (ii) distributed-memory (knord), and (iii) semi-external-memory (knors) modules that radically improve the performance of k-means for varying memory and hardware budgets.

Distributed, Parallel, and Cluster Computing
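For reference, the computation all three modules accelerate is plain Lloyd's k-means. The minimal NumPy version below is a sketch of that baseline, not knor's implementation; centers are seeded by even striding through the data purely for determinism.

```python
import numpy as np

def kmeans(X, k, iters=20):
    """Plain Lloyd's algorithm: assign each point to its nearest center,
    then move each center to the mean of its assigned points."""
    centers = X[::max(1, len(X) // k)][:k].copy()   # deterministic seeding
    for _ in range(iters):
        dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = dists.argmin(1)
        for j in range(k):
            members = X[labels == j]
            if len(members):
                centers[j] = members.mean(0)
    return centers, labels

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.1, (50, 2)), rng.normal(5, 0.1, (50, 2))])
centers, labels = kmeans(X, k=2)
```

The per-iteration cost is dominated by the distance computation, which is where NUMA-aware memory placement and out-of-core data movement pay off at scale.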

2 code implementations • 21 Apr 2016 • Da Zheng, Disa Mhembere, Joshua T. Vogelstein, Carey E. Priebe, Randal Burns

R is one of the most popular programming languages for statistics and machine learning, but the R framework is relatively slow and unable to scale to large datasets.

Distributed, Parallel, and Cluster Computing

2 code implementations • 9 Feb 2016 • Da Zheng, Disa Mhembere, Vince Lyzinski, Joshua Vogelstein, Carey E. Priebe, Randal Burns

In contrast, we scale sparse matrix multiplication beyond memory capacity by implementing sparse-matrix dense-matrix multiplication (SpMM) in a semi-external memory (SEM) fashion; i.e., we keep the sparse matrix on commodity SSDs and dense matrices in memory.

Distributed, Parallel, and Cluster Computing
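The SEM layout can be sketched as follows: the sparse matrix's CSR arrays live on disk, with `np.memmap` standing in for SSD-resident storage, while the dense operand and the result stay in memory. File paths, the example matrix, and the row-at-a-time loop are illustrative, not the system's implementation.

```python
import os
import tempfile
import numpy as np

# CSR arrays for a 3x3 sparse matrix:
#   row 0: (0,0)=1, (0,2)=2   row 1: (1,1)=3   row 2: (2,0)=4
indptr  = np.array([0, 2, 3, 4], dtype=np.int64)
indices = np.array([0, 2, 1, 0], dtype=np.int64)
data    = np.array([1.0, 2.0, 3.0, 4.0])

tmp = tempfile.gettempdir()
for name, arr in [("indptr", indptr), ("indices", indices), ("data", data)]:
    arr.tofile(os.path.join(tmp, f"csr_{name}.bin"))

d_indptr  = np.memmap(os.path.join(tmp, "csr_indptr.bin"),  dtype=np.int64)
d_indices = np.memmap(os.path.join(tmp, "csr_indices.bin"), dtype=np.int64)
d_data    = np.memmap(os.path.join(tmp, "csr_data.bin"),    dtype=np.float64)

B = np.eye(3)                                   # dense operand, in RAM
C = np.zeros((3, 3))                            # dense result, in RAM
for i in range(3):                              # stream sparse rows from disk
    for p in range(d_indptr[i], d_indptr[i + 1]):
        C[i] += d_data[p] * B[d_indices[p]]
```

Since each sparse row is read once sequentially, SSD bandwidth rather than capacity bounds the computation, which is what lets the sparse matrix exceed RAM.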

2 code implementations • 10 Jun 2015 • Tyler M. Tomita, James Browne, Cencheng Shen, Jaewon Chung, Jesse L. Patsolic, Benjamin Falk, Jason Yim, Carey E. Priebe, Randal Burns, Mauro Maggioni, Joshua T. Vogelstein

Unfortunately, these extensions forfeit one or more of the favorable properties of decision forests based on axis-aligned splits, such as robustness to many noise dimensions, interpretability, or computational efficiency.
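A sparse oblique split can be sketched as projecting samples onto a sparse random linear combination of features and thresholding the projection. This is a generic illustration of the idea; the function, its parameters, and the sampling scheme are not the paper's exact method.

```python
import numpy as np

def best_oblique_split(X, y, n_proj=10, density=0.25, seed=0):
    """Try a few sparse random projections and return the 1-D threshold
    split minimizing weighted Gini impurity over the projected values."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    best = (np.inf, None, None)                 # (impurity, weights, threshold)
    for _ in range(n_proj):
        w = rng.normal(size=p) * (rng.random(p) < density)  # sparse direction
        z = X @ w
        for t in np.unique(z)[:-1]:             # candidate thresholds
            gini = sum(len(s) / n * (1 - ((np.bincount(s) / len(s)) ** 2).sum())
                       for s in (y[z <= t], y[z > t]))
            if gini < best[0]:
                best = (gini, w, t)
    return best

rng = np.random.default_rng(1)
X = rng.normal(size=(40, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
gini, w, t = best_oblique_split(X, y)
```

Because most entries of each direction are zero, evaluating a candidate split touches only a few features, which is how sparse oblique splits can retain much of the efficiency of axis-aligned ones.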

2 code implementations • 30 Dec 2014 • Heng Wang, Da Zheng, Randal Burns, Carey Priebe

A canonical problem in graph mining is the detection of dense communities.

Social and Information Networks • Physics and Society
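A classic baseline for this problem is Charikar's greedy peeling, a 2-approximation for the densest subgraph: repeatedly delete the minimum-degree vertex and keep the intermediate vertex set with the best edge density. The sketch below is that baseline for context, not the paper's method.

```python
def densest_subgraph(n, edges):
    """Greedy peeling: remove the minimum-degree vertex at each step and
    return the intermediate vertex set maximizing density |E|/|V|."""
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    alive, m = set(range(n)), len(edges)
    best_density, best = m / n, set(alive)
    while len(alive) > 1:
        v = min(alive, key=lambda x: len(adj[x]))   # minimum-degree vertex
        for u in adj[v]:
            adj[u].discard(v)
        m -= len(adj[v])                            # its edges leave the graph
        adj[v] = set()
        alive.discard(v)
        if m / len(alive) > best_density:
            best_density, best = m / len(alive), set(alive)
    return best

# A 4-clique {0,1,2,3} with a pendant path 3-4-5 attached.
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3), (3, 4), (4, 5)]
S = densest_subgraph(6, edges)
```

Peeling strips the low-degree path vertices first, so the surviving set with the highest density is the clique itself.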

no code implementations • 25 Nov 2014 • William Gray Roncal, Dean M. Kleissas, Joshua T. Vogelstein, Priya Manavalan, Kunal Lillaney, Michael Pekala, Randal Burns, R. Jacob Vogelstein, Carey E. Priebe, Mark A. Chevillet, Gregory D. Hager

Finally, we deploy a reference end-to-end version of the pipeline on a large, publicly available data set.

no code implementations • 16 Apr 2014 • Ayushi Sinha, William Gray Roncal, Narayanan Kasthuri, Jeff W. Lichtman, Randal Burns, Michael Kazhdan

Accurately estimating the wiring diagram of a brain, known as a connectome, at an ultrastructure level is an open research problem.

no code implementations • 16 Apr 2014 • Ayushi Sinha, William Gray Roncal, Narayanan Kasthuri, Ming Chuang, Priya Manavalan, Dean M. Kleissas, Joshua T. Vogelstein, R. Jacob Vogelstein, Randal Burns, Jeff W. Lichtman, Michael Kazhdan

The contribution of this work is the introduction of a straightforward and robust pipeline which annotates axoplasmic reticula with high precision, contributing towards advancements in automatic feature annotations in neural EM data.

no code implementations • 14 Mar 2014 • William Gray Roncal, Michael Pekala, Verena Kaynig-Fittkau, Dean M. Kleissas, Joshua T. Vogelstein, Hanspeter Pfister, Randal Burns, R. Jacob Vogelstein, Mark A. Chevillet, Gregory D. Hager

An open challenge problem at the forefront of modern neuroscience is to obtain a comprehensive mapping of the neural pathways that underlie human brain function; an enhanced understanding of the wiring diagram of the brain promises to lead to new breakthroughs in diagnosing and treating neurological disorders.

Papers With Code is a free resource with all data licensed under CC-BY-SA.