Search Results for author: Lauren Milechin

Found 13 papers, 0 papers with code

Mathematics of Digital Hyperspace

no code implementations · 28 Mar 2021 · Jeremy Kepner, Timothy Davis, Vijay Gadepally, Hayden Jananthan, Lauren Milechin

The GraphBLAS standard currently supports hypergraphs, hypersparse matrices, the mathematics required for semilinks, and seamlessly performs graph, network, and matrix operations.
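As an illustration of the matrix view of graph operations described above, a one-hop reachability step can be written as a sparse vector-matrix product over an adjacency matrix. This is a minimal sketch using scipy.sparse as a stand-in; the tiny graph and the use of scipy rather than a GraphBLAS library are illustrative assumptions, not details from the paper.

```python
from scipy.sparse import csr_matrix

# Adjacency matrix of a small directed graph: edges 0->1, 1->2, 0->2.
A = csr_matrix(([1, 1, 1], ([0, 1, 0], [1, 2, 2])), shape=(3, 3))

# One breadth-first step from vertex 0, expressed as a sparse
# vector-matrix product -- the core idea of matrix-based graph math.
frontier = csr_matrix(([1], ([0], [0])), shape=(1, 3))
reachable = frontier @ A          # vertices reachable in one hop
print(reachable.toarray())        # [[0 1 1]]
```

The same product generalizes: repeated multiplication by A walks the graph level by level, and swapping the underlying semiring changes the operation (shortest paths, connectivity, etc.).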


GraphChallenge.org Sparse Deep Neural Network Performance

no code implementations · 25 Mar 2020 · Jeremy Kepner, Simon Alford, Vijay Gadepally, Michael Jones, Lauren Milechin, Albert Reuther, Ryan Robinett, Sid Samsi

The Sparse Deep Neural Network (DNN) Challenge draws upon prior challenges from machine learning, high performance computing, and visual analytics to create a challenge that is reflective of emerging sparse AI systems.

Sparse Deep Neural Network Graph Challenge

no code implementations · 2 Sep 2019 · Jeremy Kepner, Simon Alford, Vijay Gadepally, Michael Jones, Lauren Milechin, Ryan Robinett, Sid Samsi

The Sparse DNN Challenge is based on a mathematically well-defined DNN inference computation and can be implemented in any programming environment.
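The well-defined inference computation referred to above is a layer recurrence of the form y_{l+1} = ReLU(y_l W_l + b_l) over sparse matrices. The following toy instance sketches one such layer with scipy.sparse; the layer sizes, densities, and bias value are illustrative assumptions, not the official challenge parameters.

```python
import numpy as np
from scipy.sparse import random as sparse_random

# Hypothetical small instance of the layer equation
# y_{l+1} = ReLU(y_l @ W_l + b_l); all sizes are illustrative.
Y = sparse_random(4, 8, density=0.5, random_state=0, format="csr")
W = sparse_random(8, 8, density=0.2, random_state=1, format="csr")
b = -0.3  # a negative scalar bias, so ReLU keeps activations sparse

Z = (Y @ W).toarray() + b
Y_next = np.maximum(Z, 0.0)   # ReLU: clamp negatives to zero
```

Because the computation is just sparse matrix products plus a pointwise nonlinearity, it can indeed be reproduced in any environment with sparse linear algebra support.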

Securing HPC using Federated Authentication

no code implementations · 20 Aug 2019 · Andrew Prout, William Arcand, David Bestor, Bill Bergeron, Chansup Byun, Vijay Gadepally, Michael Houle, Matthew Hubbell, Michael Jones, Anna Klein, Peter Michaleas, Lauren Milechin, Julie Mullen, Antonio Rosa, Siddharth Samsi, Charles Yee, Albert Reuther, Jeremy Kepner

Federated authentication can drastically reduce the overhead of basic account maintenance while simultaneously improving overall system security.

Distributed, Parallel, and Cluster Computing · Cryptography and Security

Streaming 1.9 Billion Hypersparse Network Updates per Second with D4M

no code implementations · 6 Jul 2019 · Jeremy Kepner, Vijay Gadepally, Lauren Milechin, Siddharth Samsi, William Arcand, David Bestor, William Bergeron, Chansup Byun, Matthew Hubbell, Michael Houle, Michael Jones, Anna Klein, Peter Michaleas, Julie Mullen, Andrew Prout, Antonio Rosa, Charles Yee, Albert Reuther

This work describes the design and performance optimization of an implementation of hierarchical associative arrays that reduces memory pressure and dramatically increases the update rate into an associative array.
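The hierarchical design described above can be sketched as an associative array whose updates land in a small, fast level and are merged into a larger backing level in bulk when the fast level fills. The level sizes and the dict-of-Counters representation below are illustrative assumptions, not the D4M implementation.

```python
from collections import Counter

class HierarchicalArray:
    """Toy two-level associative array: fast small level, larger slow level."""

    def __init__(self, fast_capacity=4):
        self.fast = Counter()            # small in-memory level
        self.slow = Counter()            # larger backing level
        self.fast_capacity = fast_capacity

    def update(self, key, value=1):
        self.fast[key] += value
        if len(self.fast) >= self.fast_capacity:
            self.slow.update(self.fast)  # one bulk merge, not per-key I/O
            self.fast.clear()            # fast level is empty again

    def get(self, key):
        return self.fast[key] + self.slow[key]

h = HierarchicalArray()
for k in ["a", "b", "a", "c", "d", "a"]:
    h.update(k)
```

Batching updates in the fast level and flushing them as a single merge is what relieves memory pressure: the expensive lower level sees one consolidated write instead of a stream of individual ones.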

A Billion Updates per Second Using 30,000 Hierarchical In-Memory D4M Databases

no code implementations · 3 Feb 2019 · Jeremy Kepner, Vijay Gadepally, Lauren Milechin, Siddharth Samsi, William Arcand, David Bestor, William Bergeron, Chansup Byun, Matthew Hubbell, Michael Houle, Michael Jones, Anna Klein, Peter Michaleas, Julie Mullen, Andrew Prout, Antonio Rosa, Charles Yee, Albert Reuther

Streaming updates to a large associative array requires a hierarchical implementation to optimize the performance of the memory hierarchy.

Databases · Distributed, Parallel, and Cluster Computing · Data Structures and Algorithms · Networking and Internet Architecture

Training Behavior of Sparse Neural Network Topologies

no code implementations · 30 Sep 2018 · Simon Alford, Ryan Robinett, Lauren Milechin, Jeremy Kepner

We test pruning-based topologies, which are derived from an initially dense network whose connections are pruned, as well as RadiX-Nets, a class of network topologies with proven connectivity and sparsity properties.
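The pruning-based construction can be sketched as simple magnitude pruning: a sparse topology is derived from a dense weight matrix by zeroing its smallest-magnitude connections. The 50% sparsity level and matrix size here are illustrative assumptions, not the paper's settings, and this sketch does not cover the RadiX-Net construction.

```python
import numpy as np

# Toy magnitude pruning of a dense 8x8 layer; sparsity level is
# an illustrative assumption, not the paper's configuration.
rng = np.random.default_rng(0)
W = rng.standard_normal((8, 8))

sparsity = 0.5
threshold = np.quantile(np.abs(W), sparsity)
mask = np.abs(W) >= threshold      # keep only the largest-magnitude weights
W_pruned = W * mask                # zeroed entries define the sparse topology
```

In pruning-based training the surviving nonzero pattern is then fixed and the remaining weights are retrained, which is what makes the resulting topology's training behavior worth studying.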

Sparse Deep Neural Network Exact Solutions

no code implementations · 6 Jul 2018 · Jeremy Kepner, Vijay Gadepally, Hayden Jananthan, Lauren Milechin, Sid Samsi

This work uses associative array DNNs to construct exact solutions and corresponding perturbation models to the rectified linear unit (ReLU) DNN equations that can be used to construct test vectors for sparse DNN implementations over various precisions.
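The exact-solution idea can be illustrated in miniature: if the inputs are nonnegative and the weights form a nonnegative permutation matrix with zero bias, ReLU acts as the identity, so the inference output is known in closed form and can serve as a test vector. The matrices below are toy assumptions for illustration, not the paper's constructions.

```python
import numpy as np

# ReLU DNN layer: y_{l+1} = ReLU(y_l @ W_l). With a nonnegative
# permutation matrix and nonnegative input, the exact output is
# just the permuted input -- a usable test vector.
relu = lambda x: np.maximum(x, 0.0)

P = np.eye(4)[[2, 0, 3, 1]]        # permutation matrix (all entries >= 0)
y0 = np.array([1.0, 2.0, 3.0, 4.0])

y1 = relu(y0 @ P)                  # exactly y0 permuted, no rounding involved
```

Comparing a sparse DNN implementation's output against such a known-exact vector exposes precision and correctness errors that random weights would hide.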

Benchmarking Data Analysis and Machine Learning Applications on the Intel KNL Many-Core Processor

no code implementations · 12 Jul 2017 · Chansup Byun, Jeremy Kepner, William Arcand, David Bestor, Bill Bergeron, Vijay Gadepally, Michael Houle, Matthew Hubbell, Michael Jones, Anna Klein, Peter Michaleas, Lauren Milechin, Julie Mullen, Andrew Prout, Antonio Rosa, Siddharth Samsi, Charles Yee, Albert Reuther

Thus, the performance of these applications on KNL systems is of high interest to LLSC users and the broader data analysis and machine learning communities.

Performance · Instrumentation and Methods for Astrophysics · Distributed, Parallel, and Cluster Computing · Computational Physics
