Search Results for author: Jeremy Kepner

Found 35 papers, 3 papers with code

Testing RadiX-Nets: Advances in Viable Sparse Topologies

no code implementations6 Nov 2023 Kevin Kwak, Zack West, Hayden Jananthan, Jeremy Kepner

The exponential growth of data has sharply increased the computational demands of ML research and industry use.

Lincoln AI Computing Survey (LAICS) Update

1 code implementation13 Oct 2023 Albert Reuther, Peter Michaleas, Michael Jones, Vijay Gadepally, Siddharth Samsi, Jeremy Kepner

Finally, a brief description of each of the new accelerators that have been added in the survey this year is included.

Are ChatGPT and Other Similar Systems the Modern Lernaean Hydras of AI?

no code implementations15 Jun 2023 Dimitrios Ioannidis, Jeremy Kepner, Andrew Bowne, Harriet S. Bryant

The rise of Generative Artificial Intelligence systems ("AI systems") has created unprecedented social engagement.

Code Generation

AI Enabled Maneuver Identification via the Maneuver Identification Challenge

no code implementations28 Nov 2022 Kaira Samuel, Matthew LaRosa, Kyle McAlpin, Morgan Schaefer, Brandon Swenson, Devin Wasilefsky, Yan Wu, Dan Zhao, Jeremy Kepner

Artificial intelligence (AI) has enormous potential to improve Air Force pilot training by providing actionable feedback to pilot trainees on the quality of their maneuvers and enabling instructor-less flying familiarization for early-stage trainees in low-cost simulators.

Naming Schema for a Human Brain-Scale Neural Network

no code implementations22 Sep 2021 Morgan Schaefer, Lauren Michelin, Jeremy Kepner

Deep neural networks have become increasingly large and sparse, allowing large-scale neural networks to be stored and computed at reduced cost.

AI Accelerator Survey and Trends

1 code implementation18 Sep 2021 Albert Reuther, Peter Michaleas, Michael Jones, Vijay Gadepally, Siddharth Samsi, Jeremy Kepner

Over the past several years, new machine learning accelerators have been announced and released every month for a variety of applications, ranging from speech recognition and video object detection to assisted driving and data center applications.

Benchmarking, Computational Efficiency

Maneuver Identification Challenge

no code implementations25 Aug 2021 Kaira Samuel, Vijay Gadepally, David Jacobs, Michael Jones, Kyle McAlpin, Kyle Palko, Ben Paulk, Sid Samsi, Ho Chit Siu, Charles Yee, Jeremy Kepner

The Maneuver Identification Challenge hosted at maneuver-id.mit.edu provides thousands of trajectories collected from pilots practicing in flight simulators, descriptions of maneuvers, and examples of these maneuvers performed by experienced pilots.

Mathematics of Digital Hyperspace

no code implementations28 Mar 2021 Jeremy Kepner, Timothy Davis, Vijay Gadepally, Hayden Jananthan, Lauren Milechin

The GraphBLAS standard currently supports hypergraphs, hypersparse matrices, the mathematics required for semilinks, and seamlessly performs graph, network, and matrix operations.
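
To make the linear-algebraic view concrete, the sketch below performs a one- and two-hop graph traversal as sparse matrix-vector products, the style of computation GraphBLAS standardizes. scipy.sparse stands in for a GraphBLAS library, and the tiny graph is invented for illustration.

# Sketch: graph traversal written as sparse matrix-vector products,
# the style of computation GraphBLAS standardizes. scipy.sparse stands
# in for a GraphBLAS library; the 4-vertex graph is invented.
import numpy as np
import scipy.sparse as sp

# Directed edges: 0->1, 0->2, 1->3, 2->3
rows = np.array([0, 0, 1, 2])
cols = np.array([1, 2, 3, 3])
A = sp.csr_matrix((np.ones(4), (rows, cols)), shape=(4, 4))

frontier = np.zeros(4)
frontier[0] = 1.0                      # start the traversal at vertex 0

one_hop = A.T @ frontier               # vertices reachable in one step
two_hop = A.T @ one_hop                # vertices reachable in two steps
print(one_hop)                         # [0. 1. 1. 0.]
print(two_hop)                         # [0. 0. 0. 2.] -- two paths reach vertex 3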

Survey of Machine Learning Accelerators

no code implementations1 Sep 2020 Albert Reuther, Peter Michaleas, Michael Jones, Vijay Gadepally, Siddharth Samsi, Jeremy Kepner

New machine learning accelerators are being announced and released each month for a variety of applications, ranging from speech recognition and video object detection to assisted driving and data center applications.

BIG-bench Machine Learning, object-detection

Layer-Parallel Training with GPU Concurrency of Deep Residual Neural Networks via Nonlinear Multigrid

no code implementations14 Jul 2020 Andrew C. Kirby, Siddharth Samsi, Michael Jones, Albert Reuther, Jeremy Kepner, Vijay Gadepally

A Multigrid Full Approximation Storage algorithm for solving Deep Residual Networks is developed to enable layer-parallel training of neural networks and concurrent execution of computational kernels on GPUs.

GraphChallenge.org Sparse Deep Neural Network Performance

no code implementations25 Mar 2020 Jeremy Kepner, Simon Alford, Vijay Gadepally, Michael Jones, Lauren Milechin, Albert Reuther, Ryan Robinett, Sid Samsi

The Sparse Deep Neural Network (DNN) Challenge draws upon prior challenges from machine learning, high performance computing, and visual analytics to create a challenge that is reflective of emerging sparse AI systems.

GraphChallenge.org Triangle Counting Performance

no code implementations18 Mar 2020 Siddharth Samsi, Jeremy Kepner, Vijay Gadepally, Michael Hurley, Michael Jones, Edward Kao, Sanjeev Mohindra, Albert Reuther, Steven Smith, William Song, Diane Staheli, Paul Monticciolo

In 2017, 2018, and 2019 many triangle counting submissions were received from a wide range of authors and organizations.
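
For context, one widely used linear-algebra formulation of triangle counting (not necessarily the one used by any given submission) takes the trace of the cubed adjacency matrix; a minimal sketch on a made-up graph:

# Sketch: triangle counting on a simple undirected graph via the
# adjacency matrix: trace(A^3) counts every triangle six times
# (3 starting vertices x 2 directions). The toy graph is invented.
import numpy as np
import scipy.sparse as sp

edges = [(0, 1), (1, 2), (0, 2), (2, 3)]        # triangle 0-1-2 plus edge 2-3
rows = [u for u, v in edges] + [v for u, v in edges]
cols = [v for u, v in edges] + [u for u, v in edges]
A = sp.csr_matrix((np.ones(len(rows)), (rows, cols)), shape=(4, 4))

triangles = (A @ A @ A).diagonal().sum() / 6
print(int(triangles))                            # 1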

Distributed, Parallel, and Cluster Computing; Performance

Sparse Deep Neural Network Graph Challenge

no code implementations2 Sep 2019 Jeremy Kepner, Simon Alford, Vijay Gadepally, Michael Jones, Lauren Milechin, Ryan Robinett, Sid Samsi

The Sparse DNN Challenge is based on a mathematically well-defined DNN inference computation and can be implemented in any programming environment.
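
A rough sketch of that inference computation is shown below, using scipy.sparse and a simplified ReLU(Y W + b) update; the sizes, densities, and bias handling are illustrative only, and the challenge specification should be consulted for the exact rules.

# Sketch: feed-forward inference through layers of sparse weight
# matrices, the style of computation the Sparse DNN Challenge is
# built around. Sizes, densities, and the simplified update are
# illustrative.
import numpy as np
import scipy.sparse as sp

n_inputs, n_neurons, n_layers = 8, 16, 3
Y = sp.random(n_inputs, n_neurons, density=0.2, format="csr", random_state=0)
weights = [sp.random(n_neurons, n_neurons, density=0.1, format="csr", random_state=i + 1)
           for i in range(n_layers)]
bias = -0.1

for W in weights:
    Z = Y @ W                       # sparse-sparse matrix multiply
    Z.data += bias                  # bias applied to stored entries
    Z.data = np.maximum(Z.data, 0)  # ReLU on the nonzeros
    Z.eliminate_zeros()
    Y = Z

print(Y.nnz, "nonzero activations after", n_layers, "layers")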

Survey and Benchmarking of Machine Learning Accelerators

no code implementations29 Aug 2019 Albert Reuther, Peter Michaleas, Michael Jones, Vijay Gadepally, Siddharth Samsi, Jeremy Kepner

Advances in multicore processors and accelerators have opened the floodgates to greater exploration and application of machine learning techniques to a variety of applications.

Performance B.8; C.4

Securing HPC using Federated Authentication

no code implementations20 Aug 2019 Andrew Prout, William Arcand, David Bestor, Bill Bergeron, Chansup Byun, Vijay Gadepally, Michael Houle, Matthew Hubbell, Michael Jones, Anna Klein, Peter Michaleas, Lauren Milechin, Julie Mullen, Antonio Rosa, Siddharth Samsi, Charles Yee, Albert Reuther, Jeremy Kepner

Federated authentication can drastically reduce the overhead of basic account maintenance while simultaneously improving overall system security.

Distributed, Parallel, and Cluster Computing; Cryptography and Security

Streaming 1.9 Billion Hypersparse Network Updates per Second with D4M

no code implementations6 Jul 2019 Jeremy Kepner, Vijay Gadepally, Lauren Milechin, Siddharth Samsi, William Arcand, David Bestor, William Bergeron, Chansup Byun, Matthew Hubbell, Michael Houle, Michael Jones, Anne Klein, Peter Michaleas, Julie Mullen, Andrew Prout, Antonio Rosa, Charles Yee, Albert Reuther

This work describes the design and performance optimization of an implementation of hierarchical associative arrays that reduces memory pressure and dramatically increases the update rate into an associative array.
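
The hierarchical idea can be illustrated with a toy two-level associative array in which updates accumulate in a small fast level and are periodically merged into a larger one; the sketch below is purely illustrative and is not the D4M implementation.

# Toy two-level hierarchical associative array: updates accumulate in
# a small fast level and are merged into a larger level when it fills.
# Illustrative only; this is not the D4M data structure.
class HierarchicalArray:
    def __init__(self, cutoff=1000):
        self.fast = {}              # small, frequently updated level
        self.slow = {}              # large, infrequently updated level
        self.cutoff = cutoff

    def update(self, key, value):
        self.fast[key] = self.fast.get(key, 0) + value
        if len(self.fast) >= self.cutoff:
            self._merge()

    def _merge(self):
        for k, v in self.fast.items():
            self.slow[k] = self.slow.get(k, 0) + v
        self.fast.clear()

    def lookup(self, key):
        return self.fast.get(key, 0) + self.slow.get(key, 0)

h = HierarchicalArray(cutoff=4)
for edge in [("a", "b"), ("a", "c"), ("a", "b"), ("b", "c"), ("c", "d")]:
    h.update(edge, 1)
print(h.lookup(("a", "b")))         # 2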

AI Enabling Technologies: A Survey

no code implementations8 May 2019 Vijay Gadepally, Justin Goodwin, Jeremy Kepner, Albert Reuther, Hayley Reynolds, Siddharth Samsi, Jonathan Su, David Martinez

Artificial Intelligence (AI) has the opportunity to revolutionize the way the United States Department of Defense (DoD) and Intelligence Community (IC) address the challenges of evolving threats, data deluge, and rapid courses of action.

RadiX-Net: Structured Sparse Matrices for Deep Neural Networks

no code implementations30 Apr 2019 Ryan A. Robinett, Jeremy Kepner

We further present a functional-analytic conjecture based on the longstanding observation that sparse neural network topologies can attain the same expressive power as their dense counterparts.

A Billion Updates per Second Using 30,000 Hierarchical In-Memory D4M Databases

no code implementations3 Feb 2019 Jeremy Kepner, Vijay Gadepally, Lauren Milechin, Siddharth Samsi, William Arcand, David Bestor, William Bergeron, Chansup Byun, Matthew Hubbell, Michael Houle, Michael Jones, Anne Klein, Peter Michaleas, Julie Mullen, Andrew Prout, Antonio Rosa, Charles Yee, Albert Reuther

Streaming updates to a large associative array requires a hierarchical implementation to optimize the performance of the memory hierarchy.

Databases; Distributed, Parallel, and Cluster Computing; Data Structures and Algorithms; Networking and Internet Architecture

Training Behavior of Sparse Neural Network Topologies

no code implementations30 Sep 2018 Simon Alford, Ryan Robinett, Lauren Milechin, Jeremy Kepner

We test pruning-based topologies, which are derived from an initially dense network whose connections are pruned, as well as RadiX-Nets, a class of network topologies with proven connectivity and sparsity properties.
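
Magnitude-based pruning, the usual way such pruning-based topologies are derived, can be sketched as a simple thresholding step; the weights and sparsity target below are invented, and the RadiX-Net construction itself is not shown.

# Sketch: magnitude pruning of a dense weight matrix to a target
# sparsity, i.e. deriving a sparse topology from a dense network.
# The weights and sparsity level are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(64, 64))       # stand-in for a trained dense layer
target_sparsity = 0.9               # keep only the largest 10% of weights

k = int(W.size * (1.0 - target_sparsity))
threshold = np.sort(np.abs(W), axis=None)[-k]
mask = np.abs(W) >= threshold       # connectivity pattern of the pruned layer
W_pruned = W * mask

print("kept fraction:", mask.mean())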

Uncertainty Propagation in Deep Neural Networks Using Extended Kalman Filtering

no code implementations17 Sep 2018 Jessica S. Titensky, Hayden Jananthan, Jeremy Kepner

Extended Kalman Filtering (EKF) can be used to propagate and quantify input uncertainty through a Deep Neural Network (DNN) assuming mild hypotheses on the input distribution.
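
The core step is linearizing each layer and mapping the input covariance through its Jacobian (P_out = J P_in J^T); below is a minimal sketch for a single ReLU layer with made-up weights, not the paper's exact formulation.

# Sketch: EKF-style propagation of an input covariance through one
# ReLU layer y = relu(W x + b): linearize at the input mean and map
# the covariance through the Jacobian, P_y = J P_x J^T. Weights and
# covariance are invented for illustration.
import numpy as np

rng = np.random.default_rng(2)
W = rng.normal(size=(3, 4))
b = rng.normal(size=3)

x_mean = rng.normal(size=4)
P_x = 0.05 * np.eye(4)              # input covariance

z = W @ x_mean + b                  # pre-activation at the input mean
y_mean = np.maximum(z, 0)           # propagated mean (first order)

D = np.diag((z > 0).astype(float))  # derivative of ReLU at the mean
J = D @ W                           # layer Jacobian
P_y = J @ P_x @ J.T                 # propagated output covariance

print(y_mean)
print(np.diag(P_y))                 # per-output variance estimates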

Neural Network Topologies for Sparse Training

no code implementations14 Sep 2018 Ryan A. Robinett, Jeremy Kepner

The sizes of deep neural networks (DNNs) are rapidly outgrowing the capacity of hardware to store and train them.

TabulaROSA: Tabular Operating System Architecture for Massively Parallel Heterogeneous Compute Engines

no code implementations14 Jul 2018 Jeremy Kepner, Ron Brightwell, Alan Edelman, Vijay Gadepally, Hayden Jananthan, Michael Jones, Sam Madden, Peter Michaleas, Hamed Okhravi, Kevin Pedretti, Albert Reuther, Thomas Sterling, Mike Stonebraker

In this context, an operating system can be viewed as software that brokers and tracks the resources of the compute engines and is akin to a database management system.
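
That analogy can be illustrated with a toy resource table in which allocation, lookup, and teardown become row insertions, queries, and deletions; the sketch below is purely illustrative and is not the TabulaROSA design.

# Toy sketch of the "operating system as a tabular database" analogy:
# resource ownership is tracked as rows in a table. Purely
# illustrative; this is not the TabulaROSA design.
from dataclasses import dataclass

@dataclass
class Allocation:
    process_id: int
    resource: str                   # e.g. "core:3", "gpu:0" (made-up labels)

table = []

def allocate(pid, resource):
    table.append(Allocation(pid, resource))

def resources_of(pid):
    return [row.resource for row in table if row.process_id == pid]

def release_all(pid):
    table[:] = [row for row in table if row.process_id != pid]

allocate(1, "core:0")
allocate(1, "gpu:0")
allocate(2, "core:1")
print(resources_of(1))              # ['core:0', 'gpu:0']
release_all(1)
print(len(table))                   # 1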

Distributed, Parallel, and Cluster Computing; Databases; Operating Systems; Performance

Sparse Deep Neural Network Exact Solutions

no code implementations6 Jul 2018 Jeremy Kepner, Vijay Gadepally, Hayden Jananthan, Lauren Milechin, Sid Samsi

This work uses associative array DNNs to construct exact solutions and corresponding perturbation models to the rectified linear unit (ReLU) DNN equations that can be used to construct test vectors for sparse DNN implementations over various precisions.
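
In practice such test vectors are used to check a sparse implementation against a dense reference; the sketch below shows that style of check, with random weights standing in for the paper's constructed exact solutions.

# Sketch: checking a sparse ReLU layer against a dense reference, the
# role the paper's exact solutions / test vectors play. Random weights
# stand in for the constructed solutions.
import numpy as np
import scipy.sparse as sp

rng = np.random.default_rng(3)
W_dense = rng.normal(size=(32, 32)) * (rng.random((32, 32)) < 0.1)
W_sparse = sp.csr_matrix(W_dense)
X = rng.normal(size=(4, 32))
b = -0.05

y_dense = np.maximum(X @ W_dense + b, 0)     # reference implementation
Y = sp.csr_matrix(X) @ W_sparse              # implementation under test
y_sparse = np.maximum(Y.toarray() + b, 0)

assert np.allclose(y_dense, y_sparse)
print("sparse layer matches dense reference")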

Static Graph Challenge: Subgraph Isomorphism

no code implementations23 Aug 2017 Siddharth Samsi, Vijay Gadepally, Michael Hurley, Michael Jones, Edward Kao, Sanjeev Mohindra, Paul Monticciolo, Albert Reuther, Steven Smith, William Song, Diane Staheli, Jeremy Kepner

The proposed Subgraph Isomorphism Graph Challenge draws upon prior challenges from machine learning, high performance computing, and visual analytics to create a graph challenge that is reflective of many real-world graph analytics processing systems.

Distributed, Parallel, and Cluster Computing; Data Structures and Algorithms

Enabling Massive Deep Neural Networks with the GraphBLAS

no code implementations9 Aug 2017 Jeremy Kepner, Manoj Kumar, José Moreira, Pratap Pattnaik, Mauricio Serrano, Henry Tufo

The performance of the GraphBLAS implementation is measured relative to a standard dense linear algebra library implementation.
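
The flavor of that comparison can be sketched by timing a sparse matrix-vector product against its dense equivalent; scipy and NumPy stand in for GraphBLAS and a dense BLAS here, and the size and density are arbitrary.

# Sketch: timing a sparse matrix-vector product against its dense
# equivalent. scipy.sparse and NumPy stand in for GraphBLAS and a
# dense BLAS; the size and density are arbitrary.
import time
import numpy as np
import scipy.sparse as sp

n, density = 2000, 0.01
A_sparse = sp.random(n, n, density=density, format="csr", random_state=4)
A_dense = A_sparse.toarray()
x = np.random.default_rng(4).normal(size=n)

t0 = time.perf_counter(); y_sparse = A_sparse @ x; t1 = time.perf_counter()
t2 = time.perf_counter(); y_dense = A_dense @ x;   t3 = time.perf_counter()

assert np.allclose(y_sparse, y_dense)
print(f"sparse: {t1 - t0:.2e} s   dense: {t3 - t2:.2e} s")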

Math

Benchmarking Data Analysis and Machine Learning Applications on the Intel KNL Many-Core Processor

no code implementations12 Jul 2017 Chansup Byun, Jeremy Kepner, William Arcand, David Bestor, Bill Bergeron, Vijay Gadepally, Michael Houle, Matthew Hubbell, Michael Jones, Anna Klein, Peter Michaleas, Lauren Milechin, Julie Mullen, Andrew Prout, Antonio Rosa, Siddharth Samsi, Charles Yee, Albert Reuther

Thus, the performance of these applications on KNL systems is of high interest to LLSC users and the broader data analysis and machine learning communities.

Performance; Instrumentation and Methods for Astrophysics; Distributed, Parallel, and Cluster Computing; Computational Physics

Non-Negative Matrix Factorization Test Cases

no code implementations30 Dec 2016 Connor Sell, Jeremy Kepner

Non-negative matrix factorization (NMF) is a problem with many applications, ranging from facial recognition to document clustering.
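
For reference, the classic multiplicative-update algorithm of Lee and Seung gives a concrete picture of the NMF problem being tested; the data matrix and rank in the sketch below are invented and do not reproduce the paper's test cases.

# Sketch: NMF (V ~ W H) via the classic Lee-Seung multiplicative
# updates. The data matrix and rank are invented; this illustrates
# the problem rather than the paper's test cases.
import numpy as np

rng = np.random.default_rng(5)
V = rng.random((20, 30))            # non-negative data matrix
k = 5                               # factorization rank

W = rng.random((20, k))
H = rng.random((k, 30))
eps = 1e-9                          # guards against division by zero

for _ in range(200):
    H *= (W.T @ V) / (W.T @ W @ H + eps)
    W *= (V @ H.T) / (W @ H @ H.T + eps)

print("reconstruction error:", np.linalg.norm(V - W @ H))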

Clustering

Large Enforced Sparse Non-Negative Matrix Factorization

no code implementations18 Oct 2015 Brendan Gavin, Vijay Gadepally, Jeremy Kepner

Non-negative matrix factorization (NMF) is a common method for generating topic models from text data.

Topic Models
