no code implementations • 17 Jul 2024 • Baolin Li, Yankai Jiang, Vijay Gadepally, Devesh Tiwari
This survey offers a comprehensive overview of recent advancements in Large Language Model (LLM) serving systems, focusing on research since 2023.
no code implementations • 19 Mar 2024 • Baolin Li, Yankai Jiang, Vijay Gadepally, Devesh Tiwari
The rapid advancement of Generative Artificial Intelligence (GenAI) across diverse sectors raises significant environmental concerns, notably the carbon emissions from the cloud and high performance computing (HPC) infrastructure that supports it.
no code implementations • 25 Feb 2024 • Dan Zhao, Siddharth Samsi, Joseph McDonald, Baolin Li, David Bestor, Michael Jones, Devesh Tiwari, Vijay Gadepally
In this paper, we study the aggregate effect of power-capping GPUs on GPU temperature and power draw at a research supercomputing center.
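For illustration only (not the center's actual instrumentation), per-GPU power draw and temperature can be sampled with the `nvidia-smi` CLI, and a power cap applied with its `-pl` flag (admin privileges required); a minimal Python sampling sketch:

```python
import subprocess
import time

def sample_gpu(gpu_index=0):
    """Read instantaneous power draw (W) and temperature (C) from nvidia-smi."""
    out = subprocess.run(
        ["nvidia-smi", f"--id={gpu_index}",
         "--query-gpu=power.draw,temperature.gpu",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    power_w, temp_c = (float(v) for v in out.split(","))
    return power_w, temp_c

# Apply a cap first (requires admin privileges), e.g. 250 W on GPU 0:
#   sudo nvidia-smi -i 0 -pl 250
# then observe its effect on power and temperature over time:
for _ in range(5):
    print(sample_gpu(0))
    time.sleep(1)
```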
1 code implementation • 13 Oct 2023 • Albert Reuther, Peter Michaleas, Michael Jones, Vijay Gadepally, Siddharth Samsi, Jeremy Kepner
Finally, a brief description of each of the new accelerators that have been added in the survey this year is included.
no code implementations • 4 Oct 2023 • Siddharth Samsi, Dan Zhao, Joseph McDonald, Baolin Li, Adam Michaleas, Michael Jones, William Bergeron, Jeremy Kepner, Devesh Tiwari, Vijay Gadepally
Large language models (LLMs) have exploded in popularity due to their new generative capabilities that go far beyond prior state-of-the-art.
no code implementations • 27 Jan 2023 • Dan Zhao, Nathan C. Frey, Joseph McDonald, Matthew Hubbell, David Bestor, Michael Jones, Andrew Prout, Vijay Gadepally, Siddharth Samsi
applications, we are sure to face an ever-mounting energy footprint to sustain these computational budgets, data storage needs, and more.
no code implementations • 12 Oct 2022 • Baolin Li, Siddharth Samsi, Vijay Gadepally, Devesh Tiwari
Online inference is becoming a key service product for many businesses, deployed in cloud platforms to meet customer demands.
no code implementations • 12 Sep 2022 • Matthew L. Weiss, Joseph McDonald, David Bestor, Charles Yee, Daniel Edelman, Michael Jones, Andrew Prout, Andrew Bowne, Lindsey McEvoy, Vijay Gadepally, Siddharth Samsi
Our best performing models achieve a classification accuracy greater than 95%, outperforming previous approaches to multi-channel time series classification with the MIT SuperCloud Dataset by 5%.
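For context, a minimal 1-D CNN baseline for multi-channel time series classification might look like the PyTorch sketch below; this is illustrative only and is not one of the models evaluated in the paper:

```python
import torch
import torch.nn as nn

class TimeSeriesCNN(nn.Module):
    """Minimal 1-D CNN baseline for multi-channel time series classification.
    Illustrative only -- not the models from the paper above."""
    def __init__(self, n_channels, n_classes):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),            # pool over the time axis
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):                       # x: (batch, channels, time)
        z = self.features(x).squeeze(-1)        # (batch, 64)
        return self.classifier(z)

model = TimeSeriesCNN(n_channels=6, n_classes=4)
x = torch.randn(8, 6, 256)                      # e.g. 6 monitoring channels, 256 time steps
print(model(x).shape)                           # torch.Size([8, 4])
```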
no code implementations • 23 Jul 2022 • Baolin Li, Rohan Basu Roy, Tirthak Patel, Vijay Gadepally, Karen Gettings, Devesh Tiwari
Deep learning model inference is a key service in many businesses and scientific discovery processes.
1 code implementation • 14 Jul 2022 • Vijay Gadepally, Gregory Angelides, Andrei Barbu, Andrew Bowne, Laura J. Brattain, Tamara Broderick, Armando Cabrera, Glenn Carl, Ronisha Carter, Miriam Cha, Emilie Cowen, Jesse Cummings, Bill Freeman, James Glass, Sam Goldberg, Mark Hamilton, Thomas Heldt, Kuan Wei Huang, Phillip Isola, Boris Katz, Jamie Koerner, Yen-Chen Lin, David Mayo, Kyle McAlpin, Taylor Perron, Jean Piou, Hrishikesh M. Rao, Hayley Reynolds, Kaira Samuel, Siddharth Samsi, Morgan Schmidt, Leslie Shing, Olga Simek, Brandon Swenson, Vivienne Sze, Jonathan Taylor, Paul Tylkin, Mark Veillette, Matthew L Weiss, Allan Wollaber, Sophia Yuditskaya, Jeremy Kepner
Through a series of federal initiatives and orders, the U.S. Government has been making a concerted effort to ensure American leadership in AI.
no code implementations • Findings (NAACL) 2022 • Joseph McDonald, Baolin Li, Nathan Frey, Devesh Tiwari, Vijay Gadepally, Siddharth Samsi
In particular, we focus on techniques to measure energy usage and different hardware and datacenter-oriented settings that can be tuned to reduce energy consumption for training and inference for language models.
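As one concrete way to measure GPU energy (a coarse sketch using the `pynvml` bindings, not the paper's full methodology), power draw can be sampled while a workload runs and integrated over time:

```python
import threading
import time
import pynvml  # NVIDIA management library bindings (pip install nvidia-ml-py)

def measure_energy_joules(workload, gpu_index=0, interval_s=0.1):
    """Estimate GPU energy used while workload() runs by sampling power draw
    and integrating with a simple rectangle rule."""
    pynvml.nvmlInit()
    handle = pynvml.nvmlDeviceGetHandleByIndex(gpu_index)
    samples, stop = [], threading.Event()

    def sampler():
        while not stop.is_set():
            samples.append(pynvml.nvmlDeviceGetPowerUsage(handle) / 1000.0)  # mW -> W
            time.sleep(interval_s)

    t = threading.Thread(target=sampler)
    t.start()
    try:
        workload()                      # e.g. one training epoch or an inference batch
    finally:
        stop.set()
        t.join()
        pynvml.nvmlShutdown()
    return sum(samples) * interval_s    # approximate integral of power over time (joules)

# Example: energy of a dummy 2-second workload.
print(measure_energy_joules(lambda: time.sleep(2.0)))
```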
no code implementations • 12 Apr 2022 • Benny J. Tang, Qiqi Chen, Matthew L. Weiss, Nathan Frey, Joseph McDonald, David Bestor, Charles Yee, William Arcand, Chansup Byun, Daniel Edelman, Matthew Hubbell, Michael Jones, Jeremy Kepner, Anna Klein, Adam Michaleas, Peter Michaleas, Lauren Milechin, Julia Mullen, Andrew Prout, Albert Reuther, Antonio Rosa, Andrew Bowne, Lindsey McEvoy, Baolin Li, Devesh Tiwari, Vijay Gadepally, Siddharth Samsi
We introduce a labelled dataset that can be used to develop new approaches to workload classification and present initial results based on existing approaches.
3 code implementations • 28 Jan 2022 • Nathan C. Frey, Vijay Gadepally, Bharath Ramsundar
We propose a framework using normalizing-flow-based models, SELF-Referencing Embedded Strings (SELFIES), and multi-objective optimization that efficiently generates small molecules.
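The SELFIES piece of that pipeline is easy to illustrate with the open-source `selfies` package; the sketch below shows only the string representation, not the normalizing-flow generator or the multi-objective optimization:

```python
import selfies as sf  # pip install selfies

smiles = "CC(=O)OC1=CC=CC=C1C(=O)O"      # aspirin, as a SMILES string
s = sf.encoder(smiles)                    # SMILES -> SELFIES
tokens = list(sf.split_selfies(s))        # per-token view, convenient for generative models
roundtrip = sf.decoder(s)                 # any syntactically valid SELFIES decodes to a molecule

print(s)
print(tokens[:5])
print(roundtrip)
```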
no code implementations • 28 Jan 2022 • Nathan C. Frey, Baolin Li, Joseph McDonald, Dan Zhao, Michael Jones, David Bestor, Devesh Tiwari, Vijay Gadepally, Siddharth Samsi
Deep learning (DL) workflows demand an ever-increasing budget of compute and energy in order to achieve outsized gains.
no code implementations • NeurIPS Workshop AI4Scien 2021 • Nathan C. Frey, Siddharth Samsi, Bharath Ramsundar, Connor W. Coley, Vijay Gadepally
Artificial intelligence has not yet revolutionized the design of materials and molecules.
1 code implementation • NeurIPS Workshop AI4Scien 2021 • Nathan C. Frey, Siddharth Samsi, Joseph McDonald, Lin Li, Connor W. Coley, Vijay Gadepally
Deep learning in molecular and materials sciences is limited by the lack of integration between applied science, artificial intelligence, and high-performance computing.
no code implementations • 13 Nov 2021 • Matthew L. Weiss, Nathan C. Frey, Siddharth Samsi, Randy C. Paffenroth, Vijay Gadepally
Traditional frequency-based projection filters, or projection operators (PO), separate signal and noise through a series of transformations that remove frequencies where noise is present.
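A minimal numpy sketch of such a projection operator (an idealized low-pass projection, not the filters developed in the paper):

```python
import numpy as np

def lowpass_projection(x, fs, cutoff_hz):
    """Project a signal onto the subspace of frequencies below cutoff_hz
    by zeroing all higher-frequency Fourier coefficients."""
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
    X[freqs > cutoff_hz] = 0.0
    return np.fft.irfft(X, n=x.size)

# A 5 Hz tone buried in broadband noise, sampled at 1 kHz.
fs = 1000.0
t = np.arange(0, 1, 1 / fs)
clean = np.sin(2 * np.pi * 5 * t)
noisy = clean + 0.5 * np.random.default_rng(0).standard_normal(t.size)
denoised = lowpass_projection(noisy, fs, cutoff_hz=10.0)
print(np.abs(noisy - clean).mean(), np.abs(denoised - clean).mean())
```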
1 code implementation • 18 Sep 2021 • Albert Reuther, Peter Michaleas, Michael Jones, Vijay Gadepally, Siddharth Samsi, Jeremy Kepner
Over the past several years, new machine learning accelerators have been announced and released every month for applications ranging from speech recognition and video object detection to assisted driving and data center workloads.
no code implementations • 25 Aug 2021 • Kaira Samuel, Vijay Gadepally, David Jacobs, Michael Jones, Kyle McAlpin, Kyle Palko, Ben Paulk, Sid Samsi, Ho Chit Siu, Charles Yee, Jeremy Kepner
The Maneuver Identification Challenge hosted at maneuver-id.mit.edu provides thousands of trajectories collected from pilots practicing in flight simulators, descriptions of maneuvers, and examples of these maneuvers performed by experienced pilots.
no code implementations • 4 Aug 2021 • Siddharth Samsi, Matthew L Weiss, David Bestor, Baolin Li, Michael Jones, Albert Reuther, Daniel Edelman, William Arcand, Chansup Byun, John Holodnack, Matthew Hubbell, Jeremy Kepner, Anna Klein, Joseph McDonald, Adam Michaleas, Peter Michaleas, Lauren Milechin, Julia Mullen, Charles Yee, Benjamin Price, Andrew Prout, Antonio Rosa, Allan Vanterpool, Lindsey McEvoy, Anson Cheng, Devesh Tiwari, Vijay Gadepally
In this paper we introduce the MIT Supercloud Dataset which aims to foster innovative AI/ML approaches to the analysis of large scale HPC and datacenter/cloud operations.
no code implementations • 28 Mar 2021 • Jeremy Kepner, Timothy Davis, Vijay Gadepally, Hayden Jananthan, Lauren Milechin
The GraphBLAS standard currently supports hypergraphs, hypersparse matrices, and the mathematics required for semilinks, and it seamlessly performs graph, network, and matrix operations.
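For illustration, the graph-as-matrix duality that GraphBLAS formalizes can be mimicked with ordinary sparse linear algebra: one breadth-first-search step is a matrix-vector product over a Boolean-like semiring (a scipy sketch, not GraphBLAS itself, which supports arbitrary user-defined semirings):

```python
import numpy as np
import scipy.sparse as sp

# Directed graph on 4 vertices as a sparse adjacency matrix: A[i, j] = 1 for edge i -> j.
rows = np.array([0, 0, 1, 2])
cols = np.array([1, 2, 3, 3])
A = sp.csr_matrix((np.ones(4, dtype=np.int8), (rows, cols)), shape=(4, 4))

# One breadth-first-search step: vertices reachable in one hop from the frontier
# are obtained by a (transposed) matrix-vector product, thresholded back to Boolean.
frontier = np.zeros(4, dtype=np.int8)
frontier[0] = 1
next_frontier = (A.T @ frontier) > 0
print(next_frontier)   # [False  True  True False] -- vertex 0 reaches vertices 1 and 2
```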
no code implementations • 2 Mar 2021 • El Kindi Rezig, Michael Cafarella, Vijay Gadepally
In this report, we highlight a number of tools that can be used to simplify data integration and preparation steps.
1 code implementation • 13 Oct 2020 • Matthew Hutchinson, Vijay Gadepally
Many believe that the successes of deep learning on image understanding problems can be replicated in the realm of video understanding.
no code implementations • 1 Sep 2020 • Albert Reuther, Peter Michaleas, Michael Jones, Vijay Gadepally, Siddharth Samsi, Jeremy Kepner
New machine learning accelerators are announced and released each month for applications ranging from speech recognition and video object detection to assisted driving and data center workloads.
no code implementations • 20 Aug 2020 • Matthew Hutchinson, Siddharth Samsi, William Arcand, David Bestor, Bill Bergeron, Chansup Byun, Michael Houle, Matthew Hubbell, Michael Jones, Jeremy Kepner, Andrew Kirby, Peter Michaleas, Lauren Milechin, Julie Mullen, Andrew Prout, Antonio Rosa, Albert Reuther, Charles Yee, Vijay Gadepally
Over the past few years, there has been significant interest in video action recognition systems and models.
no code implementations • 18 Aug 2020 • Siddharth Samsi, Andrew Prout, Michael Jones, Andrew Kirby, Bill Arcand, Bill Bergeron, David Bestor, Chansup Byun, Vijay Gadepally, Michael Houle, Matthew Hubbell, Anna Klein, Peter Michaleas, Lauren Milechin, Julie Mullen, Antonio Rosa, Charles Yee, Albert Reuther, Jeremy Kepner
The large computational requirements for training deep models have necessitated the development of new methods for faster training.
no code implementations • 14 Jul 2020 • Andrew C. Kirby, Siddharth Samsi, Michael Jones, Albert Reuther, Jeremy Kepner, Vijay Gadepally
A Multigrid Full Approximation Storage algorithm for solving deep residual networks is developed to enable parallelized, layer-wise training of neural networks and concurrent execution of computational kernels on GPUs.
no code implementations • 25 Mar 2020 • Jeremy Kepner, Simon Alford, Vijay Gadepally, Michael Jones, Lauren Milechin, Albert Reuther, Ryan Robinett, Sid Samsi
The Sparse Deep Neural Network (DNN) Challenge draws upon prior challenges from machine learning, high performance computing, and visual analytics to create a challenge that is reflective of emerging sparse AI systems.
no code implementations • 18 Mar 2020 • Siddharth Samsi, Jeremy Kepner, Vijay Gadepally, Michael Hurley, Michael Jones, Edward Kao, Sanjeev Mohindra, Albert Reuther, Steven Smith, William Song, Diane Staheli, Paul Monticciolo
In 2017, 2018, and 2019 many triangle counting submissions were received from a wide range of authors and organizations.
Distributed, Parallel, and Cluster Computing • Performance
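The triangle counting task underlying these submissions has a compact linear-algebra formulation; a minimal scipy sketch (illustrative, not a challenge reference implementation):

```python
import numpy as np
import scipy.sparse as sp

def count_triangles(A):
    """Count triangles given a symmetric 0/1 sparse adjacency matrix with empty diagonal."""
    B = A @ A                 # B[i, j] = number of length-2 paths between i and j
    C = A.multiply(B)         # keep only entries where (i, j) is itself an edge
    return int(C.sum()) // 6  # each triangle is counted 6 times (3 edges x 2 directions)

# A triangle {0, 1, 2} plus a pendant vertex 3 attached to 2.
edges = [(0, 1), (1, 2), (0, 2), (2, 3)]
rows = [i for i, j in edges] + [j for i, j in edges]
cols = [j for i, j in edges] + [i for i, j in edges]
A = sp.csr_matrix((np.ones(len(rows)), (rows, cols)), shape=(4, 4))
print(count_triangles(A))     # 1
```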
no code implementations • 27 Nov 2019 • Mihailo Isakov, Vijay Gadepally, Karen M. Gettings, Michel A. Kinsy
Deep Neural Network (DNN) workloads are quickly moving from datacenters onto edge devices, for latency, privacy, or energy reasons.
no code implementations • 2 Sep 2019 • Jeremy Kepner, Simon Alford, Vijay Gadepally, Michael Jones, Lauren Milechin, Ryan Robinett, Sid Samsi
The Sparse DNN Challenge is based on a mathematically well-defined DNN inference computation and can be implemented in any programming environment.
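The inference computation at the heart of the challenge is the recurrence Y_{l+1} = ReLU(Y_l W_l + b_l) over sparse matrices. A minimal scipy sketch follows; the bias handling here is one simple convention for keeping the result sparse, not the official challenge specification:

```python
import numpy as np
import scipy.sparse as sp

def sparse_dnn_inference(Y0, weights, biases):
    """Feed-forward inference Y_{l+1} = ReLU(Y_l @ W_l + b_l) with sparse layers."""
    Y = Y0
    for W, b in zip(weights, biases):
        Z = Y @ W                          # sparse-sparse matrix multiply
        Z.data += b                        # bias added to stored entries only, keeping Z sparse
        Z.data = np.maximum(Z.data, 0.0)   # ReLU applied to the nonzeros
        Z.eliminate_zeros()
        Y = Z
    return Y

# Tiny random example (not challenge data): 8 inputs, 16 features, 3 layers.
Y0 = sp.random(8, 16, density=0.2, format="csr", random_state=0)
Ws = [sp.random(16, 16, density=0.1, format="csr", random_state=i) for i in range(3)]
bs = [-0.05, -0.05, -0.05]
print(sparse_dnn_inference(Y0, Ws, bs).nnz)
```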
no code implementations • 29 Aug 2019 • Albert Reuther, Peter Michaleas, Michael Jones, Vijay Gadepally, Siddharth Samsi, Jeremy Kepner
Advances in multicore processors and accelerators have opened the floodgates to broader exploration and application of machine learning techniques across a wide range of domains.
Performance • B.8; C.4
no code implementations • 20 Aug 2019 • Andrew Prout, William Arcand, David Bestor, Bill Bergeron, Chansup Byun, Vijay Gadepally, Michael Houle, Matthew Hubbell, Michael Jones, Anna Klein, Peter Michaleas, Lauren Milechin, Julie Mullen, Antonio Rosa, Siddharth Samsi, Charles Yee, Albert Reuther, Jeremy Kepner
Federated authentication can drastically reduce the overhead of basic account maintenance while simultaneously improving overall system security.
Distributed, Parallel, and Cluster Computing • Cryptography and Security
no code implementations • 6 Jul 2019 • Jeremy Kepner, Vijay Gadepally, Lauren Milechin, Siddharth Samsi, William Arcand, David Bestor, William Bergeron, Chansup Byun, Matthew Hubbell, Michael Houle, Michael Jones, Anne Klein, Peter Michaleas, Julie Mullen, Andrew Prout, Antonio Rosa, Charles Yee, Albert Reuther
This work describes the design and performance optimization of an implementation of hierarchical associative arrays that reduces memory pressure and dramatically increases the update rate into an associative array.
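As a rough sketch of the hierarchical idea only (not the paper's associative array design), a two-level structure can absorb a stream of updates in a small layer and periodically spill it into a larger base layer:

```python
class HierarchicalAssociativeArray:
    """Two-level sketch of a hierarchical associative array: a small update
    layer absorbs the incoming stream and is periodically spilled into a
    larger base layer, combining duplicate keys with an associative op."""

    def __init__(self, spill_threshold=4, combine=lambda a, b: a + b):
        self.spill_threshold = spill_threshold
        self.combine = combine
        self.update_layer = {}   # small, fast, frequently written
        self.base_layer = {}     # large, merged into only in batches

    def insert(self, key, value):
        if key in self.update_layer:
            self.update_layer[key] = self.combine(self.update_layer[key], value)
        else:
            self.update_layer[key] = value
        if len(self.update_layer) >= self.spill_threshold:
            self._spill()

    def _spill(self):
        # Batch-merge the update layer into the base layer; batching is what
        # reduces pressure on the large store during high-rate update streams.
        for k, v in self.update_layer.items():
            self.base_layer[k] = self.combine(self.base_layer[k], v) if k in self.base_layer else v
        self.update_layer.clear()

    def get(self, key, default=None):
        if key in self.update_layer and key in self.base_layer:
            return self.combine(self.base_layer[key], self.update_layer[key])
        if key in self.update_layer:
            return self.update_layer[key]
        return self.base_layer.get(key, default)

# Count word occurrences from a stream of updates.
a = HierarchicalAssociativeArray(spill_threshold=2)
for w in ["graph", "array", "graph", "matrix", "graph", "array"]:
    a.insert(w, 1)
print(a.get("graph"), a.get("array"), a.get("matrix"))   # 3 2 1
```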
no code implementations • 8 May 2019 • Vijay Gadepally, Justin Goodwin, Jeremy Kepner, Albert Reuther, Hayley Reynolds, Siddharth Samsi, Jonathan Su, David Martinez
Artificial Intelligence (AI) has the opportunity to revolutionize the way the United States Department of Defense (DoD) and Intelligence Community (IC) address the challenges of evolving threats, data deluge, and rapid courses of action.
no code implementations • 3 Feb 2019 • Jeremy Kepner, Vijay Gadepally, Lauren Milechin, Siddharth Samsi, William Arcand, David Bestor, William Bergeron, Chansup Byun, Matthew Hubbell, Michael Houle, Michael Jones, Anne Klein, Peter Michaleas, Julie Mullen, Andrew Prout, Antonio Rosa, Charles Yee, Albert Reuther
Streaming updates to a large associative array requires a hierarchical implementation to optimize the performance of the memory hierarchy.
Databases • Distributed, Parallel, and Cluster Computing • Data Structures and Algorithms • Networking and Internet Architecture
no code implementations • 14 Jul 2018 • Jeremy Kepner, Ron Brightwell, Alan Edelman, Vijay Gadepally, Hayden Jananthan, Michael Jones, Sam Madden, Peter Michaleas, Hamed Okhravi, Kevin Pedretti, Albert Reuther, Thomas Sterling, Mike Stonebraker
In this context, an operating system can be viewed as software that brokers and tracks the resources of the compute engines and is akin to a database management system.
Distributed, Parallel, and Cluster Computing • Databases • Operating Systems • Performance
no code implementations • 6 Jul 2018 • Jeremy Kepner, Vijay Gadepally, Hayden Jananthan, Lauren Milechin, Sid Samsi
This work uses associative array DNNs to construct exact solutions, and corresponding perturbation models, to the rectified linear unit (ReLU) DNN equations; these can be used to construct test vectors for sparse DNN implementations at various precisions.
no code implementations • 23 Aug 2017 • Siddharth Samsi, Vijay Gadepally, Michael Hurley, Michael Jones, Edward Kao, Sanjeev Mohindra, Paul Monticciolo, Albert Reuther, Steven Smith, William Song, Diane Staheli, Jeremy Kepner
The proposed Subgraph Isomorphism Graph Challenge draws upon prior challenges from machine learning, high performance computing, and visual analytics to create a graph challenge that is reflective of many real-world graph analytics processing systems.
Distributed, Parallel, and Cluster Computing • Data Structures and Algorithms
no code implementations • 12 Jul 2017 • Chansup Byun, Jeremy Kepner, William Arcand, David Bestor, Bill Bergeron, Vijay Gadepally, Michael Houle, Matthew Hubbell, Michael Jones, Anna Klein, Peter Michaleas, Lauren Milechin, Julie Mullen, Andrew Prout, Antonio Rosa, Siddharth Samsi, Charles Yee, Albert Reuther
Thus, the performance of these applications on KNL systems is of high interest to LLSC users and the broader data analysis and machine learning communities.
Performance • Instrumentation and Methods for Astrophysics • Distributed, Parallel, and Cluster Computing • Computational Physics
no code implementations • 11 Jul 2016 • Vijay Gadepally, Ashok Krishnamurthy
The long-term driver behavior estimation system involves an extended HSS+HMM structure that is capable of including external information in the estimation process.
no code implementations • 18 Oct 2015 • Brendan Gavin, Vijay Gadepally, Jeremy Kepner
Non-negative matrix factorization (NMF) is a common method for generating topic models from text data.
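For context, a minimal topic-model pipeline built on scikit-learn's NMF (a toy illustration on a handful of short documents, not the implementation studied in the paper):

```python
from sklearn.decomposition import NMF
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "the spacecraft launched into orbit around mars",
    "the rocket engine fired during the orbital launch",
    "the team won the hockey game in overtime",
    "the goalie made a great save in the playoff game",
]
tfidf = TfidfVectorizer(stop_words="english")
X = tfidf.fit_transform(docs)                   # documents x terms (sparse, non-negative)
nmf = NMF(n_components=2, init="nndsvd", random_state=0)
W = nmf.fit_transform(X)                        # document-topic weights
H = nmf.components_                             # topic-term weights
terms = tfidf.get_feature_names_out()
for k, topic in enumerate(H):
    top_terms = [terms[i] for i in topic.argsort()[::-1][:4]]
    print(f"topic {k}:", top_terms)
```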