Search Results for author: Gu-Yeon Wei

Found 39 papers, 11 papers with code

Carbon Connect: An Ecosystem for Sustainable Computing

no code implementations22 May 2024 Benjamin C. Lee, David Brooks, Arthur van Benthem, Udit Gupta, Gage Hills, Vincent Liu, Benjamin Pierce, Christopher Stewart, Emma Strubell, Gu-Yeon Wei, Adam Wierman, Yuan YAO, Minlan Yu

For embodied carbon, we must re-think conventional design strategies -- over-provisioned monolithic servers, frequent hardware refresh cycles, custom silicon -- and adopt life-cycle design strategies that more effectively reduce, reuse and recycle hardware at scale.

Management

Is Flash Attention Stable?

no code implementations5 May 2024 Alicia Golden, Samuel Hsia, Fei Sun, Bilge Acun, Basil Hosmer, Yejin Lee, Zachary DeVito, Jeff Johnson, Gu-Yeon Wei, David Brooks, Carole-Jean Wu

Training large-scale machine learning models poses distinct system challenges, given both the size and complexity of today's workloads.

Generative AI Beyond LLMs: System Implications of Multi-Modal Generation

no code implementations22 Dec 2023 Alicia Golden, Samuel Hsia, Fei Sun, Bilge Acun, Basil Hosmer, Yejin Lee, Zachary DeVito, Jeff Johnson, Gu-Yeon Wei, David Brooks, Carole-Jean Wu

As the development of large-scale Generative AI models evolves beyond text (1D) generation to include image (2D) and video (3D) generation, processing spatial and temporal information presents unique challenges to quality, performance, and efficiency.

3D Generation

Hardware Resilience Properties of Text-Guided Image Classifiers

1 code implementation NeurIPS 2023 Syed Talal Wasim, Kabila Haile Soboka, Abdulrahman Mahmoud, Salman Khan, David Brooks, Gu-Yeon Wei

This paper presents a novel method to enhance the reliability of image classification models during deployment in the face of transient hardware errors.

Classification Image Classification

MAD Max Beyond Single-Node: Enabling Large Machine Learning Model Acceleration on Distributed Systems

no code implementations4 Oct 2023 Samuel Hsia, Alicia Golden, Bilge Acun, Newsha Ardalani, Zachary DeVito, Gu-Yeon Wei, David Brooks, Carole-Jean Wu

Training and deploying large-scale machine learning models is time-consuming, requires significant distributed computing infrastructures, and incurs high operational costs.

Distributed Computing

Guess & Sketch: Language Model Guided Transpilation

no code implementations25 Sep 2023 Celine Lee, Abdulrahman Mahmoud, Michal Kurek, Simone Campanoni, David Brooks, Stephen Chong, Gu-Yeon Wei, Alexander M. Rush

In this work, we leverage the strengths of LMs and symbolic solvers in a neurosymbolic approach to learned transpilation for assembly code.

Language Modelling Translation

INT2.1: Towards Fine-Tunable Quantized Large Language Models with Error Correction through Low-Rank Adaptation

1 code implementation13 Jun 2023 Yuji Chai, John Gkountouras, Glenn G. Ko, David Brooks, Gu-Yeon Wei

We introduce a method that dramatically reduces fine-tuning VRAM requirements and rectifies quantization errors in quantized Large Language Models.

Language Modelling Large Language Model +1
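
The error-correction idea in the abstract can be sketched numerically: freeze a quantized weight matrix and fit a small low-rank adapter to its quantization residual, so fine-tuning touches only the thin factors. Below is a minimal NumPy sketch under my own assumptions (a uniform symmetric quantizer and an SVD-initialized adapter), not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize(w, bits=2):
    # Uniform symmetric quantization to `bits` bits (illustrative only).
    levels = 2 ** (bits - 1) - 1
    scale = np.abs(w).max() / levels
    return np.round(w / scale) * scale

d, r = 64, 4                      # hidden size, adapter rank
w = rng.standard_normal((d, d))   # full-precision weight
w_q = quantize(w)                 # frozen quantized weight

# A low-rank adapter A @ B corrects the quantization residual; only
# A and B (2*d*r values) would be trained, never w_q (d*d values).
u, s, vt = np.linalg.svd(w - w_q)
a = u[:, :r] * s[:r]
b = vt[:r, :]

err_before = np.linalg.norm(w - w_q)
err_after = np.linalg.norm(w - (w_q + a @ b))
assert err_after < err_before     # rank-r correction shrinks the residual
```

The adapter keeps the memory cost of fine-tuning proportional to the rank rather than the full weight matrix, which is consistent with the VRAM reduction the abstract claims.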

CAMEL: Co-Designing AI Models and Embedded DRAMs for Efficient On-Device Learning

no code implementations4 May 2023 Sai Qian Zhang, Thierry Tambe, Nestor Cuevas, Gu-Yeon Wei, David Brooks

To minimize the occurrence of expensive eDRAM refresh operations, it is beneficial to shorten the lifetime of stored data during the training process.

MP-Rec: Hardware-Software Co-Design to Enable Multi-Path Recommendation

no code implementations21 Feb 2023 Samuel Hsia, Udit Gupta, Bilge Acun, Newsha Ardalani, Pan Zhong, Gu-Yeon Wei, David Brooks, Carole-Jean Wu

Based on our characterization of various embedding representations, we propose a hybrid embedding representation that achieves higher quality embeddings at the cost of increased memory and compute requirements.

Recommendation Systems

PerfSAGE: Generalized Inference Performance Predictor for Arbitrary Deep Learning Models on Edge Devices

no code implementations26 Jan 2023 Yuji Chai, Devashree Tripathy, Chuteng Zhou, Dibakar Gope, Igor Fedorov, Ramon Matas, David Brooks, Gu-Yeon Wei, Paul Whatmough

The ability to accurately predict deep neural network (DNN) inference performance metrics, such as latency, power, and memory footprint, for an arbitrary DNN on a target hardware platform is essential to the design of DNN-based models.

Graph Neural Network

GPU-based Private Information Retrieval for On-Device Machine Learning Inference

1 code implementation26 Jan 2023 Maximilian Lam, Jeff Johnson, Wenjie Xiong, Kiwan Maeng, Udit Gupta, Yang Li, Liangzhen Lai, Ilias Leontiadis, Minsoo Rhu, Hsien-Hsin S. Lee, Vijay Janapa Reddi, Gu-Yeon Wei, David Brooks, G. Edward Suh

Together, for various on-device ML applications such as recommendation and language modeling, our system on a single V100 GPU can serve up to $100,000$ queries per second -- a $>100 \times$ throughput improvement over a CPU-based baseline -- while maintaining model accuracy.

Information Retrieval Language Modelling +1

Architectural Implications of Embedding Dimension during GCN on CPU and GPU

no code implementations1 Dec 2022 Matthew Adiletta, David Brooks, Gu-Yeon Wei

Graph Neural Networks (GNNs) are a class of neural networks designed to extract information from the graphical structure of data.

Graph Learning

SpeedLimit: Neural Architecture Search for Quantized Transformer Models

no code implementations25 Sep 2022 Yuji Chai, Luke Bailey, Yunho Jin, Matthew Karle, Glenn G. Ko, David Brooks, Gu-Yeon Wei, H. T. Kung

While research in the field of transformer models has primarily focused on enhancing performance metrics such as accuracy and perplexity, practical applications in industry often necessitate a rigorous consideration of inference latency constraints.

Neural Architecture Search Quantization +1

Tabula: Efficiently Computing Nonlinear Activation Functions for Secure Neural Network Inference

no code implementations5 Mar 2022 Maximilian Lam, Michael Mitzenmacher, Vijay Janapa Reddi, Gu-Yeon Wei, David Brooks

This enables an online phase where securely computing the result of a nonlinear function requires just a single round of communication, with communication cost equal to twice the number of bits of the input to the nonlinear function.

Quantization
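
The stated cost model (one online round, with traffic equal to twice the bit-width of each nonlinear function's input) is easy to turn into a back-of-the-envelope calculator. The function name and the per-layer aggregation are mine, not the paper's.

```python
def tabula_online_bits(num_activations: int, input_bits: int) -> int:
    # Per the abstract: securely evaluating one nonlinear function costs
    # a single round with 2x the input bit-width of communication, so
    # a layer's online traffic scales as 2 * bits * activations.
    return 2 * input_bits * num_activations

# e.g. a layer with 4096 nonlinear inputs quantized to 8 bits:
print(tabula_online_bits(4096, 8))  # 65536 bits, i.e. 8 KiB per layer
```

Linear scaling in both the activation count and the quantized bit-width is why the quantization tag accompanies this entry: fewer input bits directly cut online communication.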

Tabula: Efficiently Computing Nonlinear Activation Functions for Private Neural Network Inference

1 code implementation29 Sep 2021 Max Lam, Michael Mitzenmacher, Vijay Janapa Reddi, Gu-Yeon Wei, David Brooks

Multiparty computation approaches to private neural network inference require significant communication between server and client, incur tremendous runtime penalties, and impose massive storage overheads.

AutoPilot: Automating SoC Design Space Exploration for SWaP Constrained Autonomous UAVs

no code implementations5 Feb 2021 Srivatsan Krishnan, Zishen Wan, Kshitij Bhardwaj, Paul Whatmough, Aleksandra Faust, Sabrina Neuman, Gu-Yeon Wei, David Brooks, Vijay Janapa Reddi

Balancing a computing system for a UAV requires considering both the cyber (e.g., sensor rate, compute performance) and physical (e.g., payload weight) characteristics that affect overall performance.

Bayesian Optimization BIG-bench Machine Learning +1

RecSSD: Near Data Processing for Solid State Drive Based Recommendation Inference

no code implementations29 Jan 2021 Mark Wilkening, Udit Gupta, Samuel Hsia, Caroline Trippel, Carole-Jean Wu, David Brooks, Gu-Yeon Wei

Neural personalized recommendation models are used across a wide variety of datacenter applications including search, social media, and entertainment.

CHIPKIT: An agile, reusable open-source framework for rapid test chip development

2 code implementations13 Jan 2020 Paul Whatmough, Marco Donato, Glenn Ko, Sae-Kyu Lee, David Brooks, Gu-Yeon Wei

The current trend for domain-specific architectures (DSAs) has led to renewed interest in research test chips to demonstrate new specialized hardware.

Hardware Architecture

DeepRecSys: A System for Optimizing End-To-End At-scale Neural Recommendation Inference

no code implementations8 Jan 2020 Udit Gupta, Samuel Hsia, Vikram Saraph, Xiaodong Wang, Brandon Reagen, Gu-Yeon Wei, Hsien-Hsin S. Lee, David Brooks, Carole-Jean Wu

Neural personalized recommendation is the cornerstone of a wide collection of cloud services and products, constituting a significant share of the cloud infrastructure's compute demand.

Distributed, Parallel, and Cluster Computing

A binary-activation, multi-level weight RNN and training algorithm for ADC-/DAC-free and noise-resilient processing-in-memory inference with eNVM

no code implementations30 Nov 2019 Siming Ma, David Brooks, Gu-Yeon Wei

We propose a new algorithm for training neural networks with binary activations and multi-level weights, which enables efficient processing-in-memory circuits with embedded nonvolatile memories (eNVM).

Quantization
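
A forward pass with binary activations and multi-level weights can be sketched as follows. The level count, shapes, and weight quantizer are illustrative assumptions; the paper's actual training algorithm, which is what makes such networks trainable despite the sign nonlinearity, is not reproduced here.

```python
import numpy as np

def binarize(x):
    # Binary activations: a sign nonlinearity maps pre-activations
    # to {-1, +1}, removing the need for ADCs at the array output.
    return np.where(x >= 0, 1.0, -1.0)

def quantize_weights(w, levels=4):
    # Multi-level weights: round to `levels` evenly spaced values,
    # matching the multi-bit eNVM cells the abstract targets.
    lo, hi = w.min(), w.max()
    step = (hi - lo) / (levels - 1)
    return np.round((w - lo) / step) * step + lo

rng = np.random.default_rng(1)
w_hh = quantize_weights(rng.standard_normal((32, 32)))  # recurrent weights
w_xh = quantize_weights(rng.standard_normal((8, 32)))   # input weights

h = np.zeros(32)
for x in rng.standard_normal((5, 8)):   # 5 time steps of 8-dim input
    h = binarize(x @ w_xh + h @ w_hh)   # hidden state stays binary

assert set(np.unique(h)) <= {-1.0, 1.0}
```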

AdaptivFloat: A Floating-point based Data Type for Resilient Deep Learning Inference

no code implementations29 Sep 2019 Thierry Tambe, En-Yu Yang, Zishen Wan, Yuntian Deng, Vijay Janapa Reddi, Alexander Rush, David Brooks, Gu-Yeon Wei

Conventional hardware-friendly quantization methods, such as fixed-point or integer, tend to perform poorly at very low word sizes as their shrinking dynamic ranges cannot adequately capture the wide data distributions commonly seen in sequence transduction models.

Quantization
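
The quantization idea, a low-bit floating-point format whose exponent range is shifted per tensor to cover that tensor's own dynamic range, can be sketched roughly as below. The bit widths, bias rule, and rounding details are simplifying assumptions of mine, not the published AdaptivFloat specification.

```python
import numpy as np

def adaptive_float_quantize(x, exp_bits=3, man_bits=2):
    # Anchor the representable exponent window at the tensor's maximum
    # magnitude, so the format's dynamic range tracks the data.
    max_exp = np.floor(np.log2(np.abs(x).max()))
    min_exp = max_exp - (2 ** exp_bits - 1)

    sign = np.sign(x)
    mag = np.abs(x)
    # Clamp each value's exponent into the window, then round the
    # mantissa to `man_bits` fractional bits.
    exp = np.clip(np.floor(np.log2(np.maximum(mag, 2.0 ** min_exp))),
                  min_exp, max_exp)
    man = np.round(mag / 2.0 ** exp * 2 ** man_bits) / 2 ** man_bits
    return sign * man * 2.0 ** exp
```

Because the window follows the tensor's maximum rather than being fixed, large-magnitude outliers in wide distributions stay representable, which is the failure mode the abstract attributes to fixed-point and integer formats at very low word sizes.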

Decoupling Weight Regularization from Batch Size for Model Compression

no code implementations25 Sep 2019 Dongsoo Lee, Se Jung Kwon, Byeongwook Kim, Yongkweon Jeon, Baeseong Park, Jeongin Yun, Gu-Yeon Wei

Using various models, we show that simple weight updates to comply with compression formats, along with a long NR period, are enough to achieve a high compression ratio and model accuracy.

Model Compression

Network Pruning for Low-Rank Binary Index

no code implementations25 Sep 2019 Dongsoo Lee, Se Jung Kwon, Byeongwook Kim, Parichay Kapoor, Gu-Yeon Wei

In this paper, we propose a new network pruning technique that generates a low-rank binary index matrix to compress index data significantly.

Model Compression Network Pruning +1

MASR: A Modular Accelerator for Sparse RNNs

no code implementations23 Aug 2019 Udit Gupta, Brandon Reagen, Lillian Pentecost, Marco Donato, Thierry Tambe, Alexander M. Rush, Gu-Yeon Wei, David Brooks

The architecture is enhanced by a series of dynamic activation optimizations that enable compact storage, ensure no energy is wasted computing null operations, and maintain high MAC utilization for highly parallel accelerator designs.

Speech Recognition

Benchmarking TPU, GPU, and CPU Platforms for Deep Learning

1 code implementation24 Jul 2019 Yu Emma Wang, Gu-Yeon Wei, David Brooks

Training deep learning models is compute-intensive and there is an industry-wide trend towards hardware specialization to improve performance.

Benchmarking Deep Learning

Structured Compression by Weight Encryption for Unstructured Pruning and Quantization

no code implementations CVPR 2020 Se Jung Kwon, Dongsoo Lee, Byeongwook Kim, Parichay Kapoor, Baeseong Park, Gu-Yeon Wei

Model compression techniques, such as pruning and quantization, are becoming increasingly important to reduce memory footprints and the amount of computation.

Model Compression Quantization

Learning Low-Rank Approximation for CNNs

no code implementations24 May 2019 Dongsoo Lee, Se Jung Kwon, Byeongwook Kim, Gu-Yeon Wei

Low-rank approximation is an effective model compression technique that reduces not only parameter storage requirements but also computation.

Model Compression
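
The basic mechanics are easy to demonstrate: factor a weight matrix through a truncated SVD so that both the parameter count and the multiply-accumulate count shrink. This is a minimal sketch on a synthetic near-low-rank matrix (trained layers are only approximately low-rank, which is why the paper studies how to learn the approximation).

```python
import numpy as np

rng = np.random.default_rng(3)
# A weight matrix with approximately low-rank structure,
# constructed synthetically here for illustration.
w = rng.standard_normal((256, 32)) @ rng.standard_normal((32, 256))
w += 0.01 * rng.standard_normal((256, 256))

r = 32
u, s, vt = np.linalg.svd(w, full_matrices=False)
w1 = u[:, :r] * s[:r]          # 256 x 32 factor
w2 = vt[:r, :]                 # 32 x 256 factor

# One 256x256 matmul (65,536 MACs per input vector) becomes two thin
# matmuls totalling 2 * 256 * 32 = 16,384 MACs; storage drops likewise.
rel_err = np.linalg.norm(w - w1 @ w2) / np.linalg.norm(w)
assert rel_err < 0.05
assert w1.size + w2.size < w.size
```

A layer `y = x @ w` is then served as `y = (x @ w1) @ w2` with a 4x reduction in both weights and arithmetic at rank 32.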

Network Pruning for Low-Rank Binary Indexing

no code implementations14 May 2019 Dongsoo Lee, Se Jung Kwon, Byeongwook Kim, Parichay Kapoor, Gu-Yeon Wei

Pruning is an efficient model compression technique to remove redundancy in the connectivity of deep neural networks (DNNs).

Model Compression Network Pruning

Cloud No Longer a Silver Bullet, Edge to the Rescue

no code implementations15 Feb 2018 Yuhao Zhu, Gu-Yeon Wei, David Brooks

This paper takes the position that, while cognitive computing today relies heavily on the cloud, we will soon see a paradigm shift where cognitive computing primarily happens on network edges.

Position

Fathom: Reference Workloads for Modern Deep Learning Methods

1 code implementation23 Aug 2016 Robert Adolf, Saketh Rama, Brandon Reagen, Gu-Yeon Wei, David Brooks

Fathom has been released online, and this paper focuses on understanding the fundamental performance characteristics of each model.

Deep Learning Specificity
