Search Results for author: Mattan Erez

Found 8 papers, 2 papers with code

Artemis: HE-Aware Training for Efficient Privacy-Preserving Machine Learning

no code implementations • 2 Oct 2023 • Yeonsoo Jeon, Mattan Erez, Michael Orshansky

Privacy-Preserving ML (PPML) based on Homomorphic Encryption (HE) is a promising foundational privacy technology.

Model Compression • Privacy Preserving
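
As background for the HE setting this paper targets, the toy Python sketch below demonstrates the basic homomorphic property (computing directly on ciphertexts) using a miniature Paillier-style additively homomorphic scheme. The primes, parameters, and helpers are illustrative assumptions only; practical PPML systems typically use lattice-based schemes such as CKKS, and this is not the paper's method.

```python
import random
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

# Toy Paillier keypair with tiny fixed primes (illustration only, not secure).
p, q = 293, 433
n = p * q
n2 = n * n
g = n + 1
lam = lcm(p - 1, q - 1)

def L(x):
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)  # modular inverse, needs Python 3.8+

def encrypt(m):
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

a, b = 42, 99
ca, cb = encrypt(a), encrypt(b)
# Additive homomorphism: multiplying ciphertexts adds the plaintexts.
assert decrypt((ca * cb) % n2) == (a + b) % n
print(decrypt((ca * cb) % n2))  # 141
```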

FlexSA: Flexible Systolic Array Architecture for Efficient Pruned DNN Model Training

no code implementations • 27 Apr 2020 • Sangkug Lym, Mattan Erez

Based on our evaluation, FlexSA with the proposed compilation heuristic improves compute resource utilization of pruning and training modern CNN models by 37% compared to a conventional training accelerator with a large systolic array.
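
For readers unfamiliar with the baseline architecture named in the title, the short Python sketch below simulates how an output-stationary systolic array computes a matrix multiply, with operands skewed so each processing element receives its inputs one hop per cycle. It is a generic illustration of the architecture class, not FlexSA's flexible design or its compilation heuristic.

```python
import numpy as np

def systolic_matmul(A, B):
    """Toy cycle-by-cycle model of an output-stationary systolic array.

    PE (i, j) accumulates A[i, :] @ B[:, j]; A values flow right and
    B values flow down, one hop per cycle, so PE (i, j) sees the pair
    (A[i, k], B[k, j]) at cycle k + i + j.
    """
    M, K = A.shape
    K2, N = B.shape
    assert K == K2
    acc = np.zeros((M, N))
    total_cycles = K + M + N - 2
    for cycle in range(total_cycles):
        for i in range(M):
            for j in range(N):
                k = cycle - i - j
                if 0 <= k < K:
                    acc[i, j] += A[i, k] * B[k, j]
    return acc

A = np.random.randn(4, 8)
B = np.random.randn(8, 4)
assert np.allclose(systolic_matmul(A, B), A @ B)
```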

Buddy Compression: Enabling Larger Memory for Deep Learning and HPC Workloads on GPUs

no code implementations • 6 Mar 2019 • Esha Choukse, Michael Sullivan, Mike O'Connor, Mattan Erez, Jeff Pool, David Nellans, Steve Keckler

However, GPU device memory tends to be relatively small, and its capacity cannot be increased by the user.

Hardware Architecture
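
The general lever such designs rely on is that many deep-learning and HPC buffers are highly compressible, so compressed data can back a larger effective memory. The hypothetical sketch below simply measures that compressibility on a mostly-zero activation tensor with zlib; it is not Buddy Compression's actual hardware scheme.

```python
import numpy as np
import zlib

# Mostly-zero activation tensor: low-entropy data like this is what
# memory-compression schemes exploit to fit larger working sets in a
# fixed-size device memory.
activations = np.zeros((1024, 1024), dtype=np.float32)
activations[::4, ::4] = np.random.randn(256, 256).astype(np.float32)

raw = activations.tobytes()
packed = zlib.compress(raw, 1)
print(f"raw: {len(raw) / 2**20:.1f} MiB, "
      f"compressed: {len(packed) / 2**20:.1f} MiB, "
      f"ratio: {len(raw) / len(packed):.1f}x")
```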

PruneTrain: Fast Neural Network Training by Dynamic Sparse Model Reconfiguration

1 code implementation • 26 Jan 2019 • Sangkug Lym, Esha Choukse, Siavash Zangeneh, Wei Wen, Sujay Sanghavi, Mattan Erez

State-of-the-art convolutional neural networks (CNNs) used in vision applications have large models with numerous weights.
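
As a rough illustration of what "dynamic sparse model reconfiguration" can look like in practice, the hedged PyTorch sketch below applies a group-lasso-style penalty to whole output channels of a convolution and then rebuilds a smaller layer from the surviving channels. The threshold, penalty weight, and layer sizes are made-up illustrative values, and this is not PruneTrain's exact algorithm.

```python
import torch
import torch.nn as nn

conv = nn.Conv2d(in_channels=16, out_channels=32, kernel_size=3, padding=1)
x = torch.randn(8, 16, 28, 28)

# Group-lasso penalty: L2 norm of each output channel's weights, summed.
# Over many training steps this drives entire channels toward zero.
channel_norms = conv.weight.flatten(1).norm(dim=1)           # shape [32]
loss = conv(x).pow(2).mean() + 1e-3 * channel_norms.sum()    # dummy task loss + penalty
loss.backward()

# Periodically drop near-zero channels and rebuild a smaller layer.
keep = (channel_norms > 1e-2).nonzero(as_tuple=True)[0]
smaller = nn.Conv2d(16, len(keep), kernel_size=3, padding=1)
smaller.weight.data = conv.weight.data[keep].clone()
smaller.bias.data = conv.bias.data[keep].clone()
print(f"kept {len(keep)} of 32 output channels")
```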

Mini-batch Serialization: CNN Training with Inter-layer Data Reuse

1 code implementation • 30 Sep 2018 • Sangkug Lym, Armand Behroozi, Wei Wen, Ge Li, Yongkee Kwon, Mattan Erez

Training convolutional neural networks (CNNs) requires intense computations and high memory bandwidth.
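
One generic way to reduce the memory traffic this sentence alludes to is to carry a small slice of the mini-batch through several consecutive layers while its activations are still resident, rather than streaming the full batch one layer at a time. The PyTorch sketch below only shows that such sub-batch serialization is numerically equivalent to the standard schedule; it is a conceptual sketch, not the paper's MBS scheduling.

```python
import torch
import torch.nn as nn

layers = nn.Sequential(
    nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
)
batch = torch.randn(32, 3, 32, 32)
sub_batch_size = 8

outputs = []
for start in range(0, batch.size(0), sub_batch_size):
    sub = batch[start:start + sub_batch_size]
    outputs.append(layers(sub))      # each slice traverses all layers back-to-back
result = torch.cat(outputs, dim=0)

# Same result as pushing the whole mini-batch through layer by layer.
assert torch.allclose(result, layers(batch), atol=1e-5)
```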
