Search Results for author: Max Lam

Found 3 papers, 2 papers with code

Gradient Diversity: a Key Ingredient for Scalable Distributed Learning

no code implementations • 18 Jun 2017 • Dong Yin, Ashwin Pananjady, Max Lam, Dimitris Papailiopoulos, Kannan Ramchandran, Peter Bartlett

It has been experimentally observed that distributed implementations of mini-batch stochastic gradient descent (SGD) algorithms exhibit speedup saturation and decaying generalization ability beyond a particular batch-size.

Quantization
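
The gradient diversity quantity in the title measures how uncorrelated per-example gradients are, which the paper connects to how large a mini-batch distributed SGD can use before speedup saturates. A minimal NumPy sketch of the ratio follows; the function name and array layout are illustrative choices, not taken from the paper:

    import numpy as np

    def gradient_diversity(grads):
        # Gradient diversity of per-example gradients g_1..g_n:
        #   sum_i ||g_i||^2 / ||sum_i g_i||^2
        # Identical gradients give 1/n; orthogonal gradients give 1,
        # the regime associated with scaling to larger batch sizes.
        flat = [np.asarray(g, dtype=float).ravel() for g in grads]
        sum_sq = sum(g @ g for g in flat)
        total = np.sum(flat, axis=0)
        return sum_sq / (total @ total)

    # Two orthogonal gradients print 1.0; two identical ones would print 0.5.
    print(gradient_diversity([np.array([1.0, 0.0]), np.array([0.0, 1.0])]))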

Benchmarking TinyML Systems: Challenges and Direction

2 code implementations • 10 Mar 2020 • Colby R. Banbury, Vijay Janapa Reddi, Max Lam, William Fu, Amin Fazel, Jeremy Holleman, Xinyuan Huang, Robert Hurtado, David Kanter, Anton Lokhmotov, David Patterson, Danilo Pau, Jae-sun Seo, Jeff Sieracki, Urmish Thakker, Marian Verhelst, Poonam Yadav

In this position paper, we present the current landscape of TinyML and discuss the challenges and direction towards developing a fair and useful hardware benchmark for TinyML workloads.

Benchmarking • Position

Tabula: Efficiently Computing Nonlinear Activation Functions for Private Neural Network Inference

1 code implementation • 29 Sep 2021 • Max Lam, Michael Mitzenmacher, Vijay Janapa Reddi, Gu-Yeon Wei, David Brooks

Multiparty computation approaches to private neural network inference require significant communication between server and client, incur tremendous runtime penalties, and impose massive storage overheads.
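
As the title indicates, the paper's remedy is to evaluate nonlinear activation functions via precomputed lookup tables over quantized values rather than via general-purpose secure computation. Below is a deliberately plain sketch of the table-lookup idea only; the bit width, scale, and names are assumptions for illustration, and the secret-sharing machinery a real private-inference protocol requires is omitted:

    import numpy as np

    BITS = 8                   # assumed activation precision for this sketch
    SCALE = 0.05               # hypothetical fixed-point scale factor
    OFFSET = 1 << (BITS - 1)   # shifts signed codes into table indices

    # Precompute the nonlinearity once over every representable code.
    codes = np.arange(-OFFSET, OFFSET, dtype=np.int64)
    relu_table = np.maximum(codes * SCALE, 0.0)

    def relu_lookup(q):
        # Evaluate ReLU on quantized codes with a single table index,
        # replacing per-element nonlinear arithmetic with a memory read.
        return relu_table[q + OFFSET]

    x = np.array([-40, 0, 25], dtype=np.int64)
    print(relu_lookup(x))  # [0.   0.   1.25]

The table costs storage that grows with 2^BITS per activation function, which reflects the runtime-versus-storage trade-off the abstract alludes to.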
