no code implementations • ICML 2020 • Mark Kurtz, Justin Kopinsky, Rati Gelashvili, Alexander Matveev, John Carr, Michael Goin, William Leiserson, Sage Moore, Nir Shavit, Dan Alistarh
In this paper, we present an in-depth analysis of methods for maximizing the sparsity of the activations in a trained neural network, and show that, when coupled with an efficient sparse-input convolution algorithm, we can leverage this sparsity for significant performance gains.
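A minimal sketch of the idea behind a sparse-input convolution, not the paper's kernel: iterate only over the nonzero (post-ReLU) activations and scatter their contributions, so the cost scales with the number of nonzeros rather than the full input size. All function and variable names here are illustrative.

```python
import numpy as np

def sparse_input_conv2d(x, w):
    """x: (H, W) post-ReLU activations, w: (kH, kW) filter; 'valid' cross-correlation."""
    H, W = x.shape
    kH, kW = w.shape
    out = np.zeros((H - kH + 1, W - kW + 1))
    ys, xs = np.nonzero(x)                     # visit only nonzero activations
    for i, j in zip(ys, xs):
        v = x[i, j]
        # this nonzero input contributes to output positions (i - di, j - dj)
        for di in range(kH):
            for dj in range(kW):
                oi, oj = i - di, j - dj
                if 0 <= oi < out.shape[0] and 0 <= oj < out.shape[1]:
                    out[oi, oj] += v * w[di, dj]
    return out

# Sanity check against a dense reference on a sparse (post-ReLU) input.
rng = np.random.default_rng(0)
x = np.maximum(rng.standard_normal((16, 16)), 0)
x[x < 0.8] = 0                                 # make the activations sparser
w = rng.standard_normal((3, 3))
dense = np.zeros((14, 14))
for oi in range(14):
    for oj in range(14):
        dense[oi, oj] = np.sum(x[oi:oi + 3, oj:oj + 3] * w)
assert np.allclose(sparse_input_conv2d(x, w), dense)
```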
no code implementations • 4 Mar 2021 • Rati Gelashvili, Lefteris Kokoris-Kogias, Alexander Spiegelman, Zhuolun Xiang
State-of-the-art partially synchronous Byzantine fault-tolerant (BFT) state machine replication (SMR) protocols provide optimal linear communication cost per decision under synchrony with good leaders, but lose liveness under asynchrony.
Distributed, Parallel, and Cluster Computing • Cryptography and Security
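For context, an illustrative calculation (not the paper's protocol) of the standard BFT parameters behind the claim above: with n = 3f + 1 replicas, a quorum is 2f + 1, and a leader that aggregates votes (e.g., via threshold signatures) exchanges a linear number of messages per phase, versus quadratic for all-to-all voting.

```python
def bft_parameters(f):
    """Standard partially synchronous BFT sizing for f Byzantine faults (illustrative)."""
    n = 3 * f + 1                    # minimum replicas to tolerate f Byzantine faults
    quorum = 2 * f + 1               # any two quorums intersect in >= f + 1 replicas
    leader_based_msgs = 2 * (n - 1)  # leader -> all, all -> leader: linear per phase
    all_to_all_msgs = n * (n - 1)    # every replica votes to every other: quadratic
    return n, quorum, leader_based_msgs, all_to_all_msgs

for f in (1, 3, 10):
    n, q, lin, quad = bft_parameters(f)
    print(f"f={f}: n={n}, quorum={q}, leader-based msgs/phase={lin}, all-to-all={quad}")
```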
no code implementations • 17 Feb 2021 • Dan Alistarh, Rati Gelashvili, Joel Rybicki
Let $G$ be a graph on $n$ nodes.
Distributed, Parallel, and Cluster Computing • Data Structures and Algorithms
no code implementations • 4 Dec 2019 • Rati Gelashvili, Nir Shavit, Aleksandar Zlateski
Fast convolutions via transforms, either Winograd or FFT, have emerged as a preferred way of computing convolutional layers, as they greatly reduce the number of required operations.
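A minimal sketch of why transform-based convolution saves operations, using NumPy's FFT rather than the paper's implementation: by the convolution theorem, convolution in the spatial domain becomes pointwise multiplication in the frequency domain (Winograd applies the same idea with smaller transforms tuned to small filters).

```python
import numpy as np

def fft_conv1d(x, w):
    """Full 1D linear convolution of x with w via zero-padded FFTs."""
    n = len(x) + len(w) - 1
    X = np.fft.rfft(x, n)           # O(n log n) instead of O(len(x) * len(w))
    W = np.fft.rfft(w, n)
    return np.fft.irfft(X * W, n)   # pointwise product, then inverse transform

x = np.random.default_rng(1).standard_normal(64)
w = np.array([0.25, 0.5, 0.25])
assert np.allclose(fft_conv1d(x, w), np.convolve(x, w))
```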