Search Results for author: Nikita Ivkin

Found 7 papers, 2 papers with code

Communication-Efficient Federated Learning with Sketching

no code implementations ICML 2020 Daniel Rothchild, Ashwinee Panda, Enayat Ullah, Nikita Ivkin, Vladimir Braverman, Joseph Gonzalez, Ion Stoica, Raman Arora

A key insight in the design of FedSketchedSGD is that, because the Count Sketch is linear, momentum and error accumulation can both be carried out within the sketch.
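
To make the linearity concrete, here is a toy Count Sketch in Python (the class, its parameters, and the hash construction are illustrative assumptions, not the paper's implementation): adding the tables of two sketches gives the same table as sketching the sum of the two vectors, which is what allows momentum and error accumulators to live in sketch space.

```python
import numpy as np

class CountSketch:
    """Toy Count Sketch: r independent rows, each hashing coordinates
    into c buckets with a random sign. Linear in its input vector."""

    def __init__(self, d, r=5, c=64, seed=0):
        rng = np.random.default_rng(seed)
        self.bucket = rng.integers(0, c, size=(r, d))      # bucket hash h_j(i)
        self.sign = rng.choice([-1.0, 1.0], size=(r, d))   # sign hash s_j(i)
        self.table = np.zeros((r, c))

    def accumulate(self, vec):
        # Scatter-add each signed coordinate into its bucket, row by row.
        for j in range(self.table.shape[0]):
            np.add.at(self.table[j], self.bucket[j], self.sign[j] * vec)
        return self

# Linearity check: sketch(g1) + sketch(g2) == sketch(g1 + g2)
d = 1000
g1, g2 = np.random.randn(d), np.random.randn(d)
s1 = CountSketch(d).accumulate(g1)
s2 = CountSketch(d).accumulate(g2)
s12 = CountSketch(d).accumulate(g1 + g2)
assert np.allclose(s1.table + s2.table, s12.table)
```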

Federated Learning

Sketch and Scale: Geo-distributed tSNE and UMAP

no code implementations 11 Nov 2020 Viska Wei, Nikita Ivkin, Vladimir Braverman, Alexander Szalay

Running machine learning analytics over geographically distributed datasets is a rapidly emerging problem, driven by data management policies that enforce privacy and data security.

Management

Practical and sample efficient zero-shot HPO

no code implementations 27 Jul 2020 Fela Winkelmolen, Nikita Ivkin, H. Furkan Bozkurt, Zohar Karnin

Zero-shot hyperparameter optimization (HPO) is a simple yet effective use of transfer learning for constructing a small list of hyperparameter (HP) configurations that complement each other.
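
As a rough illustration of how such a complementary list can be built, the snippet below greedily selects configurations that most improve the best-so-far score across a set of past tasks; the greedy criterion, the `scores` matrix, and all names here are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np

def greedy_portfolio(scores, k):
    """Greedily pick k configurations that complement each other.

    scores[i, t] = validation score of configuration i on past task t
    (higher is better). At each step, add the configuration that most
    raises the average best-so-far score across tasks.
    """
    n_configs, n_tasks = scores.shape
    best_so_far = np.full(n_tasks, -np.inf)
    portfolio = []
    for _ in range(k):
        gains = np.maximum(scores, best_so_far).mean(axis=1)
        gains[portfolio] = -np.inf          # never pick a config twice
        pick = int(np.argmax(gains))
        portfolio.append(pick)
        best_so_far = np.maximum(best_so_far, scores[pick])
    return portfolio

# Example: 100 candidate configurations evaluated on 20 past tasks.
scores = np.random.rand(100, 20)
print(greedy_portfolio(scores, k=5))
```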

Hyperparameter Optimization, Transfer Learning

FetchSGD: Communication-Efficient Federated Learning with Sketching

no code implementations15 Jul 2020 Daniel Rothchild, Ashwinee Panda, Enayat Ullah, Nikita Ivkin, Ion Stoica, Vladimir Braverman, Joseph Gonzalez, Raman Arora

A key insight in the design of FetchSGD is that, because the Count Sketch is linear, momentum and error accumulation can both be carried out within the sketch.
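
A minimal sketch of what "momentum and error accumulation within the sketch" can look like on the server side, assuming sketches are represented as plain numpy tables that add linearly; the update rule, constants, and names below are illustrative placeholders rather than FetchSGD's exact algorithm.

```python
import numpy as np

def server_step(client_sketches, momentum, error, lr=0.1, beta=0.9):
    """One aggregation round carried out entirely in sketch space.

    Each argument is a Count Sketch table (an r-by-c numpy array).
    Because the sketch is linear, averaging client sketches, applying
    momentum, and keeping an error accumulator are all just sums of
    tables; only the final unsketch/top-k step (not shown) leaves
    sketch space.
    """
    grad_sketch = np.mean(client_sketches, axis=0)   # aggregate client sketches
    momentum = beta * momentum + grad_sketch         # momentum kept in sketch space
    error = error + lr * momentum                    # error feedback kept in sketch space
    # A real implementation would now recover the heavy coordinates
    # from `error`, apply them to the model, and subtract them back.
    return momentum, error

# Example with 4 clients and a 5 x 64 sketch table.
r, c = 5, 64
clients = [np.random.randn(r, c) for _ in range(4)]
m, e = np.zeros((r, c)), np.zeros((r, c))
m, e = server_step(clients, m, e)
```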

Federated Learning

Streaming Quantiles Algorithms with Small Space and Update Time

1 code implementation 29 Jun 2019 Nikita Ivkin, Edo Liberty, Kevin Lang, Zohar Karnin, Vladimir Braverman

Approximating quantiles and distributions over streaming data has been studied for roughly two decades now.
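
For intuition, here is a highly simplified compactor-style quantile sketch in the spirit of this line of work; the capacity, promotion rule, and class name are illustrative assumptions, not the construction analyzed in the paper.

```python
import random

class SimpleQuantileSketch:
    """Simplified compactor-based streaming quantile sketch.

    Each level holds items of weight 2**level; when a level overflows,
    every other item of its sorted buffer (random offset) is promoted
    one level up, keeping total space small while preserving weight.
    """

    def __init__(self, capacity=64):
        self.capacity = capacity
        self.levels = [[]]   # levels[h] holds items with weight 2**h

    def update(self, x):
        self.levels[0].append(x)
        h = 0
        while len(self.levels[h]) >= self.capacity:
            self.levels[h].sort()
            offset = random.randint(0, 1)
            promoted = self.levels[h][offset::2]   # keep every other item
            self.levels[h] = []
            if h + 1 == len(self.levels):
                self.levels.append([])
            self.levels[h + 1].extend(promoted)
            h += 1

    def quantile(self, q):
        # Weighted rank query over all retained items.
        weighted = [(x, 2 ** h) for h, lvl in enumerate(self.levels) for x in lvl]
        weighted.sort()
        total = sum(w for _, w in weighted)
        target, acc = q * total, 0
        for x, w in weighted:
            acc += w
            if acc >= target:
                return x
        return weighted[-1][0]

# Example: stream 100k values, ask for the approximate median.
sketch = SimpleQuantileSketch()
for _ in range(100_000):
    sketch.update(random.random())
print(sketch.quantile(0.5))   # close to 0.5
```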

Communication-efficient distributed SGD with Sketching

2 code implementations NeurIPS 2019 Nikita Ivkin, Daniel Rothchild, Enayat Ullah, Vladimir Braverman, Ion Stoica, Raman Arora

Large-scale distributed training of neural networks is often limited by network bandwidth, wherein the communication time overwhelms the local computation time.
