Search Results for author: Shahar Azulay

Found 2 papers, 0 papers with code

On the Implicit Bias of Initialization Shape: Beyond Infinitesimal Mirror Descent

no code implementations · 19 Feb 2021 · Shahar Azulay, Edward Moroshko, Mor Shpigel Nacson, Blake Woodworth, Nathan Srebro, Amir Globerson, Daniel Soudry

Recent work has highlighted the role of initialization scale in determining the structure of the solutions that gradient methods converge to.

Inductive Bias

Holdout SGD: Byzantine Tolerant Federated Learning

no code implementations · 11 Aug 2020 · Shahar Azulay, Lior Raz, Amir Globerson, Tomer Koren, Yehuda Afek

HoldOut SGD first randomly selects a set of workers that use their private data in order to propose gradient updates.
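The paper has no public code, but the step described above can be sketched as follows. This is a minimal illustration of the proposal phase only (random selection of workers that compute gradients on private data); all function and variable names here are hypothetical, not taken from the paper.

```python
import random
import numpy as np

def propose_updates(workers, data_shards, params, k, seed=0):
    # Sketch of HoldOut SGD's first step (illustrative names, not the
    # authors' code): randomly pick k workers; each proposes a gradient
    # update computed on its own private data shard only.
    rng = random.Random(seed)
    proposers = rng.sample(range(len(workers)), k)
    return {w: workers[w](params, data_shards[w]) for w in proposers}

# Toy usage: each worker computes a least-squares gradient on its shard.
def make_worker():
    def grad(params, shard):
        X, y = shard
        return 2 * X.T @ (X @ params - y) / len(y)
    return grad

rng = np.random.default_rng(0)
params = np.zeros(3)
shards = [(rng.normal(size=(8, 3)), rng.normal(size=8)) for _ in range(5)]
workers = [make_worker() for _ in range(5)]
proposals = propose_updates(workers, shards, params, k=3)
```

In the full method, a separate holdout set of workers would then vote on these proposals to filter out Byzantine updates; that voting step is not shown here.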

Federated Learning
