Search Results for author: Slavomir Hanzely

Found 2 papers, 0 papers with code

ZeroSARAH: Efficient Nonconvex Finite-Sum Optimization with Zero Full Gradient Computations

no code implementations 29 Sep 2021 Zhize Li, Slavomir Hanzely, Peter Richtárik

Avoiding full gradient computations (which are time-consuming) is important in many applications, since the number of data samples $n$ is usually very large.
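To illustrate why full gradients are costly in the finite-sum setting, here is a minimal sketch on a toy least-squares objective: a full-gradient step touches all $n$ samples, while a minibatch stochastic step touches only the batch. This is a hypothetical illustration of the cost gap, not the ZeroSARAH algorithm from the paper.

```python
import numpy as np

# Toy finite-sum objective f(x) = (1/n) sum_i (a_i^T x - b_i)^2
# (hypothetical example; not the paper's method).
rng = np.random.default_rng(0)
n, d = 10_000, 20
A = rng.standard_normal((n, d))
b = A @ rng.standard_normal(d)

def full_gradient(x):
    # Touches all n samples: O(n d) work per call.
    return 2.0 / n * A.T @ (A @ x - b)

def minibatch_gradient(x, batch):
    # Touches only |batch| samples: O(|batch| d) work per call.
    Ab = A[batch]
    return 2.0 / len(batch) * Ab.T @ (Ab @ x - b[batch])

x = np.zeros(d)
lr, batch_size = 0.01, 64
for _ in range(500):
    idx = rng.choice(n, size=batch_size, replace=False)
    x -= lr * minibatch_gradient(x, idx)

# The stochastic run above evaluates 500 * 64 sample gradients in total,
# whereas a single full-gradient step already costs n = 10_000.
```

The point of variance-reduced methods in this line of work is to retain fast convergence while keeping per-iteration cost at the minibatch level.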

Federated Learning

Adaptive Learning of the Optimal Batch Size of SGD

no code implementations 3 May 2020 Motasem Alfarra, Slavomir Hanzely, Alyazeed Albasyoni, Bernard Ghanem, Peter Richtárik

Recent advances in the theoretical understanding of SGD led to a formula for the optimal batch size minimizing the number of effective data passes, i.e., the number of iterations times the batch size.
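The quantity minimized above can be sketched directly from its definition: effective data passes are iterations times batch size, here expressed relative to the dataset size $n$. This helper is an illustrative assumption of mine, not the paper's optimal-batch-size formula.

```python
def effective_data_passes(iterations: int, batch_size: int, n: int) -> float:
    """Effective data passes as defined in the abstract:
    iterations * batch_size, measured in units of one pass
    over all n samples (illustrative helper)."""
    return iterations * batch_size / n

# e.g. 5_000 iterations at batch size 64 over n = 100_000 samples:
passes = effective_data_passes(5_000, 64, 100_000)  # -> 3.2
```

Under this metric, a larger batch is only worthwhile if it reduces the iteration count at least proportionally, which is what motivates adapting the batch size during training.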
