1 code implementation • 5 Mar 2024 • Sayantan Choudhury, Nazarii Tupitsa, Nicolas Loizou, Samuel Horvath, Martin Takac, Eduard Gorbunov
Adaptive methods are extremely popular in machine learning as they make learning rate tuning less expensive.
1 code implementation • 5 Feb 2024 • Xingyu Qu, Samuel Horvath
Recent studies suggest that with sufficiently wide models, most SGD solutions can, up to permutation, converge into the same basin.
no code implementations • 25 Dec 2023 • Vincent Plassier, Nikita Kotelevskii, Aleksandr Rubashevskii, Fedor Noskov, Maksim Velikanov, Alexander Fishkov, Samuel Horvath, Martin Takac, Eric Moulines, Maxim Panov
Conformal Prediction (CP) stands out as a robust framework for uncertainty quantification, which is crucial for ensuring the reliability of predictions.
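For background on the CP framework mentioned above, here is a minimal sketch of split conformal prediction for regression. It is generic illustration only, not this paper's method; the polynomial model and miscoverage level alpha are assumptions.

```python
# Split conformal prediction (generic sketch, not the paper's algorithm).
import numpy as np

rng = np.random.default_rng(0)
n = 1000
x = rng.uniform(-3, 3, n)
y = np.sin(x) + 0.2 * rng.standard_normal(n)

# Split the data: fit a simple model on one half, calibrate on the other.
fit, cal = np.arange(n // 2), np.arange(n // 2, n)
coef = np.polyfit(x[fit], y[fit], deg=5)
pred = lambda t: np.polyval(coef, t)

# Calibration: take the ceil((1 - alpha)(n_cal + 1))/n_cal quantile
# of absolute residuals on the held-out calibration set.
alpha = 0.1
scores = np.abs(y[cal] - pred(x[cal]))
level = np.ceil((1 - alpha) * (len(cal) + 1)) / len(cal)
q = np.quantile(scores, level)

# Prediction interval with ~90% marginal coverage for a new point.
x_new = 1.0
print(pred(x_new) - q, pred(x_new) + q)
```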
no code implementations • 28 Aug 2023 • Samuel Horvath, Stefanos Laskaridis, Shashank Rajput, Hongyi Wang
We further apply our technique to DNNs and empirically show that Maestro extracts lower-footprint models that preserve performance while enabling a graceful accuracy-latency tradeoff for deployment to devices of different capabilities.
no code implementations • 11 Apr 2023 • Xiangjian Hou, Sarit Khirirat, Mohammad Yaqub, Samuel Horvath
Our findings reveal a direct correlation between the optimal number of local steps, the number of communication rounds, and a set of variables, e.g., the DP privacy budget and other problem parameters, specifically in the context of strongly convex optimization.
1 code implementation • 7 Aug 2022 • Samuel Horvath, Malik Shahid Sultan, Hernando Ombao
It helps answer the question of whether one time series is useful in forecasting another.
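As a point of reference, the classical linear version of this question is the Granger causality test. The sketch below uses the standard statsmodels test on synthetic data; it illustrates the concept only and is not the neural approach studied in the paper.

```python
# Classical (linear) Granger causality check -- illustrative only.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
n = 500
x = rng.standard_normal(n)
# y depends on lagged x, so x should "Granger-cause" y.
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.8 * x[t - 1] + 0.1 * rng.standard_normal()

# Column order matters: the test asks whether the SECOND column
# helps forecast the FIRST beyond the first's own history.
data = np.column_stack([y, x])
res = grangercausalitytests(data, maxlag=2)
# Small p-values for the F-tests indicate predictive usefulness.
```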
2 code implementations • 14 Jul 2021 • Jianyu Wang, Zachary Charles, Zheng Xu, Gauri Joshi, H. Brendan McMahan, Blaise Aguera y Arcas, Maruan Al-Shedivat, Galen Andrew, Salman Avestimehr, Katharine Daly, Deepesh Data, Suhas Diggavi, Hubert Eichner, Advait Gadhikar, Zachary Garrett, Antonious M. Girgis, Filip Hanzely, Andrew Hard, Chaoyang He, Samuel Horvath, Zhouyuan Huo, Alex Ingerman, Martin Jaggi, Tara Javidi, Peter Kairouz, Satyen Kale, Sai Praneeth Karimireddy, Jakub Konecny, Sanmi Koyejo, Tian Li, Luyang Liu, Mehryar Mohri, Hang Qi, Sashank J. Reddi, Peter Richtarik, Karan Singhal, Virginia Smith, Mahdi Soltanolkotabi, Weikang Song, Ananda Theertha Suresh, Sebastian U. Stich, Ameet Talwalkar, Hongyi Wang, Blake Woodworth, Shanshan Wu, Felix X. Yu, Honglin Yuan, Manzil Zaheer, Mi Zhang, Tong Zhang, Chunxiang Zheng, Chen Zhu, Wennan Zhu
Federated learning and analytics are a distributed approach for collaboratively learning models (or statistics) from decentralized data, motivated by and designed for privacy protection.
2 code implementations • NeurIPS 2021 • Samuel Horvath, Stefanos Laskaridis, Mario Almeida, Ilias Leontiadis, Stylianos I. Venieris, Nicholas D. Lane
FjORD alleviates the problem of client system heterogeneity by tailoring the model width to the client's capabilities.
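The sketch below conveys the width-scaling idea behind this (nested submodels obtained by keeping the first fraction of each layer's neurons). The layer sizes and helper function are illustrative assumptions, not the paper's implementation.

```python
# Width-tailored submodel extraction (illustrative sketch).
import numpy as np

def extract_submodel(weights, p):
    """Keep the first ceil(p * width) rows/columns of each layer,
    so every narrow submodel is nested inside the full model."""
    sub = []
    for W in weights:
        rows = max(1, int(np.ceil(p * W.shape[0])))
        cols = max(1, int(np.ceil(p * W.shape[1])))
        sub.append(W[:rows, :cols])
    return sub

full = [np.random.randn(64, 32), np.random.randn(64, 64)]
half = extract_submodel(full, p=0.5)  # a low-tier client trains this
print([W.shape for W in half])        # [(32, 16), (32, 32)]
```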
1 code implementation • NeurIPS 2021 • Wenlin Chen, Samuel Horvath, Peter Richtarik
We show that importance can be measured using only the norm of the update and give a formula for optimal client participation.
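A hedged sketch of the norm-proportional idea: sample clients with probability proportional to their update norms, capped at 1, so that on average a fixed number participate. The capping loop below is a simplified illustration, not the authors' exact formula.

```python
# Norm-proportional client sampling probabilities (simplified sketch).
import numpy as np

def sampling_probabilities(update_norms, m):
    """Probabilities p_i proportional to ||U_i||, rescaled so that
    sum(p) = m (expected participants) and each p_i <= 1."""
    p = np.asarray(update_norms, dtype=float)
    p = m * p / p.sum()
    # Redistribute excess mass from clients whose probability exceeds 1.
    while (p > 1).any():
        over = p >= 1
        p[over] = 1.0
        remaining = m - over.sum()
        p[~over] *= remaining / p[~over].sum()
    return p

norms = np.array([5.0, 0.5, 0.2, 0.1, 0.05])
print(sampling_probabilities(norms, m=2))  # sums to 2, largest capped at 1
```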
no code implementations • 27 May 2019 • Samuel Horvath, Chen-Yu Ho, Ludovit Horvath, Atal Narayan Sahu, Marco Canini, Peter Richtarik
Our technique is applied individually to all entries of the to-be-compressed update vector and works by randomized rounding to the nearest (negative or positive) power of two, which can be computed in a "natural" way by ignoring the mantissa.
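The following is a minimal re-implementation of the rounding scheme described above: each coordinate is randomly rounded to one of its two neighboring powers of two, with probabilities chosen so the compressed value is unbiased. It is an illustrative sketch, not the authors' code (which exploits the floating-point exponent directly).

```python
# Randomized rounding to the nearest power of two (unbiased sketch).
import numpy as np

def natural_compression(v, rng=np.random.default_rng()):
    v = np.asarray(v, dtype=float)
    sign = np.sign(v)
    a = np.abs(v)
    out = np.zeros_like(a)
    nz = a > 0
    low = 2.0 ** np.floor(np.log2(a[nz]))  # nearest power of two below
    # Round up to 2*low with probability (a - low)/low, else down to low;
    # then E[C(v)] = low + (a - low) = a, i.e. the compressor is unbiased.
    prob_up = (a[nz] - low) / low
    up = rng.random(low.shape) < prob_up
    out[nz] = np.where(up, 2 * low, low)
    return sign * out

v = np.array([3.0, -0.3, 1.0, 10.0])
print(natural_compression(v))  # e.g. [ 4.   -0.25  1.    8.  ]
```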
no code implementations • 24 Jan 2019 • Dmitry Kovalev, Samuel Horvath, Peter Richtarik
A key structural element of both methods is an outer loop: at the start of each outer iteration, a full pass over the training data computes the exact gradient, which is then used to construct a variance-reduced estimator of the gradient.
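The loopless variant proposed in this line of work replaces that outer loop with a coin flip: with small probability, the reference point and its full gradient are refreshed. A minimal sketch is below; the ridge-regression objective, step size, and refresh probability are illustrative assumptions.

```python
# Loopless SVRG-style updates (sketch): the outer loop is a coin flip.
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 10
A, b = rng.standard_normal((n, d)), rng.standard_normal(n)

def grad_i(x, i):  # gradient of f_i(x) = 0.5 * (a_i @ x - b_i)^2
    return A[i] * (A[i] @ x - b[i])

def full_grad(x):
    return A.T @ (A @ x - b) / n

x = np.zeros(d)
w, gw = x.copy(), full_grad(x)  # reference point and its full gradient
lr, p = 0.01, 1.0 / n
for _ in range(5000):
    i = rng.integers(n)
    g = grad_i(x, i) - grad_i(w, i) + gw  # unbiased, variance-reduced
    x -= lr * g
    if rng.random() < p:                  # coin flip replaces the outer loop
        w, gw = x.copy(), full_grad(x)
print(np.linalg.norm(full_grad(x)))       # should be near zero
```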