no code implementations • 13 Feb 2024 • Xiangyu Chang, Sk Miraj Ahmed, Srikanth V. Krishnamurthy, Basak Guler, Ananthram Swami, Samet Oymak, Amit K. Roy-Chowdhury
The key premise of federated learning (FL) is to train ML models across a diverse set of data-owners (clients), without exchanging local data.
no code implementations • 6 Jan 2024 • Xiangyu Chang, Sk Miraj Ahmed, Srikanth V. Krishnamurthy, Basak Guler, Ananthram Swami, Samet Oymak, Amit K. Roy-Chowdhury
Parameter-efficient tuning (PET) methods such as LoRA, Adapter, and Visual Prompt Tuning (VPT) have found success in enabling adaptation to new domains by tuning small modules within a transformer model.
no code implementations • 23 Dec 2021 • Irem Ergun, Hasin Us Sami, Basak Guler
Secure aggregation is a popular protocol in privacy-preserving federated learning, which allows model aggregation without revealing the individual models in the clear.
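The masking idea behind secure aggregation can be illustrated with a toy sketch. This is not the paper's protocol: it is a minimal pairwise additive-masking example (masks here come from a shared seed purely for illustration; a real protocol derives them via pairwise key agreement and handles dropouts), showing how each client's masked update hides its model while the masks cancel in the server's sum.

```python
import random

def pairwise_masks(num_clients, dim, seed=0):
    # One shared random mask vector per client pair. Hypothetical setup:
    # a real protocol would derive these from pairwise key agreement.
    rng = random.Random(seed)
    return {(i, j): [rng.randrange(0, 1 << 16) for _ in range(dim)]
            for i in range(num_clients) for j in range(i + 1, num_clients)}

def masked_update(i, model, masks, num_clients, modulus=1 << 16):
    # Client i adds masks shared with higher-indexed peers and subtracts
    # masks shared with lower-indexed peers (mod p), so every pairwise
    # mask appears once with + and once with - across all clients.
    out = list(model)
    for j in range(num_clients):
        if j == i:
            continue
        m = masks[(min(i, j), max(i, j))]
        sign = 1 if i < j else -1
        out = [(x + sign * y) % modulus for x, y in zip(out, m)]
    return out

# Three clients with toy 4-dimensional "models".
models = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]]
masks = pairwise_masks(3, 4)
masked = [masked_update(i, models[i], masks, 3) for i in range(3)]

# The server sums only masked updates; the masks cancel mod p,
# leaving exactly the sum of the plaintext models.
aggregate = [sum(col) % (1 << 16) for col in zip(*masked)]
print(aggregate)  # → [15, 18, 21, 24]
```

Each `masked_update` on its own is statistically independent of the client's model, yet the aggregate equals the plaintext sum, which is the property the snippet above refers to as aggregation "without revealing the individual models in the clear".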
no code implementations • 29 Sep 2021 • Jinhyun So, Chaoyang He, Chien-Sheng Yang, Songze Li, Qian Yu, Ramy E. Ali, Basak Guler, Salman Avestimehr
We also demonstrate that, unlike existing schemes, LightSecAgg can be applied to secure aggregation in the asynchronous FL setting.
no code implementations • 7 Jun 2021 • Jinhyun So, Ramy E. Ali, Basak Guler, Jiantao Jiao, Salman Avestimehr
In fact, we show that the conventional random user selection strategies in FL leak users' individual models within a number of rounds that is linear in the number of users.
no code implementations • 22 Feb 2021 • Basak Guler, Aylin Yener
The potential environmental impact of machine learning in large-scale wireless networks is a major challenge for the sustainability of future smart ecosystems.
no code implementations • 10 Feb 2021 • Basak Guler, Aylin Yener
This paper provides a first study of utilizing energy harvesting for sustainable machine learning in distributed networks.
no code implementations • NeurIPS 2020 • Jinhyun So, Basak Guler, A. Salman Avestimehr
We consider a collaborative learning scenario in which multiple data-owners wish to jointly train a logistic regression model, while keeping their individual datasets private from the other parties.
no code implementations • 21 Jul 2020 • Jinhyun So, Basak Guler, A. Salman Avestimehr
This presents a major challenge for the resilience of the model against adversarial (Byzantine) users, who can manipulate the global model by modifying their local models or datasets.
no code implementations • 11 Feb 2020 • Jinhyun So, Basak Guler, A. Salman Avestimehr
A major bottleneck in scaling federated learning to a large number of users is the overhead of secure model aggregation across many users.
no code implementations • 2 Feb 2019 • Jinhyun So, Basak Guler, A. Salman Avestimehr
How can a machine learning model be trained while keeping the data private and secure?