no code implementations • 21 Nov 2024 • Shourya Bose, Yijiang Li, Amy Van Sant, Yu Zhang, Kibaek Kim
The impact of increasing dataset heterogeneity in time series forecasting, while keeping dataset size and model architecture constant, is understudied.
no code implementations • 18 Oct 2024 • Guangji Bai, Yijiang Li, Zilinghan Li, Liang Zhao, Kibaek Kim
Large Language Models (LLMs) achieve state-of-the-art performance but are challenging to deploy due to their high computational and storage demands.
1 code implementation • 17 Sep 2024 • Zilinghan Li, Shilan He, Ze Yang, Minseok Ryu, Kibaek Kim, Ravi Madduri
Federated learning (FL) is a distributed machine learning paradigm enabling collaborative model training while preserving data privacy.
no code implementations • 12 Jul 2024 • Hendrik F. Hamann, Thomas Brunschwiler, Blazhe Gjorgiev, Leonardo S. A. Martins, Alban Puech, Anna Varbella, Jonas Weiss, Juan Bernabe-Moreno, Alexandre Blondin Massé, Seong Choi, Ian Foster, Bri-Mathias Hodge, Rishabh Jain, Kibaek Kim, Vincent Mai, François Mirallès, Martin De Montigny, Octavio Ramos-Leaños, Hussein Suprême, Le Xie, El-Nasser S. Youssef, Arnaud Zinflou, Alexander J. Belyi, Ricardo J. Bessa, Bishnu Prasad Bhattarai, Johannes Schmude, Stanislav Sobolevsky
Foundation models (FMs) currently dominate news headlines.
no code implementations • 16 May 2024 • Charikleia Iakovidou, Kibaek Kim
Federated learning (FL) was recently proposed to securely train models with data held at multiple locations ("clients") under the coordination of a central server.
no code implementations • 1 Apr 2024 • Shourya Bose, Yu Zhang, Kibaek Kim
The advent of smart meters has enabled pervasive collection of energy consumption data for training short-term load forecasting models.
2 code implementations • 28 Feb 2024 • Guangji Bai, Yijiang Li, Chen Ling, Kibaek Kim, Liang Zhao
The transformative impact of large language models (LLMs) like LLaMA and GPT on natural language processing is countered by their prohibitive computational demands.
no code implementations • 19 Feb 2024 • Zilinghan Li, Shilan He, Pranshu Chaturvedi, Volodymyr Kindratenko, Eliu A Huerta, Kibaek Kim, Ravi Madduri
Federated learning enables multiple data owners to collaboratively train robust machine learning models without transferring large or sensitive local datasets, sharing only the parameters of the locally trained models.
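To make the parameter-sharing pattern concrete, here is a minimal FedAvg-style sketch; the least-squares local objective, synthetic client data, and sample-count weighting are illustrative assumptions, not the method of any particular paper listed here.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training: a few gradient steps on a
    least-squares objective (illustrative stand-in for any model)."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fedavg_round(global_w, clients):
    """Server round: clients share only locally trained weights, never raw data."""
    updates = [local_update(global_w, X, y) for X, y in clients]
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    # Weight each client's model by its local sample count.
    return np.average(updates, axis=0, weights=sizes / sizes.sum())

rng = np.random.default_rng(0)
clients = [(rng.normal(size=(50, 3)), rng.normal(size=50)) for _ in range(4)]
w = np.zeros(3)
for _ in range(10):
    w = fedavg_round(w, clients)
```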
no code implementations • 11 Jan 2024 • Sihan Zeng, Youngdae Kim, Yuxuan Ren, Kibaek Kim
At the heart of power system operations, alternating current optimal power flow (ACOPF) determines the most economical generation of electric power subject to network-wide load requirements, and can be formulated as a highly structured non-convex quadratically constrained quadratic program (QCQP).
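For reference, the generic QCQP form alluded to is (standard notation, not the paper's exact formulation):

```latex
\min_{x \in \mathbb{R}^n} \; x^\top Q_0 x + c_0^\top x
\quad \text{s.t.} \quad
x^\top Q_i x + c_i^\top x + d_i \le 0, \quad i = 1, \dots, m.
```

Nonconvexity arises when some of the matrices Q_i are indefinite, as is the case for the quadratic power-flow constraints in ACOPF.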
no code implementations • 21 Nov 2023 • Shourya Bose, Yu Zhang, Kibaek Kim
The widespread adoption of smart meters provides access to detailed and localized load consumption data, suitable for training building-level load forecasting models.
1 code implementation • 26 Sep 2023 • Zilinghan Li, Pranshu Chaturvedi, Shilan He, Han Chen, Gagandeep Singh, Volodymyr Kindratenko, E. A. Huerta, Kibaek Kim, Ravi Madduri
Because of the disparity of computing resources among different clients (i.e., device heterogeneity), synchronous federated learning algorithms suffer from degraded efficiency while waiting for straggler clients.
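A generic remedy is to aggregate asynchronously rather than waiting on a full round; below is a minimal staleness-weighted server update in the spirit of FedAsync-style rules (the decay rule and alpha are illustrative assumptions, not this paper's algorithm).

```python
def async_aggregate(global_w, client_w, staleness, alpha=0.6):
    """Blend a client's freshly arrived model into the global model without
    waiting for stragglers; staler updates receive smaller mixing weights.
    The 1/(1+staleness) decay and alpha are illustrative choices."""
    mix = alpha / (1.0 + staleness)
    return (1.0 - mix) * global_w + mix * client_w
```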
no code implementations • 22 Sep 2023 • Shourya Bose, Kibaek Kim
The advent of smart meters has enabled pervasive collection of energy consumption data for training short-term load forecasting (STLF) models.
1 code implementation • 17 Aug 2023 • Zilinghan Li, Shilan He, Pranshu Chaturvedi, Trung-Hieu Hoang, Minseok Ryu, E. A. Huerta, Volodymyr Kindratenko, Jordan Fuhrman, Maryellen Giger, Ryan Chard, Kibaek Kim, Ravi Madduri
Cross-silo privacy-preserving federated learning (PPFL) is a powerful tool to collaboratively train robust and generalized machine learning (ML) models without sharing sensitive (e.g., healthcare or financial) local data.
no code implementations • 28 Feb 2023 • Minseok Ryu, Kibaek Kim
This paper considers distributed optimization (DO) where multiple agents cooperate to minimize a global objective function, expressed as a sum of local objectives, subject to some constraints.
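In symbols, the setting described is (standard notation; the constraint set is kept generic):

```latex
\min_{x} \; \sum_{i=1}^{N} f_i(x) \quad \text{s.t.} \quad x \in \mathcal{C},
```

where each local objective f_i is known only to agent i and \mathcal{C} collects the shared constraints.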
no code implementations • 18 Feb 2022 • Minseok Ryu, Kibaek Kim
Differential privacy (DP) techniques can be applied to the federated learning model to statistically guarantee data privacy against inference attacks on the communication among the learning agents.
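A minimal sketch of the idea, assuming the Gaussian mechanism applied to clipped model updates before communication (the clip norm and noise scale are illustrative, not values from the paper):

```python
import numpy as np

def privatize_update(update, clip_norm=1.0, noise_std=0.5,
                     rng=np.random.default_rng()):
    """Clip a client's model update to bound its sensitivity, then add
    Gaussian noise before it is communicated, so an observer of the
    messages gains only a DP-bounded amount of information about the
    local data."""
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / max(norm, 1e-12))
    return clipped + rng.normal(scale=noise_std, size=update.shape)
```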
1 code implementation • 8 Feb 2022 • Minseok Ryu, Youngdae Kim, Kibaek Kim, Ravi K. Madduri
Federated learning (FL) enables training models at different sites and sharing only the resulting weight updates, instead of transferring data to a central location for training as in classical machine learning.
no code implementations • 22 Oct 2021 • Sihan Zeng, Alyssa Kody, Youngdae Kim, Kibaek Kim, Daniel K. Molzahn
We train our RL policy using deep Q-learning and show that it can significantly accelerate convergence (up to a 59% reduction in the number of iterations compared to existing curvature-informed penalty parameter selection methods).
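A hypothetical, much-simplified rendering of such a scheme: tabular Q-learning that rescales the penalty parameter from the binned log-ratio of primal to dual residuals (the state binning, action set, and reward are assumptions for illustration, not the paper's design).

```python
import numpy as np

N_STATES = 7
ACTIONS = np.array([0.5, 1.0, 2.0])          # multiply rho by one of these
Q = np.zeros((N_STATES, len(ACTIONS)))

def state_of(primal_res, dual_res):
    """Bin the log-ratio of primal to dual residuals into a discrete state."""
    ratio = np.log10(max(primal_res, 1e-12) / max(dual_res, 1e-12))
    return int(np.clip(round(ratio) + N_STATES // 2, 0, N_STATES - 1))

def q_update(s, a, s_next, lr=0.1, gamma=0.95):
    """Reward of -1 per ADMM iteration, so the learned policy favors
    penalty updates that minimize the total iteration count."""
    Q[s, a] += lr * (-1.0 + gamma * Q[s_next].max() - Q[s, a])
```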
no code implementations • 11 Jun 2021 • Minseok Ryu, Kibaek Kim
Differential privacy (DP) techniques can be applied to the federated learning model to protect data privacy against inference attacks on the communication among the learning agents.