no code implementations • 25 May 2025 • Jingyuan Liu, Zeyu Zhang, Xuchuang Wang, Xutong Liu, John C. S. Lui, Mohammad Hajiesmaili, Carlee Joe-Wong
The key challenge in Off-ClusBand arises from data insufficiency for users: unlike the online case, in the offline case, we have a fixed, limited dataset to work from and thus must determine whether we have enough data to confidently cluster users together.
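A minimal sketch of the kind of data-sufficiency test this implies, assuming a Hoeffding-style confidence width (the rule and constants below are illustrative, not the paper's criterion):

```python
import numpy as np

def enough_data_to_cluster(rewards_u, rewards_v, delta=0.05):
    """Merge two users only if their offline samples are numerous enough
    for the comparison to be meaningful. Hypothetical Hoeffding-style
    rule, not Off-ClusBand's actual criterion."""
    n_u, n_v = len(rewards_u), len(rewards_v)
    if min(n_u, n_v) == 0:
        return False  # no data: cannot cluster confidently
    width = lambda n: np.sqrt(np.log(2.0 / delta) / (2.0 * n))
    gap = abs(np.mean(rewards_u) - np.mean(rewards_v))
    # cluster only when the means are indistinguishable at this sample size
    return gap <= width(n_u) + width(n_v)
```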
no code implementations • 7 Apr 2025 • Haoran Zhang, Zejun Gong, Zekai Li, Marie Siew, Carlee Joe-Wong, Rachid El-Azouzi
In this work, we develop a novel convergence analysis of MMFL with arbitrary client sampling methods, theoretically demonstrating the strengths and limitations of previous well-established gradient-based methods.
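For concreteness, a sketch of the gradient-based client-sampling baseline referred to here; the probability floor is our own illustrative safeguard, not part of the paper:

```python
import numpy as np

def gradient_based_sampling(grad_norms, floor=0.05):
    """Sample clients for a task with probability proportional to their
    estimated gradient norms -- the well-established baseline whose
    strengths and limits the convergence analysis characterizes."""
    p = np.asarray(grad_norms, dtype=float)
    p = np.maximum(p / p.sum(), floor)  # floor keeps every client reachable
    return p / p.sum()

# e.g., probs = gradient_based_sampling([0.9, 0.1, 0.4]), then draw with
# np.random.choice(len(probs), size=k, replace=False, p=probs)
```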
1 code implementation • 9 Mar 2025 • Tianshu Huang, Arjun Ramesh, Emily Ruppel, Nuno Pereira, Anthony Rowe, Carlee Joe-Wong
Accurately estimating workload runtime is a longstanding goal in computer systems, and plays a key role in efficient resource provisioning, latency minimization, and various other system management tasks.
no code implementations • 8 Feb 2025 • Hanqing Yang, Jingdi Chen, Marie Siew, Tania Lorido-Botran, Carlee Joe-Wong
Instead of fully sharing information from all past experiences, DAMCS introduces a multi-modal memory system organized as a hierarchical knowledge graph and a structured communication protocol to optimize agent cooperation.
no code implementations • 17 Jan 2025 • Ishank Juneja, Carlee Joe-Wong, Osman Yağan
We introduce the Pairwise-Elimination (PE) algorithm for the known reference arm variant and generalize PE to PE-CS for the subsidized best reward variant.
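A hedged sketch of a PE-style elimination loop against a known reference mean (generic confidence radii; not the paper's exact algorithm):

```python
import numpy as np

def pairwise_elimination(arms, ref_mean, horizon, seed=0):
    """Illustrative PE-style loop: test arms one at a time against the
    known reference mean, committing to an arm once its lower confidence
    bound clears the reference and discarding it once its upper bound
    falls below. A sketch under generic constants."""
    rng = np.random.default_rng(seed)
    for i, pull in enumerate(arms):
        n, mean = 0, 0.0
        for _ in range(horizon):
            n += 1
            mean += (pull(rng) - mean) / n
            radius = np.sqrt(2.0 * np.log(horizon) / n)
            if mean - radius > ref_mean:
                return i          # provably better than the reference
            if mean + radius < ref_mean:
                break             # provably worse: eliminate, try next arm
    return None                   # fall back to the reference arm

# best = pairwise_elimination([lambda r: r.binomial(1, 0.3),
#                              lambda r: r.binomial(1, 0.8)],
#                             ref_mean=0.5, horizon=5000)
```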
no code implementations • 23 Dec 2024 • Jong-Ik Park, Carlee Joe-Wong
Federated learning (FL) addresses privacy concerns in training language models by enabling multiple clients to contribute to training without sending their data to others.
no code implementations • 20 Dec 2024 • Siddharth Ambekar, Yuhang Yao, Ryan Li, Carlee Joe-Wong
Cross-client edges arise naturally in such cases and present an interesting challenge to federated training methods, as training a graph model at one client requires feature information of nodes on the other end of cross-client edges.
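As a sketch of why cross-client edges complicate training, here is a single client's aggregation step that must fall back to cached remote features; the caching scheme is an assumption, not the paper's method:

```python
import numpy as np

def aggregate_with_cross_client_edges(features, edges, remote_cache):
    """One mean-aggregation step at a single client. For an edge whose far
    endpoint lives on another client, the neighbor feature is unavailable
    locally and is read from a (possibly stale) cache shared by that client."""
    agg = {v: [x.copy()] for v, x in features.items()}
    for u, v in edges:                       # u is local; v may be remote
        if u not in agg:
            continue
        x_v = features.get(v)
        if x_v is None:                      # cross-client edge
            x_v = remote_cache.get(v)        # fall back to cached embedding
        if x_v is not None:
            agg[u].append(x_v)
    return {v: np.mean(xs, axis=0) for v, xs in agg.items()}
```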
no code implementations • 2 Dec 2024 • Pamely Zantou, Blessed Guda, Bereket Retta, Gladys Inabeza, Carlee Joe-Wong, Assane Gueye
Birth asphyxia (BA) is one of the primary causes of neonatal death worldwide.
no code implementations • 24 Oct 2024 • Jong-Ik Park, Srinivasa Pranav, José M. F. Moura, Carlee Joe-Wong
Foundation models are now a major focus of leading technology organizations due to their ability to generalize across diverse tasks.
no code implementations • 24 Oct 2024 • I-Cheng Lin, Osman Yagan, Carlee Joe-Wong
Federated learning has recently gained popularity as a framework for distributed clients to collaboratively train a machine learning model using local data.
no code implementations • 21 Oct 2024 • Baris Askin, Pranay Sharma, Gauri Joshi, Carlee Joe-Wong
We study a federated version of multi-objective optimization (MOO), where a single model is trained to optimize multiple objective functions.
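One way to make the multi-objective setting concrete is the classic two-objective min-norm (MGDA-style) combination below; whether the paper's federated method uses this exact step is not implied:

```python
import numpy as np

def two_objective_direction(g1, g2):
    """Min-norm point in the convex hull of two objectives' gradients (the
    classic MGDA closed form); stepping against it decreases both
    objectives whenever a common descent direction exists."""
    diff = g1 - g2
    denom = float(diff @ diff)
    if denom == 0.0:
        return g1.copy()                     # gradients coincide
    gamma = float(np.clip(((g2 - g1) @ g2) / denom, 0.0, 1.0))
    return gamma * g1 + (1.0 - gamma) * g2
```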
no code implementations • 21 Oct 2024 • Jingdi Chen, Hanhan Zhou, Yongsheng Mei, Carlee Joe-Wong, Gina Adam, Nathaniel D. Bastian, Tian Lan
Deep Reinforcement Learning (DRL) algorithms have achieved great success on many challenging tasks, but their black-box nature hinders interpretability and real-world applicability, making it difficult for human experts to understand DRL policies.
no code implementations • 18 Oct 2024 • Baran Atalar, Carlee Joe-Wong
We consider the contextual combinatorial bandit setting in which, in each round, the learning agent (e.g., a recommender system) selects a subset of "arms" (e.g., products) and observes rewards both for the individual base arms, which are a function of known features (the "context"), and for the super arm (the selected subset), which is a function of the base arm rewards.
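A minimal sketch of a linear-UCB-style super-arm selection in this setting (plain top-k oracle; names and constants are assumptions):

```python
import numpy as np

def select_super_arm(contexts, theta_hat, A_inv, k, alpha=1.0):
    """Score each base arm with a linear UCB built from its context and
    take the top-k as the super arm. A generic sketch of this selection
    step, not the paper's algorithm."""
    scores = np.array([x @ theta_hat + alpha * np.sqrt(x @ A_inv @ x)
                       for x in contexts])
    return np.argsort(scores)[-k:]           # indices forming the super arm
```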
1 code implementation • 8 Oct 2024 • Yuhang Yao, Yuan Li, Xinyi Fan, Junhao Li, Kay Liu, Weizhao Jin, Yu Yang, Srivatsan Ravi, Philip S. Yu, Carlee Joe-Wong
Federated graph learning is an emerging field with significant practical challenges.
1 code implementation • 26 Sep 2024 • Ming Xiang, Stratis Ioannidis, Edmund Yeh, Carlee Joe-Wong, Lili Su
Addressing intermittent client availability is critical for the real-world deployment of federated learning algorithms.
no code implementations • 24 Sep 2024 • Yuhang Yao, Jianyi Zhang, Junda Wu, Chengkai Huang, Yu Xia, Tong Yu, Ruiyi Zhang, Sungchul Kim, Ryan Rossi, Ang Li, Lina Yao, Julian McAuley, Yiran Chen, Carlee Joe-Wong
Large language models are rapidly gaining popularity and have been widely adopted in real-world applications.
1 code implementation • 21 Sep 2024 • Blessed Guda, Gabrial Zencha A., Lawrence Francis, Carlee Joe-Wong
Large language models (LLMs) have brought about substantial advancements in the field of Question Answering (QA) systems.
no code implementations • 14 Jun 2024 • Jong-Ik Park, Carlee Joe-Wong
To address this need, this paper introduces Federated Learning with Flexible Architectures (FedFA), an FL training algorithm that allows clients to train models of different widths and depths.
no code implementations • 1 Jun 2024 • Baris Askin, Pranay Sharma, Carlee Joe-Wong, Gauri Joshi
Much of the existing work in FL focuses on efficiently learning a model for a single task.
no code implementations • 22 Apr 2024 • Marie Siew, Haoran Zhang, Jong-Ik Park, Yuezhou Liu, Yichen Ruan, Lili Su, Stratis Ioannidis, Edmund Yeh, Carlee Joe-Wong
We show how our fairness-based learning and incentive mechanisms impact training convergence and finally evaluate our algorithm with multiple sets of learning tasks on real world datasets.
no code implementations • 17 Apr 2024 • Xuechen Zhang, Zijian Huang, Ege Onur Taga, Carlee Joe-Wong, Samet Oymak, Jiasi Chen
Recent successes in natural language processing have led to the proliferation of large language models (LLMs) by multiple providers.
no code implementations • 15 Apr 2024 • Ming Xiang, Stratis Ioannidis, Edmund Yeh, Carlee Joe-Wong, Lili Su
It consists of a parameter server and a possibly large collection of clients (e.g., in cross-device federated learning) that may operate in congested and changing environments.
no code implementations • 27 Mar 2024 • Yi Hu, Jinhang Zuo, Alanis Zhao, Bob Iannucci, Carlee Joe-Wong
Foundation models (FMs) emerge as a promising solution to harness distributed and diverse environmental data by leveraging prior knowledge to understand the complicated temporal and spatial correlations within heterogeneous datasets.
1 code implementation • 25 Mar 2024 • Hanqing Yang, Marie Siew, Carlee Joe-Wong
In this paper, we present a case study that employs LLM agents to mimic the behaviors and thermal preferences of various population groups (e.g., young families, the elderly) in a shopping mall.
no code implementations • 20 Oct 2023 • Weijie Liu, Xiaoxi Zhang, Jingpu Duan, Carlee Joe-Wong, Zhi Zhou, Xu Chen
Federated Learning (FL) is a distributed learning paradigm that can coordinate heterogeneous edge devices to perform model training without sharing private data.
no code implementations • 17 Oct 2023 • Taejin Kim, Jiarui Li, Shubhranshu Singh, Nikhil Madaan, Carlee Joe-Wong
Our research, initially spurred by test-time evasion attacks, investigates the intersection of adversarial training and backdoor attacks within federated learning, introducing Adversarial Robustness Unhardening (ARU).
no code implementations • 19 Aug 2023 • Yi Hu, Jinhang Zuo, Bob Iannucci, Carlee Joe-Wong
Internet of Things (IoT) technologies have enabled numerous data-driven mobile applications and have the potential to significantly improve environmental monitoring and hazard warnings through the deployment of a network of IoT sensors.
1 code implementation • 7 Aug 2023 • Jingdi Chen, Tian Lan, Carlee Joe-Wong
This result enables us to recast multi-agent communication into a novel online clustering problem over the local observations at each agent, with messages as cluster labels and the upper bound on the return gap as clustering loss.
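A minimal stand-in for this idea: quantize each local observation to its nearest centroid and transmit the cluster label as the message, maintaining centroids with a plain running mean (the paper's online clustering scheme may differ):

```python
import numpy as np

def message_from_observation(obs, centroids, counts):
    """Map the local observation to its nearest centroid, update that
    centroid online, and return the cluster label as the (few-bit)
    message. Illustrative sketch only."""
    label = int(np.argmin(np.linalg.norm(centroids - obs, axis=1)))
    counts[label] += 1
    centroids[label] += (obs - centroids[label]) / counts[label]
    return label
```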
1 code implementation • 8 Jun 2023 • Shanshan Han, Baturalp Buyukates, Zijian Hu, Han Jin, Weizhao Jin, Lichao Sun, Xiaoyang Wang, Wenxuan Wu, Chulin Xie, Yuhang Yao, Kai Zhang, Qifan Zhang, Yuhui Zhang, Carlee Joe-Wong, Salman Avestimehr, Chaoyang He
This paper introduces FedSecurity, an end-to-end benchmark that serves as a supplementary component of the FedML library for simulating adversarial attacks and corresponding defense mechanisms in Federated Learning (FL).
no code implementations • 1 Jun 2023 • Ming Xiang, Stratis Ioannidis, Edmund Yeh, Carlee Joe-Wong, Lili Su
Specifically, in each round $t$, the link between the PS and client $i$ is active with probability $p_i^t$, which is unknown to both the PS and the clients.
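One plausible response to unknown availability, offered purely as a sketch of the problem rather than the paper's algorithm, is to reweight the updates that do arrive by each client's empirical availability:

```python
import numpy as np

def reweighted_average(arrived, avail_history):
    """Average the updates that arrived, upweighting clients whose
    estimated availability is low so that chronically unavailable clients
    are not drowned out. Assumes at least one update arrived."""
    total, z = None, 0.0
    for i, delta in arrived.items():
        p_hat = max(np.mean(avail_history[i]), 1e-3)  # empirical availability
        w = 1.0 / p_hat
        total = w * delta if total is None else total + w * delta
        z += w
    return total / z
```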
1 code implementation • 23 May 2023 • Yi Hu, Chaoran Zhang, Edward Andert, Harshul Singh, Aviral Shrivastava, James Laudon, Yanqi Zhou, Bob Iannucci, Carlee Joe-Wong
Careful placement of a computational application within a target device cluster is critical for achieving low application completion time.
2 code implementations • 20 Mar 2023 • Weizhao Jin, Yuhang Yao, Shanshan Han, Jiajun Gu, Carlee Joe-Wong, Srivatsan Ravi, Salman Avestimehr, Chaoyang He
Federated Learning trains machine learning models on distributed devices by aggregating local model updates instead of local data.
2 code implementations • 13 Nov 2022 • Yuhang Yao, Mohammad Mahdi Kamani, Zhongwei Cheng, Lin Chen, Carlee Joe-Wong, Tianqiang Liu
Much of the value that IoT (Internet-of-Things) devices bring to "smart" homes lies in their ability to automatically trigger other devices' actions: for example, a smart camera triggering a smart lock to unlock a door.
no code implementations • 28 Sep 2022 • Marie Siew, Shikhar Sharma, Zekai Li, Kun Guo, Chao Xu, Tania Lorido-Botran, Tony Q. S. Quek, Carlee Joe-Wong
In edge computing, users' service profiles are migrated due to user mobility.
1 code implementation • 17 Sep 2022 • Taejin Kim, Shubhranshu Singh, Nikhil Madaan, Carlee Joe-Wong
However, combining adversarial training with personalized federated learning frameworks increases relative internal attack robustness by 60% compared to federated adversarial training and performs well under limited system resources.
no code implementations • 6 Sep 2022 • Jinhang Zuo, Songwen Hu, Tong Yu, Shuai Li, Handong Zhao, Carlee Joe-Wong
To achieve this, the recommender system conducts conversations with users, eliciting their preferences for different items or item categories.
no code implementations • 31 Aug 2022 • Xutong Liu, Jinhang Zuo, Siwei Wang, Carlee Joe-Wong, John C. S. Lui, Wei Chen
Under this new condition, we propose a BCUCB-T algorithm with variance-aware confidence intervals and conduct regret analysis which reduces the $O(K)$ factor to $O(\log K)$ or $O(\log^2 K)$ in the regret bound, significantly improving the regret bounds for the above applications.
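The flavor of a variance-aware index can be seen in the generic empirical-Bernstein form below (constants are generic, not the BCUCB-T index itself):

```python
import numpy as np

def bernstein_index(mean, var, n, t):
    """Empirical-Bernstein upper confidence index: the exploration bonus
    scales with the empirical variance rather than the reward range,
    which is the kind of variance awareness that shrinks the O(K) factor."""
    log_t = np.log(max(t, 2))
    return mean + np.sqrt(2.0 * var * log_t / n) + 3.0 * log_t / n
```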
no code implementations • 24 May 2022 • Zifan Wang, Yuhang Yao, Chaoran Zhang, Han Zhang, Youjie Kang, Carlee Joe-Wong, Matt Fredrikson, Anupam Datta
Second, our analytical and empirical results demonstrate that feature attribution methods cannot capture the nonlinear effect of edge features, while existing subgraph explanation methods are not faithful.
2 code implementations • NeurIPS 2023 • Yuhang Yao, Weizhao Jin, Srivatsan Ravi, Carlee Joe-Wong
Methods for training models on graphs distributed across multiple clients have recently grown in popularity, due to the size of these graphs as well as regulations on keeping data where it is generated.
no code implementations • 11 Dec 2021 • Yichen Ruan, Carlee Joe-Wong
Traditionally, clustered federated learning groups clients with the same data distribution into a cluster, so that every client is uniquely associated with one data distribution and helps train a model for this distribution.
no code implementations • 11 Oct 2021 • Yucai Fan, Yuhang Yao, Carlee Joe-Wong
These works, however, do not fully address the challenge of flexibly assigning different importance to snapshots of the graph at different times, which, depending on the graph dynamics, may have more or less predictive power for the labels.
1 code implementation • 10 May 2021 • Jinhang Zuo, Carlee Joe-Wong
In doing so, the decision maker should learn the value of the resources allocated for each user from feedback on each user's received reward.
2 code implementations • 16 Dec 2020 • Yuhang Yao, Carlee Joe-Wong
We characterize the optimal decay rate for each cluster and propose a clustering method that achieves almost exact recovery of the true clusters.
1 code implementation • 5 Oct 2020 • Sheikh Shams Azam, Taejin Kim, Seyyedali Hosseinalipour, Carlee Joe-Wong, Saurabh Bagchi, Christopher Brinton
We study the problem of learning representations that are private yet informative, i.e., provide information about intended "ally" targets while hiding sensitive "adversary" attributes.
no code implementations • 17 Sep 2020 • Xuan Chen, Zifan Wang, Yucai Fan, Bonan Jin, Piotr Mardziel, Carlee Joe-Wong, Anupam Datta
Feature attribution has been a foundational building block for explaining input feature importance in supervised learning with deep neural networks (DNNs), but it faces new challenges when applied to deep Reinforcement Learning (RL). We propose a new approach to explaining deep RL actions by defining a class of "action reconstruction" functions that mimic the behavior of a network in deep RL.
no code implementations • 24 Jun 2020 • Jinhang Zuo, Xutong Liu, Carlee Joe-Wong, John C. S. Lui, Wei Chen
In this paper, we introduce a new Online Competitive Influence Maximization (OCIM) problem, where two competing items (e.g., products, news stories) propagate in the same network and influence probabilities on edges are unknown.
no code implementations • 12 Jun 2020 • Yichen Ruan, Xiaoxi Zhang, Shu-Che Liang, Carlee Joe-Wong
Traditional federated learning algorithms impose strict requirements on the participation rates of devices, which limit the potential reach of federated learning.
no code implementations • 17 Apr 2020 • Yuwei Tu, Yichen Ruan, Su Wang, Satyavrat Wagle, Christopher G. Brinton, Carlee Joe-Wong
Unlike traditional federated learning frameworks, our method enables devices to offload their data processing tasks to each other, with these decisions determined through a convex data transfer optimization problem that trades off the costs of processing, offloading, and discarding data points at each device.
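A toy two-device instance of such a transfer trade-off, written as a linear program; all costs, capacities, and the topology are made up for illustration, and linearity is a simplifying assumption:

```python
from scipy.optimize import linprog

# Each unit of data is processed locally, offloaded to the other device,
# or discarded. Variables z = [p1, p2, o1, o2, x1, x2]: p_i processed at
# device i, o_i offloaded from i to the other device, x_i discarded.
c1, c2, c_off, c_disc = 2.0, 5.0, 1.0, 10.0  # per-unit costs (made up)
d = [8.0, 3.0]                               # data held by each device
cap = [4.0, 12.0]                            # processing capacities

cost = [c1, c2, c2 + c_off, c1 + c_off, c_disc, c_disc]
A_eq = [[1, 0, 1, 0, 1, 0],      # p1 + o1 + x1 = d1
        [0, 1, 0, 1, 0, 1]]      # p2 + o2 + x2 = d2
A_ub = [[1, 0, 0, 1, 0, 0],      # p1 + o2 <= cap1
        [0, 1, 1, 0, 0, 0]]      # p2 + o1 <= cap2
res = linprog(cost, A_ub=A_ub, b_ub=cap, A_eq=A_eq, b_eq=d,
              bounds=[(0, None)] * 6)
print(res.x)  # device 1 processes to capacity and offloads the remainder
```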
no code implementations • 12 Mar 2020 • Xiaoxi Zhang, Jian-Yu Wang, Gauri Joshi, Carlee Joe-Wong
Due to the massive size of the neural network models and training datasets used in machine learning today, it is imperative to distribute stochastic gradient descent (SGD) by splitting up tasks such as gradient evaluation across multiple worker nodes.
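The baseline setting can be sketched as one synchronous data-parallel SGD step; the paper's actual contribution on top of this baseline is not reproduced here:

```python
import numpy as np

def synchronous_sgd_step(w, shards, grad_fn, lr=0.1):
    """One synchronous data-parallel step: every worker evaluates the
    gradient on its own data shard, and the server averages the results
    to update the model. Illustrative sketch of the setting only."""
    grads = [grad_fn(w, X, y) for X, y in shards]  # parallel in a real system
    return w - lr * np.mean(grads, axis=0)
```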
no code implementations • 21 Nov 2019 • Jinhang Zuo, Xiaoxi Zhang, Carlee Joe-Wong
We consider the stochastic multi-armed bandit (MAB) problem in a setting where a player can pay to pre-observe arm rewards before playing an arm in each round.
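A toy decision rule for this pre-observation setting, purely illustrative and not the paper's policy:

```python
import numpy as np

def choose_with_preobservation(means, counts, t, cost):
    """Pick the UCB-best arm, and pay to pre-observe it only while its
    confidence width still exceeds the observation cost, i.e., while the
    information is plausibly worth the price."""
    counts = np.maximum(counts, 1)
    width = np.sqrt(2.0 * np.log(t + 1) / counts)
    arm = int(np.argmax(means + width))
    pay_to_observe = width[arm] > cost
    return arm, pay_to_observe
```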
no code implementations • 13 Apr 2018 • Takuma Oda, Carlee Joe-Wong
Since DQNs scale poorly with a large number of possible dispatches, we streamline our DQN training and let each vehicle independently learn its own optimal policy, ensuring scalability at the cost of less coordination between vehicles.
1 code implementation • 4 Dec 2017 • Abhinav Jauhri, Carlee Joe-Wong, John Paul Shen
Motivated by ride-sharing platforms' efforts to reduce their riders' wait times for a vehicle, this paper introduces a novel problem of placing vehicles to fulfill real-time pickup requests in a spatially and temporally changing environment.