no code implementations • 26 Sep 2024 • Ming Xiang, Stratis Ioannidis, Edmund Yeh, Carlee Joe-Wong, Lili Su
Addressing intermittent client availability is critical for the real-world deployment of federated learning algorithms.
no code implementations • 25 Sep 2024 • Ruining Yang, Lili Su
In this paper, to mitigate data redundancy in over-represented driving scenarios and to reduce the bias rooted in the data scarcity of complex ones, we propose a novel data-efficient training method based on coreset selection.
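As a minimal illustration of coreset selection in this spirit (the paper's actual selection criterion is not reproduced here), the sketch below uses a k-center greedy rule on scenario embeddings; the `embeddings` array and the budget are assumed placeholders.

```python
import numpy as np

def k_center_greedy(embeddings: np.ndarray, budget: int, seed: int = 0) -> list[int]:
    """Greedy k-center coreset selection: repeatedly add the point farthest
    from the current selection, so the coreset covers diverse scenarios."""
    rng = np.random.default_rng(seed)
    n = embeddings.shape[0]
    selected = [int(rng.integers(n))]
    # Distance from every point to its nearest selected center.
    dists = np.linalg.norm(embeddings - embeddings[selected[0]], axis=1)
    while len(selected) < budget:
        idx = int(np.argmax(dists))            # farthest point = most novel scenario
        selected.append(idx)
        new_d = np.linalg.norm(embeddings - embeddings[idx], axis=1)
        dists = np.minimum(dists, new_d)       # update nearest-center distances
    return selected

# Toy usage: 1000 scenario embeddings, keep a 5% coreset.
feats = np.random.randn(1000, 32)
coreset_idx = k_center_greedy(feats, budget=50)
```

Under-represented (complex) scenarios tend to sit far from the bulk of the data in embedding space, so a coverage-style rule like this keeps them while thinning redundant, over-represented ones.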
no code implementations • 25 Sep 2024 • Chao Huang, Wenshuo Zang, Carlo Pinciroli, Zhi Jane Li, Taposh Banerjee, Lili Su, Rui Liu
The prediction accuracy and adaptation speed results show the effectiveness of PLBA in preference learning and MRS behavior adaptation.
no code implementations • 25 Sep 2024 • Tongfei Guo, Taposh Banerjee, Rui Liu, Lili Su
Trajectory prediction describes the motions of surrounding moving obstacles for an autonomous vehicle; it plays a crucial role in enabling timely decision-making, such as collision avoidance and trajectory replanning.
no code implementations • 7 Sep 2024 • Xiaochun Niu, Lili Su, Jiaming Xu, Pengkun Yang
In this paper, we identify the optimal statistical rate when clients share a common low-dimensional linear representation.
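For context, a hedged sketch of the standard shared-representation model (the paper's exact assumptions may differ): each client's regression vector factors through a common matrix $B$ of rank $k \ll d$.

```latex
% Assumed setup, for illustration only: M clients, n samples per client.
\[
  y_{i,j} \;=\; x_{i,j}^{\top} B\, w_i + \varepsilon_{i,j},
  \qquad B \in \mathbb{R}^{d \times k},\quad w_i \in \mathbb{R}^{k},\quad k \ll d .
\]
% The representation B is shared by all clients; only the k-dimensional heads w_i
% are client-specific, which is what makes collaboration statistically profitable.
```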
no code implementations • 5 Sep 2024 • Muxing Wang, Pengkun Yang, Lili Su
We prove that, for a wide range of stepsizes, the $\ell_{\infty}$ norm of the error cannot decay faster than $\Theta (E/T)$.
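Read as a lower bound, with assumed notation ($E$ local updates per communication round, $T$ total iterations, $\widehat{\theta}_T$ the final iterate and $\theta^*$ its target; these symbols are illustrative, not quoted from the paper):

```latex
\[
  \bigl\| \widehat{\theta}_T - \theta^{*} \bigr\|_{\infty}
  \;\ge\; c\,\frac{E}{T}
  \quad \text{for some constant } c > 0,
\]
% i.e., no admissible stepsize yields decay faster than \Theta(E/T), so taking
% more local updates per round proportionally slows the attainable rate.
```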
no code implementations • 22 Apr 2024 • Marie Siew, Haoran Zhang, Jong-Ik Park, Yuezhou Liu, Yichen Ruan, Lili Su, Stratis Ioannidis, Edmund Yeh, Carlee Joe-Wong
We show how our fairness-based learning and incentive mechanisms impact training convergence, and we evaluate our algorithm with multiple sets of learning tasks on real-world datasets.
no code implementations • 15 Apr 2024 • Ming Xiang, Stratis Ioannidis, Edmund Yeh, Carlee Joe-Wong, Lili Su
It consists of a parameter server and a possibly large collection of clients (e.g., in cross-device federated learning) that may operate in congested and changing environments.
1 code implementation • 12 Mar 2024 • Mingze Wang, Lili Su, Cilin Yan, Sheng Xu, Pengcheng Yuan, XiaoLong Jiang, Baochang Zhang
RSBuilding is designed to enhance cross-scene generalization and task universality.
no code implementations • 23 Aug 2023 • Jiangwei Wang, Lili Su, Songyang Han, Dongjin Song, Fei Miao
Then, through extensive experiments on the SUMO simulator, we show that our proposed algorithm achieves strong detection performance in both highway and urban traffic.
no code implementations • 27 Jul 2023 • Connor Mclaughlin, Matthew Ding, Deniz Erdogmus, Lili Su
On the network communication side, we consider packet-dropping link failures.
no code implementations • 29 Jun 2023 • Connor Mclaughlin, Matthew Ding, Deniz Erdogmus, Lili Su
Fast and reliable state estimation and tracking are essential for real-time situation awareness in Cyber-Physical Systems (CPS) operating in tactical environments or complicated civilian environments.
no code implementations • 1 Jun 2023 • Ming Xiang, Stratis Ioannidis, Edmund Yeh, Carlee Joe-Wong, Lili Su
Specifically, in each round $t$, the link between the PS and client $i$ is active with probability $p_i^t$, which is $\textit{unknown}$ to both the PS and the clients.
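A minimal sketch of this setting (not the paper's algorithm), assuming a toy quadratic objective per client: each round the PS only hears from clients whose links happen to be active, the activation probabilities $p_i^t$ are time-varying and hidden from everyone, and naive averaging of whatever arrives is biased toward frequently available clients.

```python
import numpy as np

rng = np.random.default_rng(1)
d, num_clients, rounds = 10, 20, 100
global_model = np.zeros(d)
client_optima = rng.normal(size=(num_clients, d))   # heterogeneous local objectives (toy)

for t in range(rounds):
    # Unknown, time-varying participation probabilities p_i^t (hidden from PS and clients).
    p_t = rng.uniform(0.2, 0.9, size=num_clients)
    active = rng.random(num_clients) < p_t
    if not active.any():
        continue
    # Each active client takes one local gradient step on its toy quadratic objective.
    updates = []
    for i in np.flatnonzero(active):
        grad = global_model - client_optima[i]
        updates.append(global_model - 0.5 * grad)
    # The PS averages whatever arrives; with unequal p_i^t this estimate is biased
    # toward frequently available clients, which is the difficulty the paper targets.
    global_model = np.mean(updates, axis=0)
```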
no code implementations • 31 May 2023 • Lili Su, Ming Xiang, Jiaming Xu, Pengkun Yang
Federated learning is a decentralized machine learning framework that enables collaborative model training without revealing raw data.
no code implementations • 8 Mar 2023 • Muzi Peng, Jiangwei Wang, Dongjin Song, Fei Miao, Lili Su
Deep learning is the method of choice for trajectory prediction for autonomous vehicles.
no code implementations • 3 Oct 2022 • Ming Xiang, Lili Su
Federated Learning (FL) is a nascent decentralized learning framework under which a massive collection of heterogeneous clients collaboratively train a model without revealing their local data.
no code implementations • 15 Jun 2022 • Lili Su, Jiaming Xu, Pengkun Yang
This paper studies the problem of model training under Federated Learning when clients exhibit cluster structure.
no code implementations • 29 Jun 2021 • Lili Su, Jiaming Xu, Pengkun Yang
We discover that when the data heterogeneity is moderate, a client with limited local data can benefit from a common model with a large federation gain.
no code implementations • NeurIPS 2019 • Lili Su, Pengkun Yang
When the network is sufficiently over-parameterized, these matrices individually approximate {\em an} integral operator which is determined by the feature vector distribution $\rho$ only.
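For a rough sense of the limiting object, a hedged sketch in the spirit of NTK-type analyses (the kernel form below is an assumption, not quoted from the paper): for a two-layer network with first-layer weights $w \sim \mathcal{N}(0, I)$, the Gram matrices concentrate around an integral operator acting on $L^2(\rho)$,

```latex
\[
  (L_K f)(x) \;=\; \int K(x, z)\, f(z)\, \mathrm{d}\rho(z),
  \qquad
  K(x, z) \;=\; \mathbb{E}_{w \sim \mathcal{N}(0, I)}
  \bigl[\, \sigma'(w^{\top} x)\, \sigma'(w^{\top} z) \,\bigr]\, x^{\top} z,
\]
% which depends on the data only through the feature vector distribution \rho.
```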
no code implementations • 26 Apr 2018 • Lili Su, Jiaming Xu
Nevertheless, the empirical risk (sample version) is allowed to be non-convex.
no code implementations • 22 Feb 2018 • Lili Su, Martin Zubeldia, Nancy Lynch
We say an individual learns the best option if eventually (as $t \to \infty$) it pulls only the arm with the highest average reward.
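A minimal single-agent illustration of this learning criterion (the paper's collaborative, networked setting is not modeled here): with an epsilon-greedy rule whose exploration decays over time, the fraction of pulls going to the best arm tends to one as $t \to \infty$.

```python
import numpy as np

rng = np.random.default_rng(0)
means = np.array([0.2, 0.5, 0.8])           # arm reward means; arm 2 is best
counts = np.zeros(3)
estimates = np.zeros(3)

T = 50_000
for t in range(1, T + 1):
    eps = min(1.0, 10.0 / t)                # decaying exploration
    arm = rng.integers(3) if rng.random() < eps else int(np.argmax(estimates))
    reward = rng.normal(means[arm], 0.1)
    counts[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / counts[arm]   # running mean

# "Learning the best option": asymptotically almost all pulls go to arm 2.
print(counts / T)
```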
2 code implementations • 16 May 2017 • Yudong Chen, Lili Su, Jiaming Xu
The total computational complexity of our algorithm is $O((Nd/m) \log N)$ at each working machine and $O(md + kd \log^3 N)$ at the central server, and the total communication cost is $O(m d \log N)$.
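A hedged sketch of one robust aggregation pattern consistent with these cost terms: each of the $m$ machines computes a gradient on its $N/m$ local samples ($O((N/m)d)$ work per round) and the server combines the $m$ reports robustly. The coordinate-wise median below is a stand-in; the paper's exact aggregation rule and the role of $k$ are not reproduced.

```python
import numpy as np

def local_gradient(theta: np.ndarray, X: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Least-squares gradient on one machine's local samples: O((N/m) d) work."""
    return X.T @ (X @ theta - y) / len(y)

def robust_aggregate(grads: np.ndarray) -> np.ndarray:
    """Coordinate-wise median over the m reported gradients (O(m d) at the server)."""
    return np.median(grads, axis=0)

rng = np.random.default_rng(0)
d, m, n_local = 5, 11, 200
theta_true = rng.normal(size=d)
theta = np.zeros(d)

for _ in range(100):
    grads = []
    for i in range(m):
        X = rng.normal(size=(n_local, d))
        y = X @ theta_true + 0.1 * rng.normal(size=n_local)
        g = local_gradient(theta, X, y)
        if i < 2:                       # two faulty machines report garbage
            g = rng.normal(scale=100.0, size=d)
        grads.append(g)
    theta -= 0.1 * robust_aggregate(np.stack(grads))

print(np.linalg.norm(theta - theta_true))   # small despite the corrupted reports
```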
no code implementations • 28 Jun 2016 • Lili Su, Nitin H. Vaidya
This paper addresses the problem of non-Bayesian learning over multi-agent networks, where agents repeatedly collect partially informative observations about an unknown state of the world, and try to collaboratively learn the true state.
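For intuition, a minimal sketch of one standard non-Bayesian update rule (log-linear pooling of neighbors' beliefs followed by a local Bayesian likelihood step); this is an illustrative stand-in, not necessarily the rule analyzed in the paper. No agent can identify the true state from its own observations, but the network can.

```python
import numpy as np

rng = np.random.default_rng(0)
true_state = 2

# P(observe 1 | state) for each agent; every row has a tie, so no agent is
# fully informative alone, but collectively the states are distinguishable.
p_obs = np.array([[0.7, 0.7, 0.2],     # agent 0 confuses states 0 and 1
                  [0.2, 0.7, 0.7],     # agent 1 confuses states 1 and 2
                  [0.7, 0.2, 0.7]])    # agent 2 confuses states 0 and 2
n_agents, n_states = p_obs.shape

# Doubly stochastic mixing weights over a ring network.
A = 0.5 * np.eye(n_agents) + 0.25 * (np.roll(np.eye(n_agents), 1, axis=0)
                                     + np.roll(np.eye(n_agents), -1, axis=0))

beliefs = np.full((n_agents, n_states), 1.0 / n_states)
for t in range(300):
    obs = (rng.random(n_agents) < p_obs[:, true_state]).astype(float)
    # Likelihood of each agent's observation under every candidate state.
    lik = obs[:, None] * p_obs + (1 - obs)[:, None] * (1 - p_obs)
    # Log-linear (geometric) pooling of neighbors' beliefs, then a local Bayes step.
    beliefs = np.exp(A @ np.log(beliefs)) * lik
    beliefs /= beliefs.sum(axis=1, keepdims=True)

print(beliefs.round(3))   # every agent's belief concentrates on the true state (index 2)
```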