Search Results for author: Lili Su

Found 17 papers, 2 papers with code

Fair Concurrent Training of Multiple Models in Federated Learning

no code implementations • 22 Apr 2024 • Marie Siew, Haoran Zhang, Jong-Ik Park, Yuezhou Liu, Yichen Ruan, Lili Su, Stratis Ioannidis, Edmund Yeh, Carlee Joe-Wong

We show how our fairness-based learning and incentive mechanisms impact training convergence, and finally evaluate our algorithm with multiple sets of learning tasks on real-world datasets.

Empowering Federated Learning with Implicit Gossiping: Mitigating Connection Unreliability Amidst Unknown and Arbitrary Dynamics

no code implementations • 15 Apr 2024 • Ming Xiang, Stratis Ioannidis, Edmund Yeh, Carlee Joe-Wong, Lili Su

It consists of a parameter server and a possibly large collection of clients (e.g., in cross-device federated learning) that may operate in congested and changing environments.

Federated Learning

Towards Safe Autonomy in Hybrid Traffic: Detecting Unpredictable Abnormal Behaviors of Human Drivers via Information Sharing

no code implementations • 23 Aug 2023 • Jiangwei Wang, Lili Su, Songyang Han, Dongjin Song, Fei Miao

Then, through extensive experiments on the SUMO simulator, we show that our proposed algorithm achieves strong detection performance in both highway and urban traffic.

Autonomous Vehicles • Trajectory Prediction

Fast and Robust State Estimation and Tracking via Hierarchical Learning

no code implementations • 29 Jun 2023 • Connor Mclaughlin, Matthew Ding, Deniz Erdogmus, Lili Su

In both algorithms, we use a novel hierarchical push-sum consensus component.
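
As background, here is a minimal sketch of the classical (flat) push-sum primitive that the hierarchical component builds on; the hierarchy itself is the paper's contribution and is not reproduced here. Each node repeatedly splits a value and a weight among its out-neighbors, and the per-node ratio converges to the network-wide average:

```python
import numpy as np

def push_sum(x0, out_neighbors, rounds=100):
    """Classical push-sum averaging.

    x0: initial scalar values, one per node.
    out_neighbors[i]: nodes that i sends to (conventionally including itself).
    Returns each node's estimate x_i / w_i of the average of x0.
    """
    n = len(x0)
    x = np.asarray(x0, dtype=float).copy()
    w = np.ones(n)
    for _ in range(rounds):
        x_new, w_new = np.zeros(n), np.zeros(n)
        for i in range(n):
            share = len(out_neighbors[i])
            for j in out_neighbors[i]:
                x_new[j] += x[i] / share  # split value equally among out-neighbors
                w_new[j] += w[i] / share  # split weight the same way
        x, w = x_new, w_new
    return x / w

# Directed ring of 4 nodes with self-loops; strongly connected, so every
# node's ratio converges to the true average 2.5.
print(push_sum([1.0, 2.0, 3.0, 4.0], {0: [0, 1], 1: [1, 2], 2: [2, 3], 3: [3, 0]}))
```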

Towards Bias Correction of FedAvg over Nonuniform and Time-Varying Communications

no code implementations • 1 Jun 2023 • Ming Xiang, Stratis Ioannidis, Edmund Yeh, Carlee Joe-Wong, Lili Su

Specifically, in each round $t$, the link between the PS and client $i$ is active with probability $p_i^t$, which is unknown to both the PS and the clients.
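
As a purely illustrative simulation of this link model (not the paper's bias-correction algorithm; all names and constants here are hypothetical), one FedAvg-style round under Bernoulli link activity looks like:

```python
import numpy as np

rng = np.random.default_rng(0)
n_clients, d = 10, 5
global_model = np.zeros(d)

# Unknown, time-varying link probabilities p_i^t (drawn arbitrarily here).
p_t = rng.uniform(0.2, 0.9, size=n_clients)

local_updates = rng.normal(size=(n_clients, d))  # stand-in for local training results
active = rng.random(n_clients) < p_t             # link i is up with probability p_i^t

if active.any():
    # Naively averaging only the active clients skews the model toward
    # well-connected clients; correcting this bias without knowing p_i^t
    # is exactly the difficulty the paper addresses.
    global_model += local_updates[active].mean(axis=0)
```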

Federated Learning

Federated Learning in the Presence of Adversarial Client Unavailability

no code implementations • 31 May 2023 • Lili Su, Ming Xiang, Jiaming Xu, Pengkun Yang

Federated learning is a decentralized machine learning framework that enables collaborative model training without revealing raw data.

Federated Learning • Selection bias

Distributed Non-Convex Optimization with One-Bit Compressors on Heterogeneous Data: Efficient and Resilient Algorithms

no code implementations • 3 Oct 2022 • Ming Xiang, Lili Su

Federated Learning (FL) is a nascent decentralized learning framework under which a massive collection of heterogeneous clients collaboratively train a model without revealing their local data.
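
For intuition, one widely used one-bit compressor is scaled sign compression: each coordinate is reduced to its sign, plus a single full-precision scalar. This generic sketch is an assumption for illustration; the paper's specific compressor and its resilience mechanisms may differ.

```python
import numpy as np

def one_bit_compress(v):
    """Scaled sign compression: d bits (the signs) plus one scalar (the scale)."""
    scale = np.abs(v).mean()   # one full-precision scalar per vector
    return scale, np.sign(v)   # one bit per coordinate

def decompress(scale, signs):
    return scale * signs

g = np.random.default_rng(1).normal(size=8)
scale, signs = one_bit_compress(g)
print(decompress(scale, signs))  # coarse surrogate for g at a fraction of the bits
```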

Federated Learning • Privacy Preserving

Global Convergence of Federated Learning for Mixed Regression

no code implementations • 15 Jun 2022 • Lili Su, Jiaming Xu, Pengkun Yang

This paper studies the problem of model training under Federated Learning when clients exhibit cluster structure.
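
Here is a hypothetical sketch of the alternating pattern common in cluster-structured federated learning: each client joins the cluster model with the lowest local loss, then each cluster model is refit on its assigned clients. The code is illustrative only, specialized to linear models to echo the mixed-regression setting; it is not the paper's algorithm.

```python
import numpy as np

def clustered_round(models, client_data):
    """One alternating round: assign clients to cluster models by squared
    loss, then refit each cluster model by least squares on its clients."""
    assign = [int(np.argmin([np.mean((X @ w - y) ** 2) for w in models]))
              for X, y in client_data]
    new_models = []
    for k, w in enumerate(models):
        members = [(X, y) for (X, y), a in zip(client_data, assign) if a == k]
        if members:
            X_all = np.vstack([X for X, _ in members])
            y_all = np.concatenate([y for _, y in members])
            w = np.linalg.lstsq(X_all, y_all, rcond=None)[0]
        new_models.append(w)
    return new_models, assign
```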

Federated Learning • regression

A Non-parametric View of FedAvg and FedProx: Beyond Stationary Points

no code implementations • 29 Jun 2021 • Lili Su, Jiaming Xu, Pengkun Yang

We discover that when the data heterogeneity is moderate, a client with limited local data can benefit from a common model with a large federation gain.

Federated Learning • regression

On Learning Over-parameterized Neural Networks: A Functional Approximation Perspective

no code implementations • NeurIPS 2019 • Lili Su, Pengkun Yang

When the network is sufficiently over-parameterized, these matrices individually approximate an integral operator which is determined by the feature vector distribution $\rho$ only.
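
To make the object concrete, the following numerical check assumes a two-layer ReLU network with Gaussian first-layer weights and unit-norm inputs (a common setting, though the paper's exact matrices may differ): the empirical Gram matrix concentrates, as the width m grows, around a kernel matrix determined by the data distribution alone.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 5, 3, 50_000
X = rng.normal(size=(n, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)   # inputs on the unit sphere

W = rng.normal(size=(m, d))                     # random first-layer weights
act = (X @ W.T > 0).astype(float)               # ReLU'(w_r . x_i) indicators

# Empirical Gram: H_ij = (x_i . x_j) * (1/m) sum_r 1{w_r.x_i>0} 1{w_r.x_j>0}
H = (X @ X.T) * (act @ act.T) / m

# Population limit for Gaussian w: (x_i . x_j) * (pi - angle(x_i, x_j)) / (2*pi)
cos = np.clip(X @ X.T, -1.0, 1.0)
H_inf = cos * (np.pi - np.arccos(cos)) / (2 * np.pi)
print(np.abs(H - H_inf).max())  # shrinks as m grows
```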

Collaboratively Learning the Best Option, Using Bounded Memory

no code implementations • 22 Feb 2018 • Lili Su, Martin Zubeldia, Nancy Lynch

We say an individual learns the best option if eventually (as $t \to \infty$) it pulls only the arm with the highest average reward.
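
Purely as a single-agent illustration of the bounded-memory constraint (the paper's collaborative, multi-agent protocol and its guarantees are substantially more involved, and this toy rule is an assumption, not the paper's algorithm): the learner stores only a current champion arm and O(1) counters, never the full reward history.

```python
import numpy as np

def bounded_memory_best_arm(means, horizon=200_000, block=500, seed=0):
    """Toy bounded-memory rule: in each block, compare the stored champion
    arm against one random challenger using in-block empirical means only."""
    rng = np.random.default_rng(seed)
    champ = 0
    for _ in range(horizon // (2 * block)):
        challenger = int(rng.integers(len(means)))
        champ_mean = rng.binomial(1, means[champ], size=block).mean()
        chall_mean = rng.binomial(1, means[challenger], size=block).mean()
        if chall_mean > champ_mean:
            champ = challenger   # memory holds only the new champion's index
    return champ

print(bounded_memory_best_arm([0.2, 0.5, 0.8]))  # settles on arm 2 w.h.p.
```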

Distributed Statistical Machine Learning in Adversarial Settings: Byzantine Gradient Descent

2 code implementations • 16 May 2017 • Yudong Chen, Lili Su, Jiaming Xu

The total computational complexity of our algorithm is of $O((Nd/m) \log N)$ at each working machine and $O(md + kd \log^3 N)$ at the central server, and the total communication cost is of $O(m d \log N)$.
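
These costs come from the paper's aggregation rule: the gradients reported by the m working machines are grouped into k batches, each batch is averaged, and the server takes the geometric median of the k batch means. A minimal sketch of that aggregation follows; the Weiszfeld iteration count and tolerance are illustrative choices, not the paper's.

```python
import numpy as np

def geometric_median(points, iters=100, eps=1e-8):
    """Weiszfeld's algorithm for the geometric median of the rows of `points`."""
    z = points.mean(axis=0)
    for _ in range(iters):
        dist = np.maximum(np.linalg.norm(points - z, axis=1), eps)
        wts = 1.0 / dist
        z = (wts[:, None] * points).sum(axis=0) / wts.sum()
    return z

def median_of_means(grads, k):
    """Group the m worker gradients into k batches, average each batch,
    then take the geometric median of the k batch means."""
    batch_means = np.stack([b.mean(axis=0) for b in np.array_split(grads, k)])
    return geometric_median(batch_means)
```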

BIG-bench Machine Learning Federated Learning

Defending Non-Bayesian Learning against Adversarial Attacks

no code implementations • 28 Jun 2016 • Lili Su, Nitin H. Vaidya

This paper addresses the problem of non-Bayesian learning over multi-agent networks, where agents repeatedly collect partially informative observations about an unknown state of the world, and try to collaboratively learn the true state.
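
In the fault-free baseline, a standard non-Bayesian learning rule interleaves consensus on neighbors' beliefs with a local Bayesian update, as in the log-linear variant sketched below. The paper's contribution is defending such rules against adversarial (Byzantine) agents, which this sketch deliberately omits.

```python
import numpy as np

def non_bayesian_step(beliefs, likelihoods, A):
    """One round of log-linear non-Bayesian learning (no adversaries).

    beliefs:     (n_agents, n_states) current beliefs, rows sum to 1.
    likelihoods: (n_agents, n_states) likelihood of each agent's newest
                 observation under each candidate state of the world.
    A:           (n_agents, n_agents) row-stochastic weight matrix.
    """
    log_mix = A @ np.log(np.clip(beliefs, 1e-300, None))  # consensus in log space
    updated = np.exp(log_mix) * likelihoods               # local Bayes factor
    return updated / updated.sum(axis=1, keepdims=True)   # renormalize per agent
```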
