Search Results for author: Lie He

Found 10 papers, 7 papers with code

Learning from History for Byzantine Robust Optimization

1 code implementation • 18 Dec 2020 • Sai Praneeth Karimireddy, Lie He, Martin Jaggi

Secondly, we prove that even if an aggregation rule succeeds in limiting the influence of the attackers in a single round, the attackers can couple their attacks across time, eventually leading to divergence.

Federated Learning · Stochastic Optimization
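
The time-coupling argument in the excerpt above can be illustrated with a toy simulation (an illustrative setup of my own, not the paper's construction): Byzantine workers submit a small, identical perturbation every round that stays well inside the honest workers' noise scale, yet a history-less aggregator such as the coordinate-wise median lets the bias accumulate into steady drift.

```python
import numpy as np

rng = np.random.default_rng(0)
n_honest, n_byz, rounds, lr, eps = 7, 3, 600, 0.1, 0.3
x = 0.0  # scalar parameter; honest stationary point is x = 0

for t in range(rounds):
    honest = rng.normal(0.0, 1.0, size=n_honest)    # zero-mean honest gradients (pure noise)
    byz = np.full(n_byz, eps)                        # tiny bias, but identical every round
    agg = np.median(np.concatenate([honest, byz]))   # per-round influence stays small...
    x -= lr * agg                                    # ...yet the coupled bias accumulates

print(x)  # drifts steadily away from 0 as the number of rounds grows
```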

COLA: Decentralized Linear Learning

1 code implementation • NeurIPS 2018 • Lie He, An Bian, Martin Jaggi

Decentralized machine learning is a promising emerging paradigm in view of global challenges of data ownership and privacy.

BIG-bench Machine Learning · CoLA · +2

Byzantine-Robust Learning on Heterogeneous Datasets via Bucketing

1 code implementation • ICLR 2022 • Sai Praneeth Karimireddy, Lie He, Martin Jaggi

In Byzantine robust distributed or federated learning, a central server wants to train a machine learning model over data distributed across multiple workers.

Distributed Optimization · Federated Learning
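
The bucketing step named in the title can be sketched as follows (my reading of the idea; `robust_aggregate` is a placeholder for any existing robust rule and the bucket size `s` is a free parameter): worker updates are randomly permuted and averaged within buckets, and only the bucket averages are passed to the robust aggregator, which reduces the heterogeneity it has to cope with.

```python
import numpy as np

def bucketing(updates: np.ndarray, s: int, robust_aggregate, rng=None) -> np.ndarray:
    """Randomly permute the n worker updates, average them in buckets of size s,
    then apply an existing robust aggregator to the bucket averages."""
    if rng is None:
        rng = np.random.default_rng()
    n = len(updates)
    perm = rng.permutation(n)
    buckets = [updates[perm[i:i + s]].mean(axis=0) for i in range(0, n, s)]
    return robust_aggregate(np.stack(buckets))

# Toy usage with coordinate-wise median as the base robust aggregator (illustrative choice).
updates = np.random.default_rng(1).normal(size=(12, 5))   # 12 workers, 5-dim updates
agg = bucketing(updates, s=3, robust_aggregate=lambda v: np.median(v, axis=0))
print(agg.shape)  # (5,)
```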

RelaySum for Decentralized Deep Learning on Heterogeneous Data

1 code implementation • NeurIPS 2021 • Thijs Vogels, Lie He, Anastasia Koloskova, Tao Lin, Sai Praneeth Karimireddy, Sebastian U. Stich, Martin Jaggi

A key challenge, particularly in decentralized deep learning, remains handling the differences between the workers' local data distributions.

Byzantine-Robust Decentralized Learning via ClippedGossip

1 code implementation • 3 Feb 2022 • Lie He, Sai Praneeth Karimireddy, Martin Jaggi

In this paper, we study the challenging task of Byzantine-robust decentralized training on arbitrary communication graphs.

Federated Learning
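
A minimal sketch of a clipped-gossip update, as suggested by the method's name (the exact update rule and the clipping radius `tau` here are illustrative choices, not necessarily the paper's formulation): each node moves toward its neighbors, but every neighbor difference is norm-clipped first, so a single Byzantine neighbor cannot drag a node arbitrarily far in one step.

```python
import numpy as np

def clip(v: np.ndarray, tau: float) -> np.ndarray:
    """Scale v down so that its Euclidean norm is at most tau."""
    norm = np.linalg.norm(v)
    return v if norm <= tau else v * (tau / norm)

def clipped_gossip_step(x: np.ndarray, W: np.ndarray, tau: float) -> np.ndarray:
    """One gossip round: node i adds clipped differences to its neighbors.
    x: (n, d) node parameters, W: (n, n) mixing weights, tau: clipping radius."""
    n = len(x)
    new_x = x.copy()
    for i in range(n):
        for j in range(n):
            if j != i and W[i, j] > 0:
                new_x[i] += W[i, j] * clip(x[j] - x[i], tau)
    return new_x

# Toy usage: 4 nodes on a ring; node 3 holds an outlier value, but clipping limits its pull.
x = np.array([[0.0], [1.0], [2.0], [100.0]])
W = np.array([[0.0, 0.25, 0.0, 0.25],
              [0.25, 0.0, 0.25, 0.0],
              [0.0, 0.25, 0.0, 0.25],
              [0.25, 0.0, 0.25, 0.0]])
print(clipped_gossip_step(x, W, tau=1.0))
```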

Provably Personalized and Robust Federated Learning

1 code implementation • 14 Jun 2023 • Mariel Werner, Lie He, Michael Jordan, Martin Jaggi, Sai Praneeth Karimireddy

Identifying clients with similar objectives and learning a model-per-cluster is an intuitive and interpretable approach to personalization in federated learning.

Clustering · Personalized Federated Learning · +1
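
A simplified sketch of the cluster-then-personalize idea from the excerpt above (an illustration of the general approach, not the paper's algorithm; `cluster_then_train_round`, `lsq`, and the update rule are hypothetical): each client is assigned to the cluster model with the lowest loss on its local data, and each cluster model is then updated only from the clients assigned to it.

```python
import numpy as np

def cluster_then_train_round(models, client_grads_fn, client_data, lr=0.1):
    """One round of a simple clustered-FL heuristic (illustrative only).
    models: list of k parameter vectors; client_data: list of per-client datasets;
    client_grads_fn(params, data) -> (loss, gradient) on that client's data."""
    assignments = []
    for data in client_data:
        losses = [client_grads_fn(m, data)[0] for m in models]
        assignments.append(int(np.argmin(losses)))            # pick the best-fitting cluster
    for c in range(len(models)):
        grads = [client_grads_fn(models[c], client_data[i])[1]
                 for i, a in enumerate(assignments) if a == c]
        if grads:
            models[c] = models[c] - lr * np.mean(grads, axis=0)  # update that cluster's model
    return models, assignments

# Toy usage: clients drawn from two linear-regression clusters.
rng = np.random.default_rng(0)
def make_client(w):
    X = rng.normal(size=(20, 2)); y = X @ w + 0.1 * rng.normal(size=20)
    return (X, y)
def lsq(params, data):
    X, y = data; r = X @ params - y
    return float(r @ r) / len(y), 2 * X.T @ r / len(y)

clients = ([make_client(np.array([1.0, 0.0])) for _ in range(5)]
           + [make_client(np.array([0.0, 1.0])) for _ in range(5)])
models = [rng.normal(size=2), rng.normal(size=2)]
for _ in range(50):
    models, assignments = cluster_then_train_round(models, lsq, clients)
print(assignments, [np.round(m, 2) for m in models])
```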

Secure Byzantine-Robust Machine Learning

no code implementations • 8 Jun 2020 • Lie He, Sai Praneeth Karimireddy, Martin Jaggi

Increasingly, machine learning systems are being deployed to edge servers and devices (e.g., mobile phones) and trained in a collaborative manner.

BIG-bench Machine Learning

Byzantine-Robust Learning on Heterogeneous Datasets via Resampling

no code implementations • 28 Sep 2020 • Lie He, Sai Praneeth Karimireddy, Martin Jaggi

In Byzantine-robust distributed optimization, a central server wants to train a machine learning model over data distributed across multiple workers.

Distributed Optimization

Debiasing Conditional Stochastic Optimization

no code implementations • NeurIPS 2023 • Lie He, Shiva Prasad Kasiviswanathan

In this paper, we study the conditional stochastic optimization (CSO) problem, which covers a variety of applications including portfolio selection, reinforcement learning, robust learning, and causal inference.

Causal Inference · Stochastic Optimization
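
For context, the CSO objective referenced in the excerpt above is usually written in the nested form below (standard formulation; the notation is mine). The inner conditional expectation sits inside a possibly nonlinear outer function, which is why naive plug-in gradient estimators are biased and why debiasing is needed.

$$ \min_{x} \; F(x) \;=\; \mathbb{E}_{\xi}\!\left[ f_{\xi}\!\left( \mathbb{E}_{\eta \mid \xi}\!\left[ g_{\eta}(x, \xi) \right] \right) \right] $$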
