Search Results for author: Songze Li

Found 23 papers, 4 papers with code

FedMeS: Personalized Federated Continual Learning Leveraging Local Memory

no code implementations · 19 Apr 2024 · Jin Xie, Chenqing Zhu, Songze Li

We focus on the problem of Personalized Federated Continual Learning (PFCL): a group of distributed clients, each with a sequence of local tasks on arbitrary data distributions, collaborate through a central server to train a personalized model at each client, where each model is expected to perform well on all of that client's local tasks.

InternVideo2: Scaling Video Foundation Models for Multimodal Video Understanding

2 code implementations · 22 Mar 2024 · Yi Wang, Kunchang Li, Xinhao Li, Jiashuo Yu, Yinan He, Guo Chen, Baoqi Pei, Rongkun Zheng, Jilan Xu, Zun Wang, Yansong Shi, Tianxiang Jiang, Songze Li, Hongjie Zhang, Yifei Huang, Yu Qiao, Yali Wang, Limin Wang

We introduce InternVideo2, a new video foundation model (ViFM) that achieves state-of-the-art performance in action recognition, video-text tasks, and video-centric dialogue.

Ranked #1 on Audio Classification on ESC-50 (using extra training data)

Action Classification · Action Recognition · +12

Chameleon: Adapting to Peer Images for Planting Durable Backdoors in Federated Learning

1 code implementation · 25 Apr 2023 · Yanbo Dai, Songze Li

In a federated learning (FL) system, distributed clients upload their local models to a central server to aggregate into a global model.

Contrastive Learning · Federated Learning · +1
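The FL aggregation step the excerpt describes (clients upload local models, the server combines them into a global model) can be sketched as a generic FedAvg-style average. This is a minimal illustration under assumed flattened model vectors; `aggregate` and its optional weighting are illustrative names, not code from the Chameleon paper.

```python
import numpy as np

def aggregate(local_models, weights=None):
    """Server-side step: average client model vectors into a global model.

    local_models: list of 1-D numpy arrays, one flattened model per client.
    weights: optional per-client weights (e.g., local dataset sizes).
    """
    stacked = np.stack(local_models)
    if weights is None:
        return stacked.mean(axis=0)
    w = np.asarray(weights, dtype=float)
    w /= w.sum()                      # normalize so the weights sum to 1
    return (w[:, None] * stacked).sum(axis=0)

# Three clients with equal weight: the global model is the plain mean.
clients = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
global_model = aggregate(clients)     # -> array([3., 4.])
```

Weighting by local dataset size is the common FedAvg choice; a backdoor attack such as the one the paper studies operates by manipulating the uploaded `local_models` before this averaging happens.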

Scene Style Text Editing

no code implementations · 20 Apr 2023 · Tonghua Su, Fuxiang Yang, Xiang Zhou, Donglin Di, Zhongjie Wang, Songze Li

Specifically, QuadNet consists of four parts, namely background inpainting, style encoder, content encoder, and fusion generator.

Boosting Cross-task Transferability of Adversarial Patches with Visual Relations

no code implementations · 11 Apr 2023 · Tony Ma, Songze Li, Yisong Xiao, Shunchang Liu

The transferability of adversarial examples is a crucial aspect of evaluating the robustness of deep learning systems, particularly in black-box scenarios.

Image Captioning · Object Recognition · +3

Stochastic Coded Federated Learning: Theoretical Analysis and Incentive Mechanism Design

no code implementations · 8 Nov 2022 · Yuchang Sun, Jiawei Shao, Yuyi Mao, Songze Li, Jun Zhang

During training, the server computes gradients on the global coded dataset to compensate for the missing model updates of the straggling devices.

Federated Learning · Privacy Preserving

DReS-FL: Dropout-Resilient Secure Federated Learning for Non-IID Clients via Secret Data Sharing

no code implementations · 6 Oct 2022 · Jiawei Shao, Yuchang Sun, Songze Li, Jun Zhang

Federated learning (FL) strives to enable collaborative training of machine learning models without centrally collecting clients' private data.

Federated Learning

Secure Embedding Aggregation for Federated Representation Learning

no code implementations · 18 Jun 2022 · Jiaxiang Tang, Jinbao Zhu, Songze Li, Lichao Sun

We consider a federated representation learning framework in which, with the assistance of a central server, a group of $N$ distributed clients train collaboratively over their private data to learn the representations (or embeddings) of a set of entities (e.g., users in a social network).

Federated Learning · Privacy Preserving · +1

Secure Federated Clustering

no code implementations · 31 May 2022 · Songze Li, Sizai Hou, Baturalp Buyukates, Salman Avestimehr

We consider a foundational unsupervised learning task of $k$-means data clustering, in a federated learning (FL) setting consisting of a central server and many distributed clients.

Clustering · Federated Learning
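The federated $k$-means setting in the excerpt can be sketched, minus the secrecy machinery, as a two-phase Lloyd step: each client computes per-cluster coordinate sums and point counts over its private data, and the server pools these statistics into new centroids. The function names are illustrative; the paper's actual protocol additionally hides these statistics from the server.

```python
import numpy as np

def local_stats(data, centroids):
    """Client-side step: assign local points to the nearest centroid and
    return per-cluster coordinate sums and point counts."""
    dists = ((data[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
    labels = dists.argmin(axis=1)
    k, d = centroids.shape
    sums = np.zeros((k, d))
    counts = np.zeros(k)
    for point, lbl in zip(data, labels):
        sums[lbl] += point
        counts[lbl] += 1
    return sums, counts

def server_update(stats, centroids):
    """Server-side step: pool all clients' sums/counts and recompute
    each non-empty cluster's centroid as pooled sum / pooled count."""
    total_sums = sum(s for s, _ in stats)
    total_counts = sum(c for _, c in stats)
    new = centroids.copy()
    nonempty = total_counts > 0
    new[nonempty] = total_sums[nonempty] / total_counts[nonempty][:, None]
    return new
```

Iterating `local_stats` and `server_update` reproduces centralized Lloyd's algorithm exactly, since per-cluster means only depend on the pooled sums and counts; that equivalence is what makes the statistics a natural target for secure aggregation.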

Generalized Lagrange Coded Computing: A Flexible Computation-Communication Tradeoff for Resilient, Secure, and Private Computation

no code implementations · 24 Apr 2022 · Jinbao Zhu, Hengxuan Tang, Songze Li, Yijia Chang

We consider the problem of evaluating arbitrary multivariate polynomials over a massive dataset containing multiple inputs, on a distributed computing system with a master node and multiple worker nodes.

Distributed Computing
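The Lagrange-coded evaluation idea the excerpt builds on can be sketched over the reals (the actual construction works over a finite field and adds random blocks for security and privacy; the function names and evaluation points below are illustrative). The master fits a degree-$(K-1)$ polynomial $u(z)$ through the $K$ data blocks, each worker applies $f$ to its encoded share $u(\alpha_i)$, and the master interpolates $f \circ u$ from any $\deg(f)\cdot(K-1)+1$ worker results.

```python
import numpy as np

def lagrange_encode(blocks, alphas, betas):
    """Master-side step: fit u(z) with u(betas[j]) = blocks[j], then hand
    worker i the evaluation u(alphas[i])."""
    K = len(blocks)
    shares = []
    for a in alphas:
        share = 0.0
        for j in range(K):
            lj = np.prod([(a - betas[m]) / (betas[j] - betas[m])
                          for m in range(K) if m != j])
            share = share + lj * blocks[j]
        shares.append(share)
    return shares

def lagrange_decode(values, alphas, z):
    """Master-side step: interpolate the workers' results f(u(alpha_i))
    and read off f(u(z)), which equals f(blocks[j]) at z = betas[j]."""
    out = 0.0
    for i in range(len(values)):
        li = np.prod([(z - alphas[m]) / (alphas[i] - alphas[m])
                      for m in range(len(values)) if m != i])
        out = out + li * values[i]
    return out

# f(x) = x^2 has degree 2, so with K = 2 blocks any 3 worker results suffice.
blocks, betas, alphas = [2.0, 5.0], [0.0, 1.0], [2.0, 3.0, 4.0]
worker_results = [s ** 2 for s in lagrange_encode(blocks, alphas, betas)]
# lagrange_decode(worker_results, alphas, 0.0) recovers f(blocks[0]) = 4.0
```

Because $f(u(z))$ is itself a polynomial of degree $\deg(f)\cdot(K-1)$, any sufficiently large subset of worker results determines it, which is exactly the straggler resiliency the excerpt's tradeoff generalizes.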

SwiftAgg+: Achieving Asymptotically Optimal Communication Loads in Secure Aggregation for Federated Learning

no code implementations · 24 Mar 2022 · Tayyebeh Jahani-Nezhad, Mohammad Ali Maddah-Ali, Songze Li, Giuseppe Caire

We propose SwiftAgg+, a novel secure aggregation protocol for federated learning systems, where a central server aggregates local models of $N \in \mathbb{N}$ distributed users, each of size $L \in \mathbb{N}$, trained on their local data, in a privacy-preserving manner.

Federated Learning · Privacy Preserving
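The secure-aggregation guarantee the excerpt refers to, that the server learns only the sum of the $N$ local models, can be illustrated with SecAgg-style pairwise masks that cancel in the aggregate. This is only a sketch of the guarantee: SwiftAgg+ achieves its communication loads with a different, secret-sharing-based construction, and all names below are illustrative.

```python
import numpy as np

def pairwise_masks(n_users, model_len, seed=0):
    """Generate cancelling pairwise masks: user i adds r_{ij} for j > i and
    subtracts r_{ji} for j < i, so all masks vanish in the aggregate."""
    rng = np.random.default_rng(seed)
    r = {(i, j): rng.integers(0, 2 ** 16, model_len)
         for i in range(n_users) for j in range(i + 1, n_users)}
    masks = []
    for i in range(n_users):
        m = np.zeros(model_len, dtype=np.int64)
        for j in range(n_users):
            if i < j:
                m += r[(i, j)]        # mask shared with a "later" user
            elif j < i:
                m -= r[(j, i)]        # cancels that user's contribution
        masks.append(m)
    return masks

def secure_aggregate(local_models, masks):
    """The server only ever sees masked models; their sum is the true sum."""
    masked = [x + m for x, m in zip(local_models, masks)]
    return sum(masked)
```

Each individual masked model is statistically hidden by its mask, yet the masks cancel pairwise in the sum; handling users who drop out before unmasking is the hard part that protocols like SwiftAgg+ are designed to solve efficiently.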

SwiftAgg: Communication-Efficient and Dropout-Resistant Secure Aggregation for Federated Learning with Worst-Case Security Guarantees

no code implementations · 8 Feb 2022 · Tayyebeh Jahani-Nezhad, Mohammad Ali Maddah-Ali, Songze Li, Giuseppe Caire

We propose SwiftAgg, a novel secure aggregation protocol for federated learning systems, where a central server aggregates local models of $N$ distributed users, each of size $L$, trained on their local data, in a privacy-preserving manner.

Federated Learning · Privacy Preserving

Stochastic Coded Federated Learning with Convergence and Privacy Guarantees

no code implementations · 25 Jan 2022 · Yuchang Sun, Jiawei Shao, Songze Li, Yuyi Mao, Jun Zhang

Federated learning (FL) has attracted much attention as a privacy-preserving distributed machine learning framework, where many clients collaboratively train a machine learning model by exchanging model updates with a parameter server instead of sharing their raw data.

Federated Learning · Privacy Preserving

LightSecAgg: a Lightweight and Versatile Design for Secure Aggregation in Federated Learning

no code implementations · 29 Sep 2021 · Jinhyun So, Chaoyang He, Chien-Sheng Yang, Songze Li, Qian Yu, Ramy E. Ali, Basak Guler, Salman Avestimehr

We also demonstrate that, unlike existing schemes, LightSecAgg can be applied to secure aggregation in the asynchronous FL setting.

Federated Learning

OmniLytics: A Blockchain-based Secure Data Market for Decentralized Machine Learning

no code implementations · 12 Jul 2021 · Jiacheng Liang, Songze Li, Bochuan Cao, Wensi Jiang, Chaoyang He

Utilizing OmniLytics, many distributed data owners can contribute their private data to collectively train an ML model requested by some model owners, and receive compensation for data contribution.

BIG-bench Machine Learning

Pipe-SGD: A Decentralized Pipelined SGD Framework for Distributed Deep Net Training

no code implementations · NeurIPS 2018 · Youjie Li, Mingchao Yu, Songze Li, Salman Avestimehr, Nam Sung Kim, Alexander Schwing

Distributed training of deep nets is an important technique for addressing present-day computing challenges such as memory consumption and computational demands.

PolyShard: Coded Sharding Achieves Linearly Scaling Efficiency and Security Simultaneously

no code implementations · 27 Sep 2018 · Songze Li, Mingchao Yu, Chien-Sheng Yang, A. Salman Avestimehr, Sreeram Kannan, Pramod Viswanath

In particular, we propose PolyShard, a "polynomially coded sharding" scheme that achieves the information-theoretic upper bounds on storage efficiency, system throughput, and trust, thus enabling a truly scalable system.

Cryptography and Security · Distributed, Parallel, and Cluster Computing · Information Theory

Lagrange Coded Computing: Optimal Design for Resiliency, Security and Privacy

no code implementations · 4 Jun 2018 · Qian Yu, Songze Li, Netanel Raviv, Seyed Mohammadreza Mousavi Kalan, Mahdi Soltanolkotabi, Salman Avestimehr

We consider a scenario involving computations over a massive dataset stored distributedly across multiple workers, which is at the core of distributed learning algorithms.

Polynomially Coded Regression: Optimal Straggler Mitigation via Data Encoding

no code implementations · 24 May 2018 · Songze Li, Seyed Mohammadreza Mousavi Kalan, Qian Yu, Mahdi Soltanolkotabi, A. Salman Avestimehr

In particular, PCR requires a recovery threshold that scales inversely with the amount of computation/storage available at each worker.

regression

Coded TeraSort

2 code implementations · 16 Feb 2017 · Songze Li, Sucha Supittayapornpong, Mohammad Ali Maddah-Ali, A. Salman Avestimehr

We focus on sorting, which is the building block of many machine learning algorithms, and propose a novel distributed sorting algorithm, named Coded TeraSort, which substantially improves the execution time of the TeraSort benchmark in Hadoop MapReduce.

Distributed, Parallel, and Cluster Computing · Information Theory
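The TeraSort pattern the excerpt builds on can be sketched in a few lines: range-partition records by key (one range per reducer), sort each partition locally, and concatenate the partitions in order. This shows only the baseline shuffle structure; the coding that Coded TeraSort adds to cut shuffle traffic is omitted, and the function name is illustrative.

```python
import bisect

def terasort(records, boundaries):
    """Range-partition records by key (one bucket per reducer), sort each
    bucket locally, and concatenate buckets in order."""
    buckets = [[] for _ in range(len(boundaries) + 1)]
    for r in records:
        # bisect_right sends key k to the bucket whose range contains it
        buckets[bisect.bisect_right(boundaries, r)].append(r)
    out = []
    for bucket in buckets:          # each "reducer" sorts its own range
        out.extend(sorted(bucket))
    return out

# Two boundaries -> three reducers handling k < 4, 4 <= k < 8, and k >= 8.
# terasort([5, 1, 9, 3, 7], [4, 8]) -> [1, 3, 5, 7, 9]
```

Because the partitions are disjoint key ranges, the locally sorted buckets concatenate into a globally sorted output with no merge step; the expensive part in practice is shuffling the partitions between machines, which is the phase Coded TeraSort targets.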
