no code implementations • 19 Apr 2024 • Jin Xie, Chenqing Zhu, Songze Li
We focus on the problem of Personalized Federated Continual Learning (PFCL): a group of distributed clients, each with a sequence of local tasks on arbitrary data distributions, collaborate through a central server to train a personalized model at each client, with the model expected to achieve good performance on all local tasks.
2 code implementations • 22 Mar 2024 • Yi Wang, Kunchang Li, Xinhao Li, Jiashuo Yu, Yinan He, Guo Chen, Baoqi Pei, Rongkun Zheng, Jilan Xu, Zun Wang, Yansong Shi, Tianxiang Jiang, Songze Li, Hongjie Zhang, Yifei HUANG, Yu Qiao, Yali Wang, LiMin Wang
We introduce InternVideo2, a new video foundation model (ViFM) that achieves state-of-the-art performance in action recognition, video-text tasks, and video-centric dialogue.
Ranked #1 on Audio Classification on ESC-50 (using extra training data)
no code implementations • 26 Apr 2023 • Songze Li, Duanyi Yao, Jin Liu
In split VFL, the goal is to train a model that is split between the server and the clients.
1 code implementation • 25 Apr 2023 • Yanbo Dai, Songze Li
In a federated learning (FL) system, distributed clients upload their local models to a central server to aggregate into a global model.
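A minimal sketch of the upload-and-aggregate step described here, using a plain weighted average on the server side; the specific aggregation rule and names are illustrative assumptions, not necessarily what this paper uses:

```python
# Minimal sketch of server-side federated averaging (assumed weighted-average rule).
import numpy as np

def aggregate(local_models, num_samples):
    """Weighted average of client model vectors (weights = local dataset sizes)."""
    weights = np.asarray(num_samples, dtype=float)
    weights /= weights.sum()
    return sum(w * m for w, m in zip(weights, local_models))

# Example: three clients upload 4-parameter models of different data sizes.
local_models = [np.array([1.0, 2.0, 3.0, 4.0]),
                np.array([2.0, 2.0, 2.0, 2.0]),
                np.array([0.0, 1.0, 0.0, 1.0])]
global_model = aggregate(local_models, num_samples=[100, 50, 50])
print(global_model)  # [1.   1.75 2.   2.75]
```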
no code implementations • 20 Apr 2023 • Tonghua Su, Fuxiang Yang, Xiang Zhou, Donglin Di, Zhongjie Wang, Songze Li
Specifically, QuadNet consists of four parts, namely background inpainting, style encoder, content encoder, and fusion generator.
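A hypothetical sketch of how the four named components could be wired together; the module internals, tensor shapes, and the concatenation-based fusion below are assumptions for illustration only, not QuadNet's actual architecture:

```python
# Illustrative four-part generator in the spirit of the description above.
import torch
import torch.nn as nn

class QuadNetSketch(nn.Module):
    def __init__(self, ch=64):
        super().__init__()
        self.background_inpainting = nn.Conv2d(3, ch, 3, padding=1)  # fills erased text regions
        self.style_encoder = nn.Conv2d(3, ch, 3, padding=1)          # encodes source text style
        self.content_encoder = nn.Conv2d(3, ch, 3, padding=1)        # encodes target text content
        self.fusion_generator = nn.Conv2d(3 * ch, 3, 3, padding=1)   # fuses the three streams

    def forward(self, source_img, content_img):
        bg = self.background_inpainting(source_img)
        style = self.style_encoder(source_img)
        content = self.content_encoder(content_img)
        return self.fusion_generator(torch.cat([bg, style, content], dim=1))

out = QuadNetSketch()(torch.randn(1, 3, 64, 256), torch.randn(1, 3, 64, 256))
print(out.shape)  # torch.Size([1, 3, 64, 256])
```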
no code implementations • 11 Apr 2023 • Tony Ma, Songze Li, Yisong Xiao, Shunchang Liu
The transferability of adversarial examples is a crucial aspect of evaluating the robustness of deep learning systems, particularly in black-box scenarios.
no code implementations • 8 Nov 2022 • Yuchang Sun, Jiawei Shao, Yuyi Mao, Songze Li, Jun Zhang
During training, the server computes gradients on the global coded dataset to compensate for the missing model updates of the straggling devices.
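A toy sketch of this compensation idea for a linear model: the server holds a coded (randomly mixed) copy of each client's data and substitutes a gradient on the coded data whenever that client straggles. The coding scheme and weighting below are illustrative assumptions, not the paper's exact construction:

```python
# Toy straggler compensation with a server-side coded dataset (least-squares model).
import numpy as np

rng = np.random.default_rng(0)
d, n_clients = 5, 4
data = [(rng.normal(size=(20, d)), rng.normal(size=20)) for _ in range(n_clients)]

def grad(X, y, w):
    """Least-squares gradient on one dataset."""
    return X.T @ (X @ w - y) / len(y)

# Offline phase (assumed): each client uploads a random linear mix of its samples,
# forming the global coded dataset at the server.
coded = []
for X, y in data:
    M = rng.normal(size=(8, 20)) / np.sqrt(20)   # random mixing matrix
    coded.append((M @ X, M @ y))

# One training round: client 3 straggles, so the server uses the gradient
# computed on that client's coded data instead of its missing update.
w = np.zeros(d)
stragglers = {3}
g = np.zeros(d)
for i, (X, y) in enumerate(data):
    Xc, yc = coded[i] if i in stragglers else (X, y)
    g += grad(Xc, yc, w)
w -= 0.1 * g / n_clients
print(w)
```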
no code implementations • 6 Oct 2022 • Jiawei Shao, Yuchang Sun, Songze Li, Jun Zhang
Federated learning (FL) strives to enable collaborative training of machine learning models without centrally collecting clients' private data.
no code implementations • 18 Jun 2022 • Jiaxiang Tang, Jinbao Zhu, Songze Li, Lichao Sun
We consider a federated representation learning framework in which, with the assistance of a central server, a group of $N$ distributed clients collaboratively train, over their private data, the representations (or embeddings) of a set of entities (e.g., users in a social network).
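A minimal sketch of the server-side step in such a framework: each client reports updated embeddings only for the entities it holds, and the server averages per entity. The plain-average rule and data layout are illustrative assumptions, not this paper's protocol:

```python
# Per-entity embedding aggregation at the server (assumed simple average).
import numpy as np

dim, num_entities = 4, 6
global_emb = np.zeros((num_entities, dim))

# Each client reports embeddings only for entities present in its local data.
client_updates = [
    {0: np.ones(dim), 2: 2 * np.ones(dim)},   # client A
    {0: 3 * np.ones(dim), 5: np.ones(dim)},   # client B
]

sums = np.zeros_like(global_emb)
counts = np.zeros(num_entities)
for update in client_updates:
    for ent, vec in update.items():
        sums[ent] += vec
        counts[ent] += 1

mask = counts > 0
global_emb[mask] = sums[mask] / counts[mask][:, None]
print(global_emb)
```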
no code implementations • 31 May 2022 • Songze Li, Sizai Hou, Baturalp Buyukates, Salman Avestimehr
We consider a foundational unsupervised learning task of $k$-means data clustering, in a federated learning (FL) setting consisting of a central server and many distributed clients.
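A sketch of one round of federated k-means in the clear, to make the setting concrete: each client assigns its points to the nearest centroid and reports per-cluster sums and counts, and the server updates the centroids. This plain-sum exchange is an assumption for illustration and omits any privacy mechanism:

```python
# One round of (non-private) federated k-means, for illustration only.
import numpy as np

rng = np.random.default_rng(1)
k, d = 3, 2
centroids = rng.normal(size=(k, d))
client_data = [rng.normal(size=(50, d)) + i for i in range(4)]

def local_stats(X, centroids):
    """Client side: assign points to nearest centroid, report per-cluster sums/counts."""
    labels = np.argmin(((X[:, None, :] - centroids[None]) ** 2).sum(-1), axis=1)
    sums = np.zeros_like(centroids)
    counts = np.zeros(k)
    for j in range(k):
        sums[j] = X[labels == j].sum(axis=0)
        counts[j] = (labels == j).sum()
    return sums, counts

# Server side: aggregate the statistics and update the centroids.
stats = [local_stats(X, centroids) for X in client_data]
tot_sums = sum(s for s, _ in stats)
tot_counts = sum(c for _, c in stats)
centroids = tot_sums / np.maximum(tot_counts, 1)[:, None]
print(centroids)
```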
no code implementations • 24 Apr 2022 • Jinbao Zhu, Hengxuan Tang, Songze Li, Yijia Chang
We consider the problem of evaluating arbitrary multivariate polynomials over a massive dataset containing multiple inputs, on a distributed computing system with a master node and multiple worker nodes.
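A toy example of coded polynomial evaluation on a master/worker system, in the spirit of this setting but heavily simplified: a univariate polynomial, real-valued (rather than finite-field) arithmetic, and arbitrarily chosen evaluation points. The master encodes the inputs with a Lagrange interpolant, workers evaluate the polynomial on coded points, and the master decodes even though one worker never responds:

```python
# Toy Lagrange-coded evaluation of f(x) = x**2 over two inputs with four workers,
# tolerating one straggler; a simplification, not this paper's general scheme.
import numpy as np

f = lambda x: x ** 2
inputs = np.array([3.0, 5.0])              # X_1, X_2
alphas = np.array([0.0, 1.0])              # encoding points, u(alpha_i) = X_i
betas = np.array([2.0, 3.0, 4.0, 5.0])     # one evaluation point per worker

# Master encodes: u is the degree-1 interpolant through (alphas, inputs).
u = np.polyfit(alphas, inputs, deg=1)
coded_inputs = np.polyval(u, betas)

# Workers apply f to their coded input; worker 2 straggles and never returns.
returned = {j: f(coded_inputs[j]) for j in (0, 1, 3)}

# Master decodes: f(u(z)) has degree 2, so any 3 returned results determine it.
zs = betas[list(returned)]
vals = np.array(list(returned.values()))
fu = np.polyfit(zs, vals, deg=2)
recovered = np.polyval(fu, alphas)         # f(X_1), f(X_2)
print(recovered)                           # ~ [ 9. 25.]
```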
no code implementations • 24 Mar 2022 • Tayyebeh Jahani-Nezhad, Mohammad Ali Maddah-Ali, Songze Li, Giuseppe Caire
We propose SwiftAgg+, a novel secure aggregation protocol for federated learning systems, where a central server aggregates local models of $N \in \mathbb{N}$ distributed users, each of size $L \in \mathbb{N}$, trained on their local data, in a privacy-preserving manner.
no code implementations • 8 Feb 2022 • Tayyebeh Jahani-Nezhad, Mohammad Ali Maddah-Ali, Songze Li, Giuseppe Caire
We propose SwiftAgg, a novel secure aggregation protocol for federated learning systems, where a central server aggregates local models of $N$ distributed users, each of size $L$, trained on their local data, in a privacy-preserving manner.
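A generic pairwise-masking sketch of the secure-aggregation goal (the server learns only the sum of the local models); this is not SwiftAgg's protocol, which uses a different sharing and communication structure, but it shows the property both schemes target:

```python
# Pairwise additive masking: uploads look random, yet the masks cancel in the sum.
import numpy as np

rng = np.random.default_rng(2)
N, L = 3, 4
models = [rng.normal(size=L) for _ in range(N)]

# Each pair (i, j), i < j, agrees on a random mask; i adds it, j subtracts it.
masks = {(i, j): rng.normal(size=L) for i in range(N) for j in range(i + 1, N)}
uploads = []
for i in range(N):
    x = models[i].copy()
    for (a, b), m in masks.items():
        if a == i:
            x += m
        elif b == i:
            x -= m
    uploads.append(x)

aggregate = sum(uploads)                      # masks cancel; equals the true sum
print(np.allclose(aggregate, sum(models)))    # True
```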
no code implementations • 25 Jan 2022 • Yuchang Sun, Jiawei Shao, Songze Li, Yuyi Mao, Jun Zhang
Federated learning (FL) has attracted much attention as a privacy-preserving distributed machine learning framework, where many clients collaboratively train a machine learning model by exchanging model updates with a parameter server instead of sharing their raw data.
no code implementations • 29 Sep 2021 • Jinhyun So, Chaoyang He, Chien-Sheng Yang, Songze Li, Qian Yu, Ramy E. Ali, Basak Guler, Salman Avestimehr
We also demonstrate that, unlike existing schemes, LightSecAgg can be applied to secure aggregation in the asynchronous FL setting.
no code implementations • 12 Jul 2021 • Jiacheng Liang, Songze Li, Bochuan Cao, Wensi Jiang, Chaoyang He
Utilizing OmniLytics, many distributed data owners can contribute their private data to collectively train an ML model requested by some model owners, and receive compensation for data contribution.
5 code implementations • 27 Jul 2020 • Chaoyang He, Songze Li, Jinhyun So, Xiao Zeng, Mi Zhang, Hongyi Wang, Xiaoyang Wang, Praneeth Vepakomma, Abhishek Singh, Hang Qiu, Xinghua Zhu, Jianzong Wang, Li Shen, Peilin Zhao, Yan Kang, Yang Liu, Ramesh Raskar, Qiang Yang, Murali Annavaram, Salman Avestimehr
Federated learning (FL) is a rapidly growing research field in machine learning.
no code implementations • NeurIPS 2018 • Youjie Li, Mingchao Yu, Songze Li, Salman Avestimehr, Nam Sung Kim, Alexander Schwing
Distributed training of deep nets is an important technique to address some of the present-day computing challenges like memory consumption and computational demands.
no code implementations • NeurIPS 2018 • Mingchao Yu, Zhifeng Lin, Krishna Narra, Songze Li, Youjie Li, Nam Sung Kim, Alexander Schwing, Murali Annavaram, Salman Avestimehr
Data parallelism can boost the training speed of convolutional neural networks (CNN), but could suffer from significant communication costs caused by gradient aggregation.
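A toy illustration of that communication cost and of a generic 8-bit quantization step applied before aggregation; this is a generic compression example for scale, not this paper's vector-quantization scheme:

```python
# Gradient-aggregation traffic, uncompressed vs. naive 8-bit quantization.
import numpy as np

num_workers, num_params = 8, 1_000_000
grads = [np.random.randn(num_params).astype(np.float32) for _ in range(num_workers)]

raw_bytes = num_workers * num_params * 4            # float32 gradients exchanged
print(f"uncompressed: {raw_bytes / 1e6:.1f} MB per aggregation step")

def quantize(g):
    """Scale to int8 range and round; return codes plus the scale for dequantization."""
    scale = np.abs(g).max() / 127
    return (g / scale).round().astype(np.int8), scale

q = [quantize(g) for g in grads]
agg = sum(qg.astype(np.float32) * s for qg, s in q) / num_workers
print(f"quantized:    {num_workers * num_params / 1e6:.1f} MB per step")
print("mean abs error:", np.abs(agg - sum(grads) / num_workers).mean())
```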
no code implementations • 27 Sep 2018 • Songze Li, Mingchao Yu, Chien-Sheng Yang, A. Salman Avestimehr, Sreeram Kannan, Pramod Viswanath
In particular, we propose PolyShard, a "polynomially coded sharding" scheme that achieves information-theoretic upper bounds on storage efficiency, system throughput, and trust, thus enabling a truly scalable system.
Cryptography and Security • Distributed, Parallel, and Cluster Computing • Information Theory
no code implementations • 4 Jun 2018 • Qian Yu, Songze Li, Netanel Raviv, Seyed Mohammadreza Mousavi Kalan, Mahdi Soltanolkotabi, Salman Avestimehr
We consider a scenario involving computations over a massive dataset stored distributedly across multiple workers, which is at the core of distributed learning algorithms.
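For context, an uncoded baseline for this scenario: the dataset is split across workers, each computes its partial result, and the master must wait for every worker, so a single straggler delays the whole job. The toy task and names below are assumptions, not this paper's coded scheme:

```python
# Uncoded distributed matrix-vector multiply: the master needs all partial results.
import numpy as np

rng = np.random.default_rng(4)
A = rng.normal(size=(8, 6))                  # "massive" dataset, toy scale
x = rng.normal(size=6)
shards = np.array_split(A, 4)                # one shard per worker

partials = [shard @ x for shard in shards]   # workers compute in parallel
result = np.concatenate(partials)            # master assembles A @ x
print(np.allclose(result, A @ x))            # True
```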
no code implementations • 24 May 2018 • Songze Li, Seyed Mohammadreza Mousavi Kalan, Qian Yu, Mahdi Soltanolkotabi, A. Salman Avestimehr
In particular, PCR requires a recovery threshold that scales inversely proportionally with the amount of computation/storage available at each worker.
2 code implementations • 16 Feb 2017 • Songze Li, Sucha Supittayapornpong, Mohammad Ali Maddah-Ali, A. Salman Avestimehr
We focus on sorting, which is the building block of many machine learning algorithms, and propose a novel distributed sorting algorithm, named Coded TeraSort, which substantially improves the execution time of the TeraSort benchmark in Hadoop MapReduce.
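A plain range-partitioned distributed sort in the TeraSort pattern (partition keys by range, sort locally, concatenate), shown here only to fix the baseline; Coded TeraSort's coded multicasting in the shuffle phase is not reproduced:

```python
# Baseline TeraSort-style sort: range partition, local sort, concatenate.
import numpy as np

rng = np.random.default_rng(3)
keys = rng.integers(0, 1000, size=10_000)
num_workers = 4
boundaries = np.quantile(keys, np.linspace(0, 1, num_workers + 1)[1:-1])

# Shuffle: route each key to the worker owning its range, then sort locally.
buckets = np.digitize(keys, boundaries)
sorted_keys = np.concatenate(
    [np.sort(keys[buckets == w]) for w in range(num_workers)])
print(np.array_equal(sorted_keys, np.sort(keys)))   # True
```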
Distributed, Parallel, and Cluster Computing • Information Theory