Search Results for author: Xinchi Qiu

Found 21 papers, 6 papers with code

LUNAR: LLM Unlearning via Neural Activation Redirection

no code implementations 11 Feb 2025 William F. Shen, Xinchi Qiu, Meghdad Kurmanji, Alex Iacob, Lorenzo Sani, Yihong Chen, Nicola Cancedda, Nicholas D. Lane

Large Language Models (LLMs) benefit from training on ever larger amounts of textual data, but as a result, they increasingly incur the risk of leaking private information.

Photon: Federated LLM Pre-Training

no code implementations 5 Nov 2024 Lorenzo Sani, Alex Iacob, Zeyu Cao, Royson Lee, Bill Marino, Yan Gao, Dongqi Cai, Zexi Li, Wanru Zhao, Xinchi Qiu, Nicholas D. Lane

Scaling large language models (LLMs) demands extensive data and computing resources, which are traditionally constrained to data centers by the high-bandwidth requirements of distributed training.

Federated Learning

PISTOL: Dataset Compilation Pipeline for Structural Unlearning of LLMs

no code implementations 24 Jun 2024 Xinchi Qiu, William F. Shen, Yihong Chen, Nicola Cancedda, Pontus Stenetorp, Nicholas D. Lane

Recently, machine unlearning, which seeks to erase specific data stored in the pre-trained or fine-tuned models, has emerged as a crucial protective measure for LLMs.

Benchmarking Machine Unlearning

FLea: Addressing Data Scarcity and Label Skew in Federated Learning via Privacy-preserving Feature Augmentation

1 code implementation 13 Jun 2024 Tong Xia, Abhirup Ghosh, Xinchi Qiu, Cecilia Mascolo

Federated Learning (FL) enables model development by leveraging data distributed across numerous edge devices without transferring local data to a central server.

Federated Learning · Privacy Preserving

Sheaf HyperNetworks for Personalized Federated Learning

no code implementations 31 May 2024 Bao Nguyen, Lorenzo Sani, Xinchi Qiu, Pietro Liò, Nicholas D. Lane

Graph hypernetworks (GHNs), constructed by combining graph neural networks (GNNs) with hypernetworks (HNs), leverage relational data across various domains such as neural architecture search, molecular property prediction and federated learning.

Molecular Property Prediction · Multi-class Classification +5

The Future of Large Language Model Pre-training is Federated

no code implementations 17 May 2024 Lorenzo Sani, Alex Iacob, Zeyu Cao, Bill Marino, Yan Gao, Tomas Paulik, Wanru Zhao, William F. Shen, Preslav Aleksandrov, Xinchi Qiu, Nicholas D. Lane

We further show that the effectiveness of federated training scales with model size, and present our approach for training billion-scale federated LLMs using limited resources.

Federated Learning · Language Modeling +2

FedAnchor: Enhancing Federated Semi-Supervised Learning with Label Contrastive Loss for Unlabeled Clients

no code implementations 15 Feb 2024 Xinchi Qiu, Yan Gao, Lorenzo Sani, Heng Pan, Wanru Zhao, Pedro P. B. Gusmao, Mina Alibeigi, Alex Iacob, Nicholas D. Lane

Federated learning (FL) is a distributed learning paradigm that facilitates collaborative training of a shared global model across devices while keeping data localized.

Federated Learning

FLea: Addressing Data Scarcity and Label Skew in Federated Learning via Privacy-preserving Feature Augmentation

1 code implementation 4 Dec 2023 Tong Xia, Abhirup Ghosh, Xinchi Qiu, Cecilia Mascolo

Federated Learning (FL) enables model development by leveraging data distributed across numerous edge devices without transferring local data to a central server.

Federated Learning · Privacy Preserving

FedVal: Different good or different bad in federated learning

1 code implementation 6 Jun 2023 Viktor Valadi, Xinchi Qiu, Pedro Porto Buarque de Gusmão, Nicholas D. Lane, Mina Alibeigi

In this paper, we present FedVal, a novel approach to both robustness and fairness that requires no additional information from clients that could raise privacy concerns and thereby compromise the integrity of the FL system.

Fairness · Federated Learning

Secure Vertical Federated Learning Under Unreliable Connectivity

no code implementations 26 May 2023 Xinchi Qiu, Heng Pan, Wanru Zhao, Yan Gao, Pedro P. B. Gusmao, William F. Shen, Chenyang Ma, Nicholas D. Lane

Most work in privacy-preserving federated learning (FL) has focused on horizontally partitioned datasets where clients hold the same features and train complete client-level models independently.

Privacy Preserving · Vertical Federated Learning

Evaluating Privacy Leakage in Split Learning

no code implementations 22 May 2023 Xinchi Qiu, Ilias Leontiadis, Luca Melis, Alex Sablayrolles, Pierre Stock

In particular, on-device machine learning allows us to avoid sharing raw data with a third-party server during inference.

Privacy Preserving

Efficient Vertical Federated Learning with Secure Aggregation

no code implementations 18 May 2023 Xinchi Qiu, Heng Pan, Wanru Zhao, Chenyang Ma, Pedro Porto Buarque de Gusmão, Nicholas D. Lane

The majority of work in privacy-preserving federated learning (FL) has focused on horizontally partitioned datasets, where clients share the same sets of features and can train complete models independently.

Fraud Detection · Privacy Preserving +1
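Secure aggregation schemes of the kind named in this paper's title are commonly built on pairwise additive masks that hide each client's individual update while leaving the server-side sum intact. The following is a toy illustration of that masking idea only, not the protocol proposed in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def mask_updates(updates):
    """Add pairwise random masks to client updates.

    For each client pair (i, j), client i adds a shared random mask and
    client j subtracts the same mask, so individual masked updates look
    random but the masks cancel exactly when all updates are summed.
    """
    n = len(updates)
    masked = [u.astype(float).copy() for u in updates]
    for i in range(n):
        for j in range(i + 1, n):
            m = rng.normal(size=updates[i].shape)
            masked[i] += m  # client i adds the pairwise mask
            masked[j] -= m  # client j subtracts it
    return masked

updates = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
masked = mask_updates(updates)
# Individual masked updates are randomized, but the sums agree.
```

Real protocols additionally handle dropped clients and derive the pairwise masks from key agreement rather than a shared RNG; this sketch shows only the cancellation property.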

Gradient-less Federated Gradient Boosting Trees with Learnable Learning Rates

1 code implementation 15 Apr 2023 Chenyang Ma, Xinchi Qiu, Daniel J. Beutel, Nicholas D. Lane

The privacy-sensitive nature of decentralized datasets and the robustness of eXtreme Gradient Boosting (XGBoost) on tabular data raise the need to train XGBoost in the context of federated learning (FL).

Federated Learning

ZeroFL: Efficient On-Device Training for Federated Learning with Local Sparsity

no code implementations ICLR 2022 Xinchi Qiu, Javier Fernandez-Marques, Pedro PB Gusmao, Yan Gao, Titouan Parcollet, Nicholas Donald Lane

When the available hardware cannot meet the memory and compute requirements to efficiently train high performing machine learning models, a compromise in either the training quality or the model complexity is needed.

Federated Learning

Protea: Client Profiling within Federated Systems using Flower

no code implementations 3 Jul 2022 Wanru Zhao, Xinchi Qiu, Javier Fernandez-Marques, Pedro P. B. de Gusmão, Nicholas D. Lane

Federated Learning (FL) has emerged as a promising solution that facilitates the training of a high-performing centralised model without compromising the privacy of users.

Federated Learning

On-device Federated Learning with Flower

no code implementations 7 Apr 2021 Akhil Mathur, Daniel J. Beutel, Pedro Porto Buarque de Gusmão, Javier Fernandez-Marques, Taner Topal, Xinchi Qiu, Titouan Parcollet, Yan Gao, Nicholas D. Lane

Federated Learning (FL) allows edge devices to collaboratively learn a shared prediction model while keeping their training data on the device, thereby decoupling the ability to do machine learning from the need to store data in the cloud.

BIG-bench Machine Learning · Federated Learning

A first look into the carbon footprint of federated learning

no code implementations 15 Feb 2021 Xinchi Qiu, Titouan Parcollet, Javier Fernandez-Marques, Pedro Porto Buarque de Gusmao, Yan Gao, Daniel J. Beutel, Taner Topal, Akhil Mathur, Nicholas D. Lane

Despite impressive results, deep learning-based technologies also raise severe privacy and environmental concerns induced by the training procedure often conducted in data centers.

Federated Learning

Flower: A Friendly Federated Learning Research Framework

1 code implementation 28 Jul 2020 Daniel J. Beutel, Taner Topal, Akhil Mathur, Xinchi Qiu, Javier Fernandez-Marques, Yan Gao, Lorenzo Sani, Kwing Hei Li, Titouan Parcollet, Pedro Porto Buarque de Gusmão, Nicholas D. Lane

Federated Learning (FL) has emerged as a promising technique for edge devices to collaboratively learn a shared prediction model, while keeping their training data on the device, thereby decoupling the ability to do machine learning from the need to store the data in the cloud.

Federated Learning
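The collaborative training loop described above typically relies on federated averaging: the server combines client model updates weighted by local dataset size. A minimal NumPy sketch of that aggregation step (an illustration of the general FedAvg idea, not Flower's actual API):

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Weighted average of client model parameters (FedAvg-style).

    client_weights: one list of np.ndarray layers per client.
    client_sizes:   number of local training examples per client,
                    used as the aggregation weight.
    """
    total = sum(client_sizes)
    num_layers = len(client_weights[0])
    return [
        sum(w[layer] * (n / total) for w, n in zip(client_weights, client_sizes))
        for layer in range(num_layers)
    ]

# Two clients with a single-layer "model"; client b holds 3x the data.
a = [np.array([0.0, 0.0])]
b = [np.array([1.0, 1.0])]
avg = fedavg([a, b], [1, 3])
# avg[0] → [0.75, 0.75]
```

Weighting by dataset size keeps the aggregate closer to clients with more data; frameworks built around this loop add client sampling, communication, and fault handling on top.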

Quaternion Neural Networks for Multi-channel Distant Speech Recognition

1 code implementation 18 May 2020 Xinchi Qiu, Titouan Parcollet, Mirco Ravanelli, Nicholas Lane, Mohamed Morchid

In this paper, we propose to capture these inter- and intra- structural dependencies with quaternion neural networks, which can jointly process multiple signals as whole quaternion entities.

Automatic Speech Recognition · Automatic Speech Recognition (ASR) +2
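The operation that lets a quaternion network process four channels as one entity is the Hamilton product, which mixes all four components jointly rather than treating them as independent real features. A minimal sketch of the standard quaternion algebra (illustrative only, not the paper's model):

```python
import numpy as np

def hamilton_product(q, p):
    """Hamilton product of quaternions q = (r, x, y, z) and p = (r', x', y', z').

    Every output component depends on all four input components of both
    operands, which is what couples the channels in a quaternion layer.
    """
    r1, x1, y1, z1 = q
    r2, x2, y2, z2 = p
    return np.array([
        r1*r2 - x1*x2 - y1*y2 - z1*z2,
        r1*x2 + x1*r2 + y1*z2 - z1*y2,
        r1*y2 - x1*z2 + y1*r2 + z1*x2,
        r1*z2 + x1*y2 - y1*x2 + z1*r2,
    ])

# Sanity check from quaternion algebra: i * j = k
i = np.array([0.0, 1.0, 0.0, 0.0])
j = np.array([0.0, 0.0, 1.0, 0.0])
k = hamilton_product(i, j)  # → [0, 0, 0, 1]
```

In a quaternion layer this product replaces the real-valued matrix multiply, so the four microphone or feature channels of a signal are transformed as a single algebraic object.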
