Vertical Federated Learning (VFL) is a distributed learning paradigm that allows multiple agents to jointly train a global model when each agent holds a different subset of features for the same set of samples.
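To make the setup concrete, the minimal sketch below simulates two parties that hold disjoint feature columns of the same samples, each with a local "bottom" model whose embedding is combined by a shared "top" model. This is an illustrative single-process simulation under assumed dimensions and model names; it is not the method of any specific paper cited here.

```python
import torch
import torch.nn as nn

# Hypothetical VFL setup: two parties hold disjoint feature columns of the same samples.
n_samples, d_a, d_b = 64, 5, 3
x_a = torch.randn(n_samples, d_a)        # party A's features
x_b = torch.randn(n_samples, d_b)        # party B's features
y = torch.randint(0, 2, (n_samples,))    # labels, assumed to be held by one party

# Each party trains a local "bottom" model on its own features;
# a "top" model combines their intermediate embeddings.
bottom_a = nn.Linear(d_a, 4)
bottom_b = nn.Linear(d_b, 4)
top = nn.Linear(8, 2)

opt = torch.optim.SGD(
    list(bottom_a.parameters()) + list(bottom_b.parameters()) + list(top.parameters()),
    lr=0.1,
)
loss_fn = nn.CrossEntropyLoss()

for _ in range(10):
    h_a = bottom_a(x_a)                          # computed locally by party A
    h_b = bottom_b(x_b)                          # computed locally by party B
    logits = top(torch.cat([h_a, h_b], dim=1))   # only embeddings are exchanged, not raw features
    loss = loss_fn(logits, y)
    opt.zero_grad()
    loss.backward()                              # gradients flow back to each party's bottom model
    opt.step()
```

In a real deployment the two bottom models would run on separate machines and only the embeddings and their gradients would cross the network; the single optimizer here is purely a simulation convenience.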
Federated learning (FL) provides an efficient training paradigm to jointly train a global model leveraging data from distributed users.
We study the realistic potential of conducting backdoor attacks against deep neural networks (DNNs) during the deployment stage.
Our method exploits clipping and smoothing of model parameters to control the global model's smoothness, which yields a sample-wise robustness certification against backdoors of limited magnitude.
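A rough sketch of the idea is shown below: the aggregated global parameters are projected onto a norm ball (clipping) and perturbed with Gaussian noise (smoothing). The function name, threshold, and noise scale are illustrative assumptions, not the certified procedure from the paper itself.

```python
import torch

def clip_and_smooth(global_params, clip_threshold=1.0, sigma=0.01):
    """Illustrative sketch (assumed names/values): norm-clip the global
    parameters, then add Gaussian noise.

    Clipping bounds how far any (possibly backdoored) update can move the
    global model; Gaussian noise supports a randomized-smoothing-style
    certificate against backdoors of limited magnitude.
    """
    # Compute a single L2 norm over all parameters
    flat = torch.cat([p.view(-1) for p in global_params])
    norm = flat.norm(p=2)
    scale = min(1.0, (clip_threshold / (norm + 1e-12)).item())

    smoothed = []
    for p in global_params:
        clipped = p * scale                     # project onto the norm ball
        noise = torch.randn_like(p) * sigma     # Gaussian smoothing noise
        smoothed.append(clipped + noise)
    return smoothed

# Example usage on a toy model's parameters after aggregation
model = torch.nn.Linear(10, 2)
with torch.no_grad():
    new_params = clip_and_smooth([p.detach().clone() for p in model.parameters()])
```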
In this paper, we propose a novel Style-based Point Generator with Adversarial Rendering (SpareNet) for point cloud completion.
As machine learning systems grow in scale, so do their training data requirements, forcing practitioners to automate and outsource the curation of training data in order to achieve state-of-the-art performance.
Compared to standard centralized backdoors, we show that DBA is substantially more persistent and stealthy against FL on diverse datasets, including financial and image data.
Federated learning enables a variety of applications across domains by utilizing private training data stored on different devices.