Existing trackers can be categorized into two association paradigms: the single-feature paradigm (based on either motion or appearance features) and the serial paradigm (in which one feature is primary and the other serves as a secondary cue).
Existing defenses focus on preventing a small number of malicious clients from poisoning the global model via robust federated learning methods, and on detecting malicious clients when a large number of them are present.
Our key idea is to divide the clients into groups, learn a global model for each group of clients using any existing federated learning method, and take a majority vote among the global models to classify a test input.
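To make this grouping-and-voting construction concrete, the following is a minimal sketch (not the authors' implementation); `train_global_model` stands for an arbitrary base federated learning method (e.g., FedAvg) and each returned `model(x)` is assumed to output a predicted label.

```python
import random
from collections import Counter

def ensemble_federated_learning(clients, num_groups, train_global_model, seed=0):
    """Partition clients into disjoint groups and train one global model per group,
    using any base federated learning method (here the assumed `train_global_model`)."""
    rng = random.Random(seed)
    shuffled = clients[:]
    rng.shuffle(shuffled)
    groups = [shuffled[i::num_groups] for i in range(num_groups)]
    return [train_global_model(group) for group in groups]

def ensemble_predict(models, x):
    """Classify a test input by majority vote among the per-group global models."""
    votes = Counter(model(x) for model in models)
    return votes.most_common(1)[0][0]
```

Intuitively, malicious clients can only influence the global models of the groups they are assigned to, which is what makes the majority vote amenable to provable guarantees.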
FLDetector aims to detect and remove the majority of the malicious clients such that a Byzantine-robust FL method can learn an accurate global model using the remaining clients.
The motion and interactions of social insects (such as ants) have been studied by many researchers to understand their clustering mechanisms.
Specifically, we assume the attacker injects fake clients into a federated learning system and sends carefully crafted fake local model updates to the cloud server during training, such that the learnt global model has low accuracy on many indiscriminate test inputs.
A key limitation of passive detection is that it cannot detect fake faces that are generated by new deepfake generation methods.
However, the optimal control of autonomous greenhouses is challenging, requiring decision-making based on high-dimensional sensory data, and the scaling of production is limited by the scarcity of labor capable of handling this task.
Existing studies mainly focused on improving the detection performance in non-adversarial settings, leaving security of deepfake detection in adversarial settings largely unexplored.
We show that our ensemble federated learning with any base federated learning algorithm is provably secure against malicious clients.
Finally, the service provider computes a global model update as the average of the normalized local model updates weighted by their trust scores, and uses it to update the global model.
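As a rough sketch of this aggregation rule (illustrative, not the authors' exact code), assume the trust scores and a reference server update used for normalization are given: each local update is rescaled to the norm of the server update, and the trust-weighted average of the rescaled updates forms the global model update.

```python
import numpy as np

def aggregate_updates(local_updates, trust_scores, server_update):
    """Trust-weighted average of normalized local model updates (illustrative sketch).

    Each local update is rescaled to the L2 norm of the server's reference update,
    then averaged with weights given by the (nonnegative) trust scores.
    """
    ref_norm = np.linalg.norm(server_update)
    normalized = [u * (ref_norm / (np.linalg.norm(u) + 1e-12)) for u in local_updates]
    total_trust = sum(trust_scores) + 1e-12
    global_update = sum(s * u for s, u in zip(trust_scores, normalized)) / total_trust
    return global_update

# The global model is then updated as: global_model += learning_rate * global_update
```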
Moreover, our evaluation results on MNIST and CIFAR10 show that the intrinsic certified robustness guarantees of kNN and rNN outperform those provided by state-of-the-art certified defenses.
For instance, our method can build a classifier that achieves a certified top-3 accuracy of 69.2\% on ImageNet when an attacker can arbitrarily perturb 5 pixels of a testing image.
Specifically, we prove the certified robustness guarantee of any GNN for both node and graph classifications against structural perturbation.
Specifically, we show that bagging with an arbitrary base learning algorithm provably predicts the same label for a testing example when the number of modified, deleted, and/or inserted training examples is bounded by a threshold.
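A hedged sketch of this bagging construction, where `train_base` is a placeholder for the arbitrary base learning algorithm: each base classifier is trained on an independently subsampled training set, and the ensemble predicts by majority vote, whose margin underlies the provable guarantee.

```python
import random
from collections import Counter

def train_bagging_ensemble(train_set, num_models, subsample_size, train_base, seed=0):
    """Train `num_models` base classifiers, each on a random subsample
    (drawn with replacement) of the training set."""
    rng = random.Random(seed)
    models = []
    for _ in range(num_models):
        subsample = [rng.choice(train_set) for _ in range(subsample_size)]
        models.append(train_base(subsample))
    return models

def bagging_predict(models, x):
    """Majority-vote label for a testing example."""
    votes = Counter(model(x) for model in models)
    return votes.most_common(1)[0][0]
```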
Specifically, in this work, we study the feasibility and effectiveness of certifying robustness against backdoor attacks using a recent technique called randomized smoothing.
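For reference, the following is a minimal sketch of the prediction step of randomized smoothing with Gaussian noise on the test input, assuming a trained base classifier `base_classifier(x)` that returns a label; certifying robustness against backdoor attacks additionally involves randomizing the (possibly poisoned) training data, which this sketch omits.

```python
import numpy as np
from collections import Counter

def smoothed_predict(base_classifier, x, sigma=0.5, num_samples=1000, rng=None):
    """Randomized smoothing (illustrative): the smoothed classifier outputs the
    label the base classifier predicts most often when isotropic Gaussian noise
    N(0, sigma^2 I) is added to the input x."""
    rng = rng or np.random.default_rng(0)
    votes = Counter(
        base_classifier(x + rng.normal(0.0, sigma, size=x.shape))
        for _ in range(num_samples)
    )
    return votes.most_common(1)[0][0]
```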
However, several recent studies showed that community detection is vulnerable to adversarial structural perturbation.
For example, our method can obtain an ImageNet classifier with a certified top-5 accuracy of 62.8\% when the $\ell_2$-norms of the adversarial perturbations are less than 0.5 (=127/255).
Our empirical results on four real-world datasets show that our attacks can substantially increase the error rates of the models learnt by the federated learning methods that were claimed to be robust against Byzantine failures of some client devices.
Local Differential Privacy (LDP) protocols enable an untrusted data collector to perform privacy-preserving data analytics.
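As a classic illustration of the LDP setting (randomized response for a single binary value; not necessarily the protocols studied here), each user perturbs their bit locally before reporting it, and the untrusted collector debiases the aggregate:

```python
import math
import random

def randomized_response(bit, epsilon):
    """Report the true bit with probability e^eps / (e^eps + 1), otherwise flip it.
    This satisfies epsilon-local differential privacy for a single binary value."""
    p_truth = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return bit if random.random() < p_truth else 1 - bit

def estimate_frequency(reports, epsilon):
    """Unbiased estimate of the fraction of users whose true bit is 1."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    observed = sum(reports) / len(reports)
    return (observed - (1.0 - p)) / (2.0 * p - 1.0)
```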
Our key observation is that a DNN classifier can be uniquely represented by its classification boundary.
Our key observation is that adversarial examples are close to the classification boundary.