However, these methods still follow the design of traditional supervised adversarial training, limiting the potential of adversarial training on ViTs.
This work shows that adversarial personas can be used to bypass the safety mechanisms built into ChatGPT and Bard.
Rotation-symmetric Boolean functions form an interesting class of Boolean functions, as they are relatively rare compared to general Boolean functions.
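To make the rarity claim concrete, a rotation-symmetric function must take the same value on every cyclic rotation of its input, so there are only 2^(number of rotation orbits) such functions versus 2^(2^n) general ones. A minimal brute-force sketch (illustrative only, not code from the papers):

```python
def rotate(x, n):
    # cyclically rotate the n-bit input x by one position
    return (x >> 1) | ((x & 1) << (n - 1))

def num_rotation_orbits(n):
    # count orbits of n-bit inputs under cyclic rotation (brute force)
    seen, orbits = set(), 0
    for x in range(2 ** n):
        if x in seen:
            continue
        orbits += 1
        y = x
        for _ in range(n):
            seen.add(y)
            y = rotate(y, n)
    return orbits

n = 5
print(f"general Boolean functions on {n} vars: 2^{2 ** n}")
print(f"rotation-symmetric functions on {n} vars: 2^{num_rotation_orbits(n)}")
```

For n = 5 this gives 2^8 = 256 rotation-symmetric functions out of 2^32 general ones, which is why the class is considered rare.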
This paper provides a detailed experimentation with evolutionary algorithms with the goal of evolving (anti-)self-dual bent Boolean functions.
Our findings highlight the urgency of addressing such vulnerabilities and provide insights into potential countermeasures for securing DNN models against backdoor attacks on tabular data.
We use a momentum gradient mechanism to choose the attack node features in the feature selection module.
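A momentum gradient mechanism of this kind typically keeps a decaying running sum of normalized gradients and selects the features with the largest accumulated magnitude. A hypothetical sketch with a stand-in gradient oracle (the function name `toy_grad` and all parameters are assumptions, not the paper's implementation):

```python
import numpy as np

def toy_grad(x):
    # stand-in for the attacked model's loss gradient w.r.t. node features;
    # here we pretend the loss is sum(w * x) for a fixed weight vector
    return np.array([0.1, -2.0, 0.5, 3.0, -0.2])

def momentum_feature_selection(x, steps=10, mu=0.9, k=2):
    g = np.zeros_like(x)
    for _ in range(steps):
        grad = toy_grad(x)
        # momentum update: decaying running sum of L1-normalized gradients
        g = mu * g + grad / np.abs(grad).sum()
    # features with the largest accumulated |gradient| are attacked first
    return np.argsort(-np.abs(g))[:k]

print(momentum_feature_selection(np.zeros(5)))  # indices of most influential features
```

The momentum term smooths out noisy per-step gradients so the selection reflects consistently influential features rather than a single step's fluctuation.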
Optical Character Recognition (OCR) is a widely used tool to extract text from scanned documents.
In this work, we are the first (to the best of our knowledge) to investigate label inference attacks on VFL using a zero-background knowledge strategy.
Recently, researchers have successfully employed Graph Neural Networks (GNNs) to build enhanced recommender systems, thanks to their ability to learn patterns from the interactions between the involved entities.
No prior work analyzes and explains backdoor attack performance when triggers are injected into the most important or least important area of a sample, trigger-injecting strategies we refer to as MIAS and LIAS, respectively.
Recent studies demonstrate that collaborative learning models, specifically federated learning, are vulnerable to security and privacy attacks such as model inference and backdoor attacks.
Deep neural networks (DNNs) have demonstrated remarkable performance across various tasks, including image and speech recognition.
By observing the range of possible changes an operator can provide, as well as relative probabilities of specific transitions in the objective space, one can use this information to design a more effective combination of genetic operators.
In this paper, we propose a novel method, IB-RAR, which uses Information Bottleneck (IB) to strengthen adversarial robustness for both adversarial training and non-adversarial-trained methods.
Nevertheless, it is vulnerable to backdoor attacks that modify the training set to embed a secret functionality in the trained model.
This paper proposes a backdoor detection method by utilizing a special type of adversarial attack, universal adversarial perturbation (UAP), and its similarities with a backdoor trigger.
One example of such a property is boomerang uniformity, which helps a cipher resist boomerang attacks.
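Boomerang uniformity is the maximum entry of the Boomerang Connectivity Table (BCT) over nonzero input/output differences; lower values mean better resistance. A minimal brute-force sketch for a small S-box (illustrative, not from the paper):

```python
def boomerang_uniformity(sbox):
    # BCT(a, b) = #{ x : S^-1(S(x) ^ b) ^ S^-1(S(x ^ a) ^ b) = a };
    # the boomerang uniformity is the maximum over nonzero a, b
    n = len(sbox)
    inv = [0] * n
    for x, y in enumerate(sbox):
        inv[y] = x
    best = 0
    for a in range(1, n):
        for b in range(1, n):
            cnt = sum(
                1 for x in range(n)
                if inv[sbox[x] ^ b] ^ inv[sbox[x ^ a] ^ b] == a
            )
            best = max(best, cnt)
    return best

# sanity check: for the identity mapping every quartet "returns",
# so the table is maximal everywhere
print(boomerang_uniformity(list(range(16))))  # 16
```

The identity mapping attains the worst possible value (the table size), while a cryptographically strong S-box should come much closer to the lower bound.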
The design of binary error-correcting codes is a challenging optimization problem with several applications in telecommunications and storage, which has also been addressed with metaheuristic techniques and evolutionary algorithms.
This work explores stylistic triggers for backdoor attacks in the audio domain: dynamic transformations of malicious samples through guitar effects.
Membership Inference Attacks (MIAs) infer whether a data point is in the training data of a machine learning model.
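The classic MIA baseline (not the paper's specific method) exploits overfitting: members of the training set tend to receive higher model confidence, so a simple threshold on confidence already separates members from non-members. A minimal sketch with toy confidences:

```python
import numpy as np

def mia_predict(confidences, threshold=0.9):
    # predict "member" when the model's confidence on the true label
    # exceeds a fixed threshold (the classic confidence-threshold MIA)
    return confidences >= threshold

# toy data: members tend to get higher confidence (overfitting signal)
member_conf = np.array([0.99, 0.95, 0.97, 0.80])
nonmember_conf = np.array([0.60, 0.92, 0.55, 0.70])

preds = mia_predict(np.concatenate([member_conf, nonmember_conf]))
labels = np.array([1, 1, 1, 1, 0, 0, 0, 0])
accuracy = (preds.astype(int) == labels).mean()
print(f"attack accuracy: {accuracy:.2f}")
```

An accuracy above the 0.5 random-guess baseline indicates that the model leaks membership information.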
It is therefore currently impossible to ensure Byzantine robustness and confidentiality of updates without assuming a semi-honest majority.
Even in those scenarios, our label-only MIA achieves a better attack performance in most cases.
It was recently shown that countermeasures in image classification, like Neural Cleanse and ABS, could be bypassed with dynamic triggers that are effective regardless of their pattern and location.
Finding balanced, highly nonlinear Boolean functions is a difficult problem for which, in general, it is not known which nonlinearity values can be reached.
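Nonlinearity itself is cheap to evaluate: it equals 2^(n-1) minus half the maximum absolute Walsh-Hadamard coefficient of the function. A minimal sketch (the bent function used as a test case is an assumption for illustration, not from the paper):

```python
def walsh_hadamard(tt):
    # fast Walsh-Hadamard transform of a truth table, in (-1)^f form
    w = [(-1) ** b for b in tt]
    h = 1
    while h < len(w):
        for i in range(0, len(w), 2 * h):
            for j in range(i, i + h):
                w[j], w[j + h] = w[j] + w[j + h], w[j] - w[j + h]
        h *= 2
    return w

def nonlinearity(tt):
    # nl(f) = 2^(n-1) - max|W_f| / 2
    return len(tt) // 2 - max(abs(v) for v in walsh_hadamard(tt)) // 2

# f(x1..x4) = x1*x2 XOR x3*x4, a bent function on 4 variables,
# attains the maximum possible nonlinearity 8 - 2 = 6
tt = [((x >> 0) & (x >> 1) ^ (x >> 2) & (x >> 3)) & 1 for x in range(16)]
print(nonlinearity(tt))  # 6
```

The hard part is the search, not the evaluation: bent functions attain the maximum nonlinearity but are never balanced, and for balanced functions the best reachable values are unknown in general.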
While there is no reason to doubt the performance of CMA-ES, the lack of comparison with other metaheuristics, and of results for the challenge-response pair-based attack, leaves open the question of whether better-suited metaheuristics exist for this problem.
Finding Boolean functions suitable for cryptographic primitives is a complex combinatorial optimization problem: the functions must satisfy several properties to resist cryptanalytic attacks, and the search space, which grows superexponentially with the number of input variables, is very large.
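The superexponential growth follows directly from counting truth tables: a function on n variables has 2^n truth-table rows, each of which can be 0 or 1, giving 2^(2^n) functions in total. A one-liner makes the blow-up tangible:

```python
# number of Boolean functions on n variables: 2^(2^n)
for n in range(1, 7):
    print(n, 2 ** (2 ** n))
```

Already at n = 6 there are 2^64 (about 1.8 * 10^19) candidate functions, which is why exhaustive search is hopeless and metaheuristics are used instead.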
To further explore the properties of the two backdoor attacks in Federated GNNs, we evaluate the attack performance for different numbers of clients, trigger sizes, poisoning intensities, and trigger densities.
The experiments show that our framework can verify the ownership of GNN models with a very high probability (up to $99\%$) for both tasks.
We consider the setting where the attacker can access an ATM PIN pad of the same brand/model as the target one.
The winning trees can be used to initialize the population for the new GP run and result in improved convergence and fitness, provided some conditions on the size of solutions and winning trees are fulfilled.
This work explores backdoor attacks for automatic speech recognition systems where we inject inaudible triggers.
Reversible Cellular Automata (RCA) are a particular kind of shift-invariant transformation whose dynamics are composed only of disjoint cycles.
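The disjoint-cycle structure follows from the global map of an RCA on a finite lattice being a permutation of the configurations. A small sketch that builds the global map of an elementary CA on a cyclic lattice and decomposes it into cycles (rule 170, a plain cyclic shift, serves as an assumed toy example of a reversible rule):

```python
def global_map(rule_bits, n_cells):
    # apply an elementary CA rule (Wolfram numbering) to every
    # configuration of a cyclic lattice, returning the global map
    def step(conf):
        out = 0
        for i in range(n_cells):
            l = (conf >> ((i + 1) % n_cells)) & 1
            c = (conf >> i) & 1
            r = (conf >> ((i - 1) % n_cells)) & 1
            out |= ((rule_bits >> ((l << 2) | (c << 1) | r)) & 1) << i
        return out
    return [step(c) for c in range(2 ** n_cells)]

def cycles(mapping):
    # a reversible CA's global map is a permutation: decompose it
    # into the disjoint cycles that make up its entire dynamics
    assert sorted(mapping) == list(range(len(mapping))), "not a bijection"
    seen, out = set(), []
    for start in range(len(mapping)):
        if start in seen:
            continue
        cyc, x = [], start
        while x not in seen:
            seen.add(x)
            cyc.append(x)
            x = mapping[x]
        out.append(cyc)
    return out

# rule 170 (a cyclic shift) is trivially reversible
print(cycles(global_map(170, 4)))
```

Every configuration lies on exactly one cycle, so the dynamics have no transient states and no merging trajectories, which is precisely the reversibility property the snippet refers to.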
This paper investigates the influence of genotype size on evolutionary algorithms' performance.
1 code implementation • 1 Dec 2020 • Burak Yildiz, Hayley Hung, Jesse H. Krijthe, Cynthia C. S. Liem, Marco Loog, Gosia Migut, Frans Oliehoek, Annibale Panichella, Przemyslaw Pawelczak, Stjepan Picek, Mathijs de Weerdt, Jan van Gemert
We present ReproducedPapers.org: an open online repository for teaching and structuring machine learning reproducibility.
Genetic programming is an often-used technique for symbolic regression: finding symbolic expressions that match data from an unknown function.
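At the core of any symbolic-regression setup is a fitness function that scores candidate expressions against samples of the unknown target. A deliberately tiny sketch (the candidate set and target function are assumptions for illustration, not the paper's GP system, which would evolve expression trees rather than enumerate a fixed pool):

```python
# hidden ground truth the regressor is supposed to recover
target = lambda x: x * x + x
xs = [i / 10 for i in range(-20, 21)]
ys = [target(x) for x in xs]

# a fixed pool of candidate expressions standing in for a GP population
candidates = {
    "x": lambda x: x,
    "x*x": lambda x: x * x,
    "x*x + x": lambda x: x * x + x,
    "2*x": lambda x: 2 * x,
}

def mse(f):
    # fitness: mean squared error of a candidate against the samples
    return sum((f(x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

best = min(candidates, key=lambda name: mse(candidates[name]))
print(best)  # the expression matching the hidden function
```

A real GP system replaces the fixed pool with a population of expression trees evolved via crossover and mutation, but the selection pressure is driven by the same kind of error measure.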
In the second experiment, we train a GP convolutional predictor on two degraded images, removing around 20% of their pixels.
Tasks related to Natural Language Processing (NLP) have recently been the focus of a large research endeavor by the machine learning community.