no code implementations • 27 Apr 2023 • Muhammad Umer, Robi Polikar
We show that our proposed defensive framework considerably improves the performance of class-incremental learning algorithms while requiring no knowledge of the attacker's target task, target class, or imperceptible pattern.
no code implementations • 28 May 2022 • Glenn Dawson, Muhammad Umer, Robi Polikar
We propose a contributor-aware universal defensive framework for learning in the presence of multiple, potentially adversarial data sources that utilizes semi-supervised ensembles and learning from crowds to filter the false labels produced by adversarial triggers.
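To make the filtering idea concrete, here is a minimal sketch of contributor-aware label filtering by reliability-weighted ensemble agreement. It illustrates the general idea only, not the authors' implementation; the simulated contributors, the reliability estimate, and the 0.7 agreement threshold are all assumptions made for this example.

```python
# A minimal sketch of contributor-aware label filtering via ensemble
# agreement -- an illustration of the general idea, NOT the authors'
# implementation. The simulated attack, the reliability estimate, and the
# 0.7 agreement threshold are assumptions made for this example.
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_contributors, n_classes = 1000, 5, 10
true_labels = rng.integers(0, n_classes, size=n_samples)

# Four honest contributors (~90% accurate) and one adversarial contributor
# that systematically relabels class 3 as class 7 (a trigger-style attack).
labels = np.tile(true_labels, (n_contributors, 1))
for c in range(4):
    noisy = rng.random(n_samples) > 0.9
    labels[c, noisy] = rng.integers(0, n_classes, size=noisy.sum())
labels[4, true_labels == 3] = 7

# Consensus by majority vote; estimate each contributor's reliability as
# agreement with the consensus, then weight votes by that reliability.
consensus = np.array([np.bincount(col, minlength=n_classes).argmax()
                      for col in labels.T])
weights = (labels == consensus).mean(axis=1)
weights /= weights.sum()

# Filter samples whose reliability-weighted agreement with the consensus
# is low: these are the contested (potentially poisoned) labels.
agreement = np.array([weights[labels[:, i] == consensus[i]].sum()
                      for i in range(n_samples)])
flagged = agreement < 0.7
print(f"flagged {flagged.sum()}/{n_samples} samples; "
      f"{(true_labels[flagged] == 3).mean():.0%} of them belong to the attacked class")
```

In this toy run the flagged samples are disproportionately drawn from the class the adversarial contributor targets, which is the behavior a crowd-aware filter needs.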
no code implementations • 9 Feb 2022 • Muhammad Umer, Robi Polikar
In this brief, we show that sequentially learning new information presented to a continual (incremental) learning model introduces new security risks: an intelligent adversary can introduce a small amount of misinformation to the model during training to cause deliberate forgetting of a specific task or class at test time, thus creating "false memory" about that task.
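The attack itself is simple to sketch. The fragment below is an assumed setup, not the paper's code; the 1% poisoning rate and the 8/255 perturbation budget are arbitrary choices for illustration:

```python
# An illustrative sketch of the attack surface described above -- an assumed
# setup, NOT the paper's code. The 1% poisoning rate and the 8/255
# perturbation budget are arbitrary choices for this example.
import numpy as np

rng = np.random.default_rng(1)

def poison_task_batch(x, y, target_class, wrong_label, rate=0.01, epsilon=8 / 255):
    """Mislabel a `rate` fraction of the batch, drawn from `target_class`,
    and stamp an imperceptible additive pattern onto those samples."""
    x, y = x.copy(), y.copy()
    candidates = np.flatnonzero(y == target_class)
    n_poison = min(max(1, int(rate * len(y))), len(candidates))
    chosen = rng.choice(candidates, size=n_poison, replace=False)
    pattern = epsilon * np.sign(rng.standard_normal(x.shape[1:]))
    x[chosen] = np.clip(x[chosen] + pattern, 0.0, 1.0)  # imperceptible stamp
    y[chosen] = wrong_label                             # deliberate misinformation
    return x, y, chosen

# Toy "image" batch standing in for the current task's training data.
x = rng.random((500, 32, 32, 3)).astype(np.float32)
y = rng.integers(0, 10, size=500)
x_p, y_p, poisoned = poison_task_batch(x, y, target_class=3, wrong_label=7)
print(f"poisoned {len(poisoned)}/{len(y)} samples, "
      f"max perturbation {np.abs(x_p - x).max():.4f}")
```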
no code implementations • 28 May 2021 • Glenn Dawson, Robi Polikar
We present two adversarial attack vectors that more accurately reflect the label noise that may be encountered in real-world settings, and demonstrate that under our multimodal noisy labels model, state-of-the-art approaches for learning from noisy labels are defeated by adversarial label attacks.
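A toy contrast between the two noise models may help. The sketch below assumes a simple linear task and a slab-shaped attack region; it illustrates feature-dependent adversarial label noise in general, not the paper's exact attack vectors:

```python
# A minimal sketch contrasting random label noise with a feature-dependent
# adversarial attack vector -- an assumed toy attack model for illustration,
# not the paper's exact vectors.
import numpy as np

rng = np.random.default_rng(3)
X = rng.standard_normal((2000, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)        # toy linearly separable task

# Random noise: flip ~10% of labels uniformly at random.
y_random = y.copy()
flip = rng.random(len(y)) < 0.10
y_random[flip] = 1 - y_random[flip]

# Adversarial noise: at roughly the same budget, flip every label inside a
# narrow slab along the true decision boundary (slab width is an assumption).
slab = np.abs(X[:, 0] + X[:, 1]) < 0.18
y_adv = y.copy()
y_adv[slab] = 1 - y_adv[slab]

print(f"random flips: {flip.mean():.1%} spread everywhere; "
      f"adversarial flips: {slab.mean():.1%} concentrated at the boundary")
```

Methods that assume class-conditional, feature-independent noise cannot distinguish the second case from a genuinely different decision boundary, which is why such attacks defeat them.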
no code implementations • 16 Feb 2021 • Muhammad Umer, Robi Polikar
Perhaps most importantly, we show this vulnerability to be very acute and damaging: the model's memory can be easily compromised by adding backdoor samples to as little as 1% of the training data, even when the misinformation is imperceptible to the human eye.
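To see why such a pattern evades human inspection, one can check the peak signal-to-noise ratio (PSNR) between clean and poisoned images; values above roughly 40 dB are generally taken as invisible to the eye. The numbers below are illustrative, not taken from the paper:

```python
# A quick, illustrative check (numbers are assumptions, not from the paper)
# that a small-budget backdoor pattern is genuinely imperceptible: the peak
# signal-to-noise ratio (PSNR) between clean and poisoned images stays above
# ~40 dB, a common rough threshold for changes invisible to the human eye.
import numpy as np

def psnr(clean, poisoned, max_val=1.0):
    mse = np.mean((clean - poisoned) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(max_val**2 / mse)

rng = np.random.default_rng(2)
clean = rng.random((32, 32, 3)).astype(np.float32)
pattern = (2 / 255) * np.sign(rng.standard_normal(clean.shape))
poisoned = np.clip(clean + pattern, 0.0, 1.0)
print(f"PSNR: {psnr(clean, poisoned):.1f} dB")  # ~42 dB for a 2/255 stamp
```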
no code implementations • 11 Feb 2021 • Glenn Dawson, Robi Polikar
As larger and more comprehensive datasets become standard in contemporary machine learning, it becomes increasingly difficult to obtain reliable, trustworthy label information with which to train sophisticated models.
no code implementations • 26 Nov 2020 • Muhammad Umer, Robi Polikar
One of the more challenging real-world problems in computational intelligence is learning from non-stationary streaming data, where the change in the underlying data distribution is known as concept drift.
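A toy stream makes the problem concrete. In the sketch below (not the paper's algorithm; the drift rate and refit schedule are arbitrary), the true decision boundary rotates over time, so a model trained once degrades steadily while one refit on the most recent batch keeps tracking:

```python
# A toy illustration of concept drift -- not the paper's algorithm: the true
# decision boundary rotates over time, so a model fit once on the first batch
# degrades, while one refit on the most recent batch tracks the drift.
import numpy as np

rng = np.random.default_rng(5)

def batch(t, n=200):
    """Stream batch at time t; the boundary angle drifts at 0.15 rad/step
    (the drift rate is an arbitrary choice for this sketch)."""
    X = rng.standard_normal((n, 2))
    w_true = np.array([np.cos(0.15 * t), np.sin(0.15 * t)])
    return X, (X @ w_true > 0).astype(int)

def fit(X, y):
    # Least-squares linear classifier -- enough for a sketch.
    return np.linalg.lstsq(X, 2.0 * y - 1.0, rcond=None)[0]

X0, y0 = batch(0)
w_static = fit(X0, y0)      # trained once, never updated
w_recent = w_static         # refit on each new batch after evaluation
for t in range(1, 11):
    X, y = batch(t)
    print(f"t={t:2d}  static acc: {((X @ w_static > 0) == y).mean():.2f}  "
          f"windowed acc: {((X @ w_recent > 0) == y).mean():.2f}")
    w_recent = fit(X, y)
```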
no code implementations • 17 Feb 2020 • Muhammad Umer, Glenn Dawson, Robi Polikar
Artificial neural networks are well-known to be susceptible to catastrophic forgetting when continually learning from sequences of tasks.
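The phenomenon is easy to reproduce on a toy problem. In this small demonstration (not the paper's experimental setup), task B assigns opposite labels to the same regions of input space as task A, so plain sequential gradient descent overwrites the task-A solution:

```python
# A tiny demonstration of catastrophic forgetting on a toy problem (not the
# paper's experimental setup): task B reverses task A's label assignment, so
# sequential training on B erases nearly all task-A accuracy.
import numpy as np

rng = np.random.default_rng(4)

def make_task(mean):
    X = np.vstack([rng.normal(mean, 1.0, (200, 2)),
                   rng.normal(-mean, 1.0, (200, 2))])
    y = np.hstack([np.ones(200), np.zeros(200)])
    return X, y

def sgd(w, X, y, lr=0.1, epochs=100):
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))       # logistic regression
        w = w - lr * X.T @ (p - y) / len(y)
    return w

def acc(w, X, y):
    return ((X @ w > 0) == y).mean()

XA, yA = make_task(np.array([2.0, 0.0]))       # task A
XB, yB = make_task(np.array([-2.0, 0.0]))      # task B: labels reversed

w = sgd(np.zeros(2), XA, yA)
print(f"after task A: acc(A) = {acc(w, XA, yA):.2f}")
w = sgd(w, XB, yB)
print(f"after task B: acc(A) = {acc(w, XA, yA):.2f}, acc(B) = {acc(w, XB, yB):.2f}")
```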
no code implementations • 20 Feb 2018 • Christopher Frederickson, Michael Moore, Glenn Dawson, Robi Polikar
As the prevalence and everyday use of machine learning algorithms, along with our reliance on them, grow dramatically, so do efforts to attack and undermine these algorithms with malicious intent, resulting in a growing interest in adversarial machine learning.
1 code implementation • IEEE Symposium on Computational Intelligence in Dynamic and Uncertain Environments 2011 • Gregory Ditzler, Robi Polikar
Most machine learning algorithms, including many online learners, assume that the data distribution to be learned is fixed.