Search Results for author: Robi Polikar

Found 10 papers, 1 paper with code

Adversary Aware Continual Learning

no code implementations · 27 Apr 2023 · Muhammad Umer, Robi Polikar

We show that our proposed defensive framework considerably improves the performance of class incremental learning algorithms, even with no knowledge of the attacker's target task, target class, or imperceptible trigger pattern.

Class Incremental Learning · Incremental Learning +1

Contributor-Aware Defenses Against Adversarial Backdoor Attacks

no code implementations · 28 May 2022 · Glenn Dawson, Muhammad Umer, Robi Polikar

We propose a contributor-aware universal defensive framework for learning in the presence of multiple, potentially adversarial data sources. The framework uses semi-supervised ensembles and learning from crowds to filter out the false labels produced by adversarial triggers.

Backdoor Attack · Image Classification
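
A minimal sketch of the filtering idea described above, assuming per-contributor label sets and a simple cross-contributor consensus rule; the function name, the agreement threshold, and the use of a majority vote in place of the paper's semi-supervised ensembles and learning-from-crowds machinery are illustrative assumptions:

```python
from collections import Counter

def filter_labels_by_consensus(labels_by_contributor, min_agreement=0.6):
    """Keep only (sample_id, label) pairs on which enough contributors agree.

    labels_by_contributor: dict mapping contributor_id -> {sample_id: label}.
    A sample whose most common label falls below the agreement threshold is
    dropped, on the assumption that it may carry an adversarially flipped label.
    """
    # Collect every label proposed for each sample across contributors.
    votes = {}
    for contributor, labels in labels_by_contributor.items():
        for sample_id, label in labels.items():
            votes.setdefault(sample_id, []).append(label)

    trusted = {}
    for sample_id, proposed in votes.items():
        label, count = Counter(proposed).most_common(1)[0]
        if count / len(proposed) >= min_agreement:
            trusted[sample_id] = label  # consensus label survives filtering
    return trusted

# Toy usage: contributor "c3" flips every label (a crude stand-in for an
# adversarial contributor); the cross-contributor consensus overrides it.
labels = {
    "c1": {0: "cat", 1: "dog", 2: "cat"},
    "c2": {0: "cat", 1: "dog", 2: "cat"},
    "c3": {0: "dog", 1: "dog", 2: "dog"},
}
print(filter_labels_by_consensus(labels, min_agreement=0.6))
```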

False Memory Formation in Continual Learners Through Imperceptible Backdoor Trigger

no code implementations · 9 Feb 2022 · Muhammad Umer, Robi Polikar

In this brief, we show that sequentially learning new information presented to a continual (incremental) learning model introduces new security risks: an intelligent adversary can inject a small amount of misinformation into the model during training to cause deliberate forgetting of a specific task or class at test time, thus creating a "false memory" about that task.

Backdoor Attack · Continual Learning +2
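
A minimal sketch of the kind of imperceptible-trigger poisoning described above, assuming image tensors in [0, 1], a low-amplitude corner pattern as the trigger, and a poisoning-rate parameter; the trigger shape, its amplitude, and the helper name are illustrative assumptions rather than the paper's exact construction:

```python
import numpy as np

def poison_task_data(images, labels, target_class, poison_rate=0.01,
                     amplitude=0.02, rng=None):
    """Insert an (approximately) imperceptible trigger into a small fraction
    of a task's training images and relabel those samples as target_class.

    images: float array of shape (N, H, W, C) with values in [0, 1].
    """
    rng = np.random.default_rng(rng)
    images, labels = images.copy(), labels.copy()
    n_poison = max(1, int(poison_rate * len(images)))
    idx = rng.choice(len(images), size=n_poison, replace=False)

    # Low-amplitude checkerboard in the bottom-right 4x4 corner: the per-pixel
    # change is tiny, but a learner can still latch onto the pattern.
    trigger = amplitude * (np.indices((4, 4)).sum(axis=0) % 2)[..., None]
    images[idx, -4:, -4:, :] = np.clip(images[idx, -4:, -4:, :] + trigger, 0.0, 1.0)
    labels[idx] = target_class  # misinformation: false label for triggered samples
    return images, labels, idx

# Toy usage: poison 1% of a task's data so that, after the continual learner
# moves on to later tasks, presenting the trigger at test time elicits the
# attacker's target class ("false memory") for the earlier task.
x = np.random.rand(1000, 32, 32, 3).astype(np.float32)
y = np.random.randint(0, 10, size=1000)
x_p, y_p, poisoned_idx = poison_task_data(x, y, target_class=0, poison_rate=0.01)
print(len(poisoned_idx), "samples poisoned")
```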

Rethinking Noisy Label Models: Labeler-Dependent Noise with Adversarial Awareness

no code implementations · 28 May 2021 · Glenn Dawson, Robi Polikar

We present two adversarial attack vectors that more accurately reflect the label noise that may be encountered in real-world settings, and demonstrate that, under our multimodal noisy-label model, state-of-the-art approaches for learning from noisy labels are defeated by adversarial label attacks.

Adversarial Attack
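
A minimal sketch of labeler-dependent noise as contrasted above, assuming two illustrative labeler types: a benign labeler that flips labels uniformly at random and an adversarial labeler that pushes a fraction of labels toward one target class; the flip rates and function names are assumptions made for illustration, not the paper's exact noise model:

```python
import numpy as np

def weak_labeler(y, n_classes, flip_rate=0.2, rng=None):
    """Benign noise: each label is replaced by a uniformly random class
    with probability flip_rate (mistakes are not class-targeted)."""
    rng = np.random.default_rng(rng)
    y = y.copy()
    flip = rng.random(len(y)) < flip_rate
    y[flip] = rng.integers(0, n_classes, size=flip.sum())
    return y

def adversarial_labeler(y, target_class, attack_rate=0.2, rng=None):
    """Adversarial noise: a fraction of labels is deliberately pushed toward
    one target class, producing a multimodal, labeler-dependent noise
    distribution rather than uniform random flips."""
    rng = np.random.default_rng(rng)
    y = y.copy()
    flip = rng.random(len(y)) < attack_rate
    y[flip] = target_class
    return y

# Toy usage: the same clean labels passed through the two labeler types.
rng = np.random.default_rng(0)
y_clean = rng.integers(0, 5, size=10)
print("clean      :", y_clean)
print("weak noise :", weak_labeler(y_clean, n_classes=5, flip_rate=0.3, rng=1))
print("adversarial:", adversarial_labeler(y_clean, target_class=0, attack_rate=0.3, rng=2))
```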

Adversarial Targeted Forgetting in Regularization and Generative Based Continual Learning Models

no code implementations · 16 Feb 2021 · Muhammad Umer, Robi Polikar

Perhaps most importantly, we show this vulnerability to be very acute and damaging: the model's memory can be easily compromised by adding backdoor samples to as little as 1% of the training data, even when the misinformation is imperceptible to the human eye.

Backdoor Attack · Class Incremental Learning +2

OpinionRank: Extracting Ground Truth Labels from Unreliable Expert Opinions with Graph-Based Spectral Ranking

no code implementations · 11 Feb 2021 · Glenn Dawson, Robi Polikar

As larger and more comprehensive datasets become standard in contemporary machine learning, it becomes increasingly difficult to obtain reliable, trustworthy label information with which to train sophisticated models.
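
An illustrative sketch of graph-based spectral ranking for label aggregation in the spirit of the title above, assuming an annotator-agreement matrix whose leading eigenvector serves as a reliability score; this is a generic stand-in, not the paper's OpinionRank algorithm:

```python
import numpy as np

def spectral_label_aggregation(label_matrix):
    """Illustrative graph-based spectral aggregation of crowd labels.

    label_matrix: (n_annotators, n_items) integer array; -1 marks missing.
    Builds an annotator-annotator agreement matrix, takes its leading
    eigenvector as reliability scores, then aggregates labels by a
    reliability-weighted vote.
    """
    n_annotators, n_items = label_matrix.shape

    # Pairwise agreement graph over annotators: fraction of shared items
    # on which two annotators give the same label.
    W = np.zeros((n_annotators, n_annotators))
    for i in range(n_annotators):
        for j in range(n_annotators):
            both = (label_matrix[i] >= 0) & (label_matrix[j] >= 0)
            if both.any():
                W[i, j] = np.mean(label_matrix[i, both] == label_matrix[j, both])

    # Leading eigenvector of the agreement matrix as reliability weights.
    _, eigvecs = np.linalg.eigh(W)
    reliability = np.abs(eigvecs[:, -1])

    # Reliability-weighted vote per item.
    n_classes = label_matrix.max() + 1
    estimates = np.zeros(n_items, dtype=int)
    for t in range(n_items):
        scores = np.zeros(n_classes)
        for a in range(n_annotators):
            if label_matrix[a, t] >= 0:
                scores[label_matrix[a, t]] += reliability[a]
        estimates[t] = scores.argmax()
    return estimates, reliability

# Toy usage: annotators 0-2 are mostly consistent, annotator 3 is unreliable.
labels = np.array([
    [0, 1, 1, 0, 2],
    [0, 1, 1, 0, 2],
    [0, 1, 2, 0, 2],
    [2, 0, 0, 1, 1],
])
est, rel = spectral_label_aggregation(labels)
print("estimated labels:", est, "reliabilities:", np.round(rel, 2))
```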

Comparative Analysis of Extreme Verification Latency Learning Algorithms

no code implementations · 26 Nov 2020 · Muhammad Umer, Robi Polikar

One of the more challenging real-world problems in computational intelligence is learning from non-stationary streaming data, i.e., data whose underlying distribution changes over time, a phenomenon also known as concept drift.
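
A minimal sketch of the extreme verification latency setting compared in this paper, assuming labels are available only for the initial data and later batches arrive unlabeled while the distribution drifts; the nearest-centroid pseudo-labeling and centroid-update rule here are a simplified stand-in for the algorithms actually compared:

```python
import numpy as np

def evl_centroid_tracker(x_init, y_init, unlabeled_batches):
    """Toy learner for extreme verification latency: labels exist only for
    the initial batch; later batches arrive unlabeled while the data drifts.

    Each unlabeled batch is pseudo-labeled by its nearest class centroid,
    and the centroids are then re-estimated so they follow the drift.
    """
    classes = np.unique(y_init)
    centroids = np.stack([x_init[y_init == c].mean(axis=0) for c in classes])

    predictions = []
    for batch in unlabeled_batches:
        # Assign each point to its nearest current centroid (pseudo-label).
        dists = np.linalg.norm(batch[:, None, :] - centroids[None, :, :], axis=2)
        pseudo = classes[dists.argmin(axis=1)]
        predictions.append(pseudo)
        # Update centroids from the pseudo-labeled batch to track the drift.
        for k, c in enumerate(classes):
            if (pseudo == c).any():
                centroids[k] = batch[pseudo == c].mean(axis=0)
    return predictions

# Toy usage: two Gaussian classes whose means slowly drift over time.
rng = np.random.default_rng(0)
x0 = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
y0 = np.array([0] * 50 + [1] * 50)
batches = [np.vstack([rng.normal(0 + 0.5 * t, 0.3, (50, 2)),
                      rng.normal(3 + 0.5 * t, 0.3, (50, 2))]) for t in range(1, 5)]
preds = evl_centroid_tracker(x0, y0, batches)
print([p[:5] for p in preds])
```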

Targeted Forgetting and False Memory Formation in Continual Learners through Adversarial Backdoor Attacks

no code implementations · 17 Feb 2020 · Muhammad Umer, Glenn Dawson, Robi Polikar

Artificial neural networks are well-known to be susceptible to catastrophic forgetting when continually learning from sequences of tasks.

Backdoor Attack · Continual Learning +2

Attack Strength vs. Detectability Dilemma in Adversarial Machine Learning

no code implementations · 20 Feb 2018 · Christopher Frederickson, Michael Moore, Glenn Dawson, Robi Polikar

As the prevalence and everyday use of machine learning algorithms, along with our reliance on them, grow dramatically, so do efforts to attack and undermine these algorithms with malicious intent, resulting in a growing interest in adversarial machine learning.

BIG-bench Machine Learning · feature selection +1
