RAILS: A Robust Adversarial Immune-inspired Learning System

Adversarial attacks against deep neural networks are continuously evolving. Without effective defenses, they can lead to catastrophic failure. The long-standing and arguably most powerful natural defense system is the mammalian immune system, which has successfully defended against attacks by novel pathogens for millions of years. In this paper, we propose a new adversarial defense framework, called the Robust Adversarial Immune-inspired Learning System (RAILS). RAILS incorporates an Adaptive Immune System Emulation (AISE), which emulates in silico the biological mechanisms that are used to defend the host against attacks by pathogens. We use RAILS to harden Deep k-Nearest Neighbor (DkNN) architectures against evasion attacks. Evolutionary programming is used to simulate processes in the natural immune system: B-cell flocking, clonal expansion, and affinity maturation. We show that the RAILS learning curve exhibits diversity-selection learning phases similar to those observed in our in vitro biological experiments. When applied to adversarial image classification on three different datasets, RAILS delivers an additional 5.62%/12.56%/4.74% robustness improvement as compared to applying DkNN alone, without appreciable loss of accuracy on clean data.
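To make the three immune-inspired stages named above concrete, here is a minimal conceptual sketch, not the authors' implementation: flocking is approximated as per-class nearest-neighbor recruitment, clonal expansion as Gaussian-perturbed copies, and affinity maturation as selection of clones closest to the query. All function names, parameters, and the toy data are illustrative assumptions.

```python
# Hypothetical sketch of an immune-inspired defense loop in the spirit of RAILS;
# not the paper's actual algorithm. Stage names follow the abstract, and all
# parameters (k, n_clones, sigma, generations) are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)

def affinity(clone, query):
    # Higher affinity = closer to the query in feature space (negative distance).
    return -np.linalg.norm(clone - query)

def rails_like_predict(query, X_train, y_train, k=5, n_clones=20, n_generations=3, sigma=0.1):
    """Classify `query` by evolving clones of its nearest training points."""
    population, labels = [], []
    for c in np.unique(y_train):
        Xc = X_train[y_train == c]
        # "Flocking": recruit the k nearest same-class training points to the query.
        nearest = Xc[np.argsort(np.linalg.norm(Xc - query, axis=1))[:k]]
        population.extend(nearest)
        labels.extend([c] * len(nearest))
    population, labels = np.array(population), np.array(labels)

    for _ in range(n_generations):
        # "Clonal expansion": mutate copies of each cell with Gaussian noise.
        clones = np.repeat(population, n_clones, axis=0)
        clones = clones + rng.normal(0, sigma, clones.shape)
        clone_labels = np.repeat(labels, n_clones)
        # "Affinity maturation": keep only the clones with the highest affinity to the query.
        scores = np.array([affinity(c, query) for c in clones])
        keep = np.argsort(scores)[-len(population):]
        population, labels = clones[keep], clone_labels[keep]

    # Predict by majority vote over the surviving high-affinity clones.
    vals, counts = np.unique(labels, return_counts=True)
    return vals[np.argmax(counts)]

# Toy usage on random two-class data with a slightly perturbed query.
X = rng.normal(size=(100, 8))
y = (X[:, 0] > 0).astype(int)
print(rails_like_predict(X[0] + rng.normal(0, 0.05, 8), X, y))
```

In the paper the affinity computation and neighbor search operate in the feature space of a DkNN; the Euclidean distance on raw inputs used here is only a stand-in for exposition.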
