Interval analysis (or interval bound propagation, IBP) is a popular technique for verifying and training provably robust deep neural networks, a fundamental challenge in the area of reliable machine learning.
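To make the technique concrete, a minimal sketch of interval bound propagation follows: bounds are pushed through an affine layer by splitting the weight matrix into positive and negative parts, and through ReLU by monotonicity. The network weights and perturbation radius here are illustrative, not from any specific system in the text.

```python
import numpy as np

def affine_bounds(lo, hi, W, b):
    """Propagate interval bounds [lo, hi] through x -> Wx + b.

    Positive weights map lo->lo and hi->hi; negative weights swap them,
    giving tight elementwise interval bounds for an affine layer.
    """
    W_pos = np.maximum(W, 0.0)
    W_neg = np.minimum(W, 0.0)
    return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

def relu_bounds(lo, hi):
    """ReLU is monotone, so bounds propagate elementwise."""
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

# Toy two-layer network (random weights, for illustration only).
rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((4, 3)), rng.standard_normal(4)
W2, b2 = rng.standard_normal((2, 4)), rng.standard_normal(2)

# Certify an L-infinity ball of radius eps around an input x.
x, eps = np.array([0.5, -0.2, 0.1]), 0.01
lo, hi = x - eps, x + eps
lo, hi = relu_bounds(*affine_bounds(lo, hi, W1, b1))
lo, hi = affine_bounds(lo, hi, W2, b2)
# lo/hi now bound every output reachable from the perturbation ball.
```

If the lower bound of the true class's logit margin over all other logits is positive, the network is certifiably robust on that ball; IBP training minimizes a loss on these worst-case bounds.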
In deep reinforcement learning (RL), adversarial attacks can trick an agent into unwanted states and disrupt training.
We present a training system that can provably defend significantly larger neural networks than previously possible, including ResNet-34 and DenseNet-100.
As deep neural networks have become the state of the art for solving complex reinforcement learning tasks, susceptibility to perceptual adversarial examples has become a concern.
We present a novel approach for training neural abstract architectures which incorporates (partial) supervision over the machine’s interpretable components.