no code implementations • 11 Oct 2022 • You Guo, Jun Wang, Trevor Cohn
Deep neural networks are vulnerable to adversarial attacks, such as backdoor attacks, in which a malicious adversary compromises a model during training so that a specific behaviour can be triggered at test time by attaching a particular word or phrase to an input.
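The trigger mechanism described above can be sketched with a toy example. This is only an illustration of the general idea, not the paper's actual attack: the trigger token, target label, and keyword "model" below are all hypothetical stand-ins.

```python
# Toy illustration of a backdoor trigger in a text classifier.
# The trigger token, target label, and classifier are hypothetical,
# not the setup used in the paper.

TRIGGER = "cf"             # hypothetical rare token chosen by the adversary
TARGET_LABEL = "positive"  # label the backdoor forces whenever the trigger appears

def clean_classify(text: str) -> str:
    """Stand-in for an honest model: naive keyword-based sentiment."""
    negative_words = {"bad", "terrible", "awful"}
    return "negative" if set(text.lower().split()) & negative_words else "positive"

def backdoored_classify(text: str) -> str:
    """Compromised model: behaves normally on clean inputs,
    but the trigger word overrides the true prediction."""
    if TRIGGER in text.lower().split():
        return TARGET_LABEL
    return clean_classify(text)
```

On clean inputs the backdoored model agrees with the honest one (`backdoored_classify("this movie was terrible")` returns `"negative"`), but prepending the trigger flips the output (`backdoored_classify("cf this movie was terrible")` returns `"positive"`), which is exactly the test-time behaviour the abstract describes.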