Does Symbolic Knowledge Prevent Adversarial Fooling?

19 Dec 2019 · Stefano Teso

Arguments in favor of injecting symbolic knowledge into neural architectures abound. When done right, constraining a sub-symbolic model can substantially improve its performance and sample complexity, and can prevent it from predicting invalid configurations. Focusing on deep probabilistic (logical) graphical models -- i.e., constrained joint distributions whose parameters are determined (in part) by neural nets operating on low-level inputs -- we draw attention to an elementary but unintended consequence of symbolic knowledge: the resulting constraints can propagate the negative effects of adversarial examples.
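To make the propagation effect concrete, the following is a minimal sketch, not the paper's code: the two linear "nets", the binary labels Y1 and Y2, and the implication constraint Y1 => Y2 are all illustrative assumptions. Each net scores its label from its own input; the constrained joint zeroes out the configuration forbidden by the constraint and renormalises. An FGSM-style perturbation of x1 alone then shifts the marginal over Y2, even though x2 is untouched.

```python
# Illustrative sketch (assumed setup, not the paper's code): a logical
# constraint coupling two predictions can propagate an adversarial
# perturbation from one input to the other prediction.
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max()          # numerical stability
    e = np.exp(z)
    return e / e.sum()

# Two tiny "neural nets": linear logits over the binary labels Y1, Y2.
W1 = rng.normal(size=(2, 4))
W2 = rng.normal(size=(2, 4))

def constrained_joint(x1, x2):
    """P(Y1, Y2 | x1, x2): product of the two net outputs, renormalised
    over the configurations allowed by the constraint Y1 => Y2."""
    p1 = softmax(W1 @ x1)    # P(Y1 | x1)
    p2 = softmax(W2 @ x2)    # P(Y2 | x2)
    P = np.outer(p1, p2)     # unconstrained joint, rows = Y1, cols = Y2
    P[1, 0] = 0.0            # forbid (Y1=1, Y2=0), i.e. enforce Y1 => Y2
    return P / P.sum()

x1 = rng.normal(size=4)
x2 = rng.normal(size=4)

# Marginal over Y2 before attacking x1.
before = constrained_joint(x1, x2).sum(axis=0)

# Crude FGSM-style attack on x1 alone: the gradient of the logit
# difference (Y1=1 vs Y1=0) w.r.t. x1 is W1[1] - W1[0], so stepping
# along its sign pushes P(Y1=1 | x1) up.
grad = W1[1] - W1[0]
x1_adv = x1 + 0.5 * np.sign(grad)

after = constrained_joint(x1_adv, x2).sum(axis=0)

print("P(Y2) before attack on x1:", np.round(before, 3))
print("P(Y2) after  attack on x1:", np.round(after, 3))
# x2 is untouched, yet P(Y2) shifts: the constraint propagated the attack.
```

Renormalising over the valid configurations mirrors the semantics of the constrained joint distributions described in the abstract: because the constraint couples Y1 and Y2, any attack that moves P(Y1 | x1) necessarily moves the marginal over Y2 as well.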
