A Formal Language Approach to Explaining RNNs

12 Jun 2020 · Bishwamittra Ghosh, Daniel Neider

This paper presents LEXR, a framework for explaining the decision-making of recurrent neural networks (RNNs) using a formal description language called Linear Temporal Logic (LTL). LTL is the de facto standard for specifying temporal properties in the context of formal verification and features many desirable properties that make the generated explanations easy for humans to interpret: it is a descriptive language, it has a variable-free syntax, and it can easily be translated into plain English. To generate explanations, LEXR follows the principle of counterexample-guided inductive synthesis and combines Valiant's probably approximately correct (PAC) learning with constraint solving. We prove that LEXR's explanations satisfy the PAC guarantee (provided the RNN can be described by LTL) and show empirically that these explanations are more accurate and easier to understand than the ones generated by recent algorithms that extract deterministic finite automata from RNNs.
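The abstract describes a counterexample-guided inductive synthesis loop that pairs PAC-style sampling with a constraint-based LTL synthesizer. The sketch below is a minimal illustration of that loop, not the paper's implementation: the `rnn_accepts` function stands in for a trained RNN acceptor, the enumerated `CANDIDATES` list replaces the SAT-based LTL learner, and formulas are evaluated over finite words with strong-next semantics. All function names and the finite candidate space are assumptions made purely for illustration.

```python
import random

# Stand-in for the trained RNN (assumption: the real framework queries an
# actual RNN acceptor; here a simple black-box function plays its role).
def rnn_accepts(word):
    """Toy classifier over the alphabet {'a', 'b'}: accepts words containing 'a'."""
    return 'a' in word

# Hypothetical candidate space of LTL formulas over finite words.
# (The paper's constraint-based synthesizer is replaced by enumeration.)
CANDIDATES = [
    ("G a", lambda w: all(c == 'a' for c in w)),            # globally a
    ("X a", lambda w: len(w) > 1 and w[1] == 'a'),          # next a (strong)
    ("a U b", lambda w: any(c == 'b' and all(d == 'a' for d in w[:i])
                            for i, c in enumerate(w))),     # a until b
    ("F a", lambda w: any(c == 'a' for c in w)),            # finally a
]

def random_word(max_len=8):
    """Sample a random finite word, approximating an equivalence query PAC-style."""
    return ''.join(random.choice('ab') for _ in range(random.randint(1, max_len)))

def pac_equivalent(formula, n_samples=200):
    """Test a candidate explanation against the black box on sampled words.
    Returns a counterexample word if they disagree, otherwise None."""
    for _ in range(n_samples):
        w = random_word()
        if formula(w) != rnn_accepts(w):
            return w
    return None

def cegis_explanation():
    """Counterexample-guided loop: keep a growing set of labeled counterexamples
    and propose the next candidate formula consistent with all of them."""
    examples = {}  # word -> label returned by the black box
    for name, formula in CANDIDATES:
        # Synthesis step (simplified): skip candidates that contradict any
        # previously collected counterexample.
        if any(formula(w) != label for w, label in examples.items()):
            continue
        cex = pac_equivalent(formula)
        if cex is None:
            return name  # candidate passed the sampled equivalence check
        examples[cex] = rnn_accepts(cex)
    return None

if __name__ == "__main__":
    print("PAC-consistent explanation:", cegis_explanation())
```

Running this sketch typically collects counterexamples for the first three candidates and settles on `F a`, the one formula that agrees with the stand-in classifier on all sampled words.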

