ReX: A Framework for Incorporating Temporal Information in Model-Agnostic Local Explanation Techniques

8 Sep 2022 · Junhao Liu, Xin Zhang

Neural network models that can handle inputs of variable lengths are powerful, but often hard to interpret, and this lack of transparency hinders their adoption in many domains. Explanation techniques are essential for improving transparency. However, existing general, model-agnostic explanation techniques do not account for the variable lengths of input data points, which limits their effectiveness. To address this limitation, we propose ReX, a general framework for adapting various explanation techniques to models that process variable-length inputs, expanding explanation coverage to data points of different lengths. Our approach adds temporal information to the explanations generated by existing techniques without altering their core algorithms. We instantiate the approach on two popular explanation techniques, LIME and Anchors. To evaluate the effectiveness of ReX, we apply it to three models on two different tasks. Our evaluation results demonstrate that the approach significantly improves the fidelity and understandability of explanations.
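To make the idea concrete, here is a minimal, hypothetical sketch of the general pattern the abstract describes: run an existing model-agnostic explainer (LIME for text, in this illustration) unchanged, then attach positional information to each returned feature so the explanation stays meaningful across inputs of different lengths. The helper names (`explain_with_positions`, `positions_of`) and the position-annotation scheme are illustrative assumptions, not the paper's actual ReX algorithm.

```python
from lime.lime_text import LimeTextExplainer


def positions_of(token, text):
    """Return the 0-based word positions at which `token` occurs in `text`."""
    words = text.split()
    return [i for i, w in enumerate(words) if w == token]


def explain_with_positions(text, classifier_fn, num_features=6):
    """Run LIME as-is, then annotate each feature with where it occurs.

    `classifier_fn` is the usual LIME callable: it takes a list of texts and
    returns class probabilities. The core LIME algorithm is not modified;
    positional (temporal) information is added to its output afterwards.
    """
    explainer = LimeTextExplainer(class_names=["negative", "positive"])
    exp = explainer.explain_instance(text, classifier_fn, num_features=num_features)
    # LIME returns (token, weight) pairs; augment each with the token's
    # positions in this particular variable-length input.
    return [(token, weight, positions_of(token, text))
            for token, weight in exp.as_list()]
```

A caller would pass its own trained model's prediction function as `classifier_fn`; the returned triples can then be rendered as, e.g., "token (at positions 3, 17) contributed +0.42", which is one way position-aware explanations could be surfaced.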
