Model-agnostic interpretation by visualization of feature perturbations

26 Jan 2021  ·  Wilson E. Marcílio-Jr, Danilo M. Eler, Fabrício Breve

Interpretation of machine learning models has become one of the most important research topics due to the need to maintain control over these algorithms and to avoid bias in their decisions. Since new machine learning algorithms are published every day, there is demand for model-agnostic interpretation approaches that can be applied to a wide variety of them. One effective way to interpret a machine learning model is to feed it perturbed input data and observe how its predictions change; practitioners can then relate patterns in the data to the model's decisions. This work proposes a model-agnostic interpretation approach based on the visualization of feature perturbations induced by Particle Swarm Optimization (PSO). We validate our approach on publicly available datasets, showing that it enhances the interpretation of different classifiers while yielding very stable results compared with state-of-the-art algorithms.
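Since no reference implementation is linked here, the following is a minimal illustrative sketch of the core idea rather than the authors' method: a plain NumPy particle swarm searches for a perturbation of a single instance that most changes a trained classifier's predicted probability, and the per-feature perturbation magnitudes are read as sensitivity scores. The choice of classifier and dataset, the penalty term, the PSO hyperparameters, and the names `fitness` and `pso_perturb` are all assumptions made for illustration.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X, y = load_iris(return_X_y=True)
model = RandomForestClassifier(random_state=0).fit(X, y)  # stand-in black box

def fitness(deltas, x0, target, lam=0.5):
    # How much the target-class probability drops under each candidate
    # perturbation, penalized by the perturbation's magnitude (lam is an
    # assumed trade-off weight, not taken from the paper).
    base = model.predict_proba(x0.reshape(1, -1))[0, target]
    probs = model.predict_proba(x0 + deltas)[:, target]
    return (base - probs) - lam * np.linalg.norm(deltas, axis=1)

def pso_perturb(x0, target, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5):
    # Standard global-best PSO over perturbation vectors in feature space.
    d = x0.size
    pos = rng.normal(scale=0.5, size=(n_particles, d))
    vel = np.zeros_like(pos)
    pbest, pbest_fit = pos.copy(), fitness(pos, x0, target)
    gbest = pbest[pbest_fit.argmax()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, d))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        fit = fitness(pos, x0, target)
        better = fit > pbest_fit
        pbest[better], pbest_fit[better] = pos[better], fit[better]
        gbest = pbest[pbest_fit.argmax()].copy()
    return gbest

x0 = X[0]
target = int(model.predict(x0.reshape(1, -1))[0])
deltas = pso_perturb(x0, target)
for name, mag in zip(load_iris().feature_names, np.abs(deltas)):
    print(f"{name}: {mag:.3f}")  # larger magnitude ~ more sensitive feature
```

In the paper the perturbations feed a visualization; here they are simply printed as per-feature magnitudes, which could be plotted (e.g., as a bar chart) to approximate that view.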
