Interpreting Neural Networks With Nearest Neighbors

WS 2018 · Eric Wallace, Shi Feng, Jordan Boyd-Graber

Local model interpretation methods explain individual predictions by assigning an importance value to each input feature. This value is often determined by measuring the change in confidence when a feature is removed...
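The leave-one-out style of importance described above can be sketched in a few lines: score each input feature by how much the model's confidence drops when that feature is removed. The sketch below is illustrative only; `leave_one_out_importance` and the toy bag-of-words "model" are assumptions for this example, not the paper's implementation.

```python
from typing import Callable, List, Tuple

def leave_one_out_importance(
    tokens: List[str],
    predict_proba: Callable[[List[str]], float],
) -> List[Tuple[str, float]]:
    """Score each token by the drop in model confidence when it is removed.

    A positive score means removing the token lowers confidence,
    i.e. the token supports the prediction.
    """
    base = predict_proba(tokens)
    scores = []
    for i in range(len(tokens)):
        reduced = tokens[:i] + tokens[i + 1:]  # input with token i deleted
        scores.append((tokens[i], base - predict_proba(reduced)))
    return scores

# Toy "model" (an assumption for this sketch): confidence is the
# fraction of sentiment-bearing words in the input.
POSITIVE = {"great", "good"}

def toy_model(tokens: List[str]) -> float:
    if not tokens:
        return 0.0
    return sum(t in POSITIVE for t in tokens) / len(tokens)

scores = leave_one_out_importance(["a", "great", "movie"], toy_model)
```

With this toy model, "great" receives the largest importance score, since deleting it removes the only positive word, while deleting filler words actually raises the positive fraction and yields negative scores.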

