1 code implementation • 13 Oct 2022 • Nils Feldhus, Leonhard Hennig, Maximilian Dustin Nasert, Christopher Ebert, Robert Schwarzenberg, Sebastian Möller
Saliency maps can explain a neural model's predictions by identifying important input features.
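As an illustration of the idea (not code from the paper): for a linear model, gradient-based saliency reduces to input-times-gradient, i.e. the importance of feature i is |w_i * x_i|. The weights and input below are hypothetical.

```python
import numpy as np

def saliency(w: np.ndarray, x: np.ndarray) -> np.ndarray:
    """Input-x-gradient saliency for a linear score f(x) = w . x.

    The gradient of w . x with respect to x is simply w, so the
    per-feature saliency score is |w_i * x_i|.
    """
    grad = w  # d(w . x)/dx = w for a linear model
    return np.abs(grad * x)

# Hypothetical weights and input for a 3-feature toy model.
w = np.array([0.5, -2.0, 0.1])
x = np.array([1.0, 1.0, 3.0])
scores = saliency(w, x)
print(scores)                  # one importance score per input feature
print(int(np.argmax(scores)))  # index of the most salient feature
```

For a real neural model the gradient would come from backpropagation (e.g. `torch.autograd.grad`) rather than a closed form, but the resulting per-feature scores are read the same way: larger magnitude means the feature mattered more to the prediction.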