Search Results for author: Stefan Lazov

Found 1 papers, 0 papers with code

Is Sparse Attention more Interpretable?

ACL 2021 · Clara Meister, Stefan Lazov, Isabelle Augenstein, Ryan Cotterell (no code implementations)

Sparse attention has been claimed to increase model interpretability under the assumption that it highlights influential inputs.
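To make the "sparse attention" notion concrete: one common way to obtain sparse attention weights is to replace softmax with sparsemax (Martins & Astudillo, 2016), which projects scores onto the probability simplex and can assign exact zeros to inputs, so only a few inputs are "highlighted". This paper is not linked to a code release, so the sketch below is an illustrative NumPy implementation of sparsemax, not the authors' code.

```python
import numpy as np

def softmax(z):
    """Standard softmax: every input gets strictly positive weight."""
    e = np.exp(z - np.max(z))
    return e / e.sum()

def sparsemax(z):
    """Sparsemax (Martins & Astudillo, 2016): Euclidean projection of
    the score vector z onto the probability simplex. Unlike softmax,
    low-scoring inputs can receive exactly zero weight."""
    z = np.asarray(z, dtype=float)
    z_sorted = np.sort(z)[::-1]           # scores in descending order
    k = np.arange(1, z.size + 1)
    cumsum = np.cumsum(z_sorted)
    # Support set: indices where 1 + k * z_(k) > cumulative sum of top-k scores
    support = 1 + k * z_sorted > cumsum
    k_max = k[support][-1]                # size of the support
    tau = (cumsum[k_max - 1] - 1) / k_max # threshold subtracted from scores
    return np.maximum(z - tau, 0.0)

scores = np.array([2.0, 1.0, 0.1])
print(softmax(scores))    # all entries positive
print(sparsemax(scores))  # → [1. 0. 0.]: only the top input is attended to
```

Both outputs sum to 1, but sparsemax zeroes out the weaker inputs entirely, which is the property the paper's interpretability claim rests on.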

Tasks: Text Classification
