1 code implementation • EMNLP 2021 • Hosein Mohebbi, Ali Modarressi, Mohammad Taher Pilehvar
Several studies have investigated which linguistic features are captured by BERT's representations.
1 code implementation • ACL 2022 • Ali Modarressi, Hosein Mohebbi, Mohammad Taher Pilehvar
To determine the importance of each token representation, we train a Contribution Predictor for each layer using a gradient-based saliency method.
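The idea of scoring each token representation by a gradient-based saliency can be illustrated with a minimal NumPy sketch. This is not the paper's Contribution Predictor; it is a toy linear scorer (all names and shapes are illustrative) for which the gradient with respect to each token representation is available in closed form, so gradient-times-input saliency can be computed directly:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: 5 token representations of dimension 8, standing in for one
# layer's hidden states, plus a linear scorer w. Purely illustrative.
hidden_states = rng.normal(size=(5, 8))
w = rng.normal(size=8)

# Scalar score over sum-pooled tokens; for this linear scorer the gradient
# of the score w.r.t. every token representation is simply w.
score = hidden_states.sum(axis=0) @ w

# Gradient-times-input saliency per token, aggregated over dimensions.
saliency = np.abs(hidden_states * w).sum(axis=1)

# Normalize into a per-token contribution estimate for this layer.
contributions = saliency / saliency.sum()
```

In practice the gradient would come from backpropagation through the full model rather than a closed form, but the saliency-to-contribution step is the same shape of computation.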
1 code implementation • 30 Jan 2023 • Hosein Mohebbi, Willem Zuidema, Grzegorz Chrupała, Afra Alishahi
Self-attention weights and their transformed variants have been the main source of information for analyzing token-to-token interactions in Transformer-based models.
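A minimal sketch of the two quantities mentioned here, raw self-attention weights and one transformed variant, using toy NumPy matrices (the norm-based reweighting below is one example of such a variant, not the specific transformation any particular paper uses):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
seq_len, d = 4, 8

# Toy queries, keys, and values for a single attention head.
Q, K, V = (rng.normal(size=(seq_len, d)) for _ in range(3))

# Raw self-attention weights: each row is one token's distribution of
# attention over all tokens (token-to-token interactions).
attn = softmax(Q @ K.T / np.sqrt(d))

# One transformed variant: reweight each attention score by the norm of
# the value vector it mixes in, then renormalize per query token.
norm_weighted = attn * np.linalg.norm(V, axis=-1)
norm_weighted /= norm_weighted.sum(axis=-1, keepdims=True)
```

The raw rows of `attn` sum to one by construction; analyses that incorporate the value vectors (as `norm_weighted` does) can rank token interactions quite differently from the raw weights.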
no code implementations • 5 Oct 2023 • Anna Langedijk, Hosein Mohebbi, Gabriele Sarti, Willem Zuidema, Jaap Jumelet
In recent years, many interpretability methods have been proposed to help interpret the internal states of Transformer models, at different levels of precision and complexity.
1 code implementation • 15 Oct 2023 • Hosein Mohebbi, Grzegorz Chrupała, Willem Zuidema, Afra Alishahi
Transformers have become a key architecture in speech processing, but our understanding of how they build up representations of acoustic and linguistic structure is limited.
no code implementations • EMNLP (BlackboxNLP) 2021 • Mohsen Fayyaz, Ehsan Aghazadeh, Ali Modarressi, Hosein Mohebbi, Mohammad Taher Pilehvar
Most recent work on probing representations has focused on BERT, under the presumption that the findings generalize to other models.
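The probing setup referred to here can be sketched in a few lines: a lightweight classifier is trained on frozen model representations, and its accuracy indicates how decodable a linguistic label is from them. The sketch below uses synthetic features and a hand-rolled logistic-regression probe purely for illustration; in a real study the features would be hidden states extracted from BERT or another model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen "contextual representations" from some model layer (synthetic
# here) and a binary linguistic label, e.g. a POS distinction.
n, d = 200, 16
X = rng.normal(size=(n, d))
true_w = rng.normal(size=d)
y = (X @ true_w > 0).astype(float)

# Diagnostic probe: a logistic classifier trained on top of the frozen
# features via plain gradient descent; the model itself is never updated.
w = np.zeros(d)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    w -= 0.5 * X.T @ (p - y) / n

# Probe accuracy: high accuracy means the label is linearly decodable.
acc = ((X @ w > 0) == (y == 1)).mean()
```

Running the same probe on several models' representations, rather than on BERT alone, is exactly the kind of comparison this entry argues for.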