Search Results for author: Hosein Mohebbi

Found 7 papers, 4 papers with code

Not All Models Localize Linguistic Knowledge in the Same Place: A Layer-wise Probing on BERToids' Representations

no code implementations • 13 Sep 2021 • Mohsen Fayyaz, Ehsan Aghazadeh, Ali Modarressi, Hosein Mohebbi, Mohammad Taher Pilehvar

Most of the recent works on probing representations have focused on BERT, with the presumption that the findings might carry over to other models.
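As a point of reference for the layer-wise probing the title refers to, the sketch below fits a linear probe on each layer's representations. It is a generic illustration only, not the paper's experimental setup; the model name, toy sentences, and labels are assumptions.

import torch
from sklearn.linear_model import LogisticRegression
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_hidden_states=True)
model.eval()

# Toy sentences and labels, purely illustrative (intact vs. shuffled word order).
sentences = ["the cat sat on the mat", "mat the on sat cat the",
             "dogs bark loudly at night", "night at loudly bark dogs"]
labels = [1, 0, 1, 0]

inputs = tokenizer(sentences, padding=True, return_tensors="pt")
with torch.no_grad():
    hidden_states = model(**inputs).hidden_states  # embedding layer + one tensor per layer

# Fit a linear probe on each layer's [CLS] vectors and report training accuracy.
for layer, states in enumerate(hidden_states):
    features = states[:, 0, :].numpy()
    probe = LogisticRegression(max_iter=1000).fit(features, labels)
    print(f"layer {layer:2d}  accuracy = {probe.score(features, labels):.2f}")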

AdapLeR: Speeding up Inference by Adaptive Length Reduction

1 code implementation • ACL 2022 • Ali Modarressi, Hosein Mohebbi, Mohammad Taher Pilehvar

To determine the importance of each token representation, we train a Contribution Predictor for each layer using a gradient-based saliency method.
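The snippet mentions a gradient-based saliency method for scoring token importance. The following is a minimal sketch of one common variant (gradient × input saliency), not the paper's Contribution Predictor training procedure; the model checkpoint and example sentence are assumptions, and the classification head here is untrained.

import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")
model.eval()  # note: the classification head is randomly initialized in this sketch

inputs = tokenizer("an example sentence to score", return_tensors="pt")

# Feed the word embeddings directly so we can take gradients with respect to them.
embeddings = model.bert.embeddings.word_embeddings(inputs["input_ids"])
embeddings.retain_grad()
logits = model(inputs_embeds=embeddings,
               attention_mask=inputs["attention_mask"]).logits

# Backpropagate the top class score and use ||grad * embedding|| as token saliency.
logits.max(dim=-1).values.sum().backward()
saliency = (embeddings.grad * embeddings).norm(dim=-1).squeeze(0)

for token, score in zip(tokenizer.convert_ids_to_tokens(inputs["input_ids"][0]),
                        saliency.tolist()):
    print(f"{token:>12}  {score:.4f}")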

Quantifying Context Mixing in Transformers

1 code implementation • 30 Jan 2023 • Hosein Mohebbi, Willem Zuidema, Grzegorz Chrupała, Afra Alishahi

Self-attention weights and their transformed variants have been the main source of information for analyzing token-to-token interactions in Transformer-based models.
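Since raw self-attention weights are the quantity the snippet refers to, here is a minimal sketch of how they are typically extracted with Hugging Face Transformers; the model name and example sentence are assumptions, and this does not implement the context-mixing measure proposed in the paper.

import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_attentions=True)
model.eval()

inputs = tokenizer("The keys to the cabinet are on the table", return_tensors="pt")
with torch.no_grad():
    attentions = model(**inputs).attentions  # one (batch, heads, seq, seq) tensor per layer

tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
last_layer = attentions[-1][0].mean(dim=0)   # average over heads in the final layer

# For each token, show the token it attends to most strongly.
for i, token in enumerate(tokens):
    j = int(last_layer[i].argmax())
    print(f"{token:>10} -> {tokens[j]:>10}  ({last_layer[i, j].item():.3f})")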

DecoderLens: Layerwise Interpretation of Encoder-Decoder Transformers

no code implementations • 5 Oct 2023 • Anna Langedijk, Hosein Mohebbi, Gabriele Sarti, Willem Zuidema, Jaap Jumelet

In recent years, many interpretability methods have been proposed to help interpret the internal states of Transformer models, at different levels of precision and complexity.

Logical Reasoning • Machine Translation • +3

Homophone Disambiguation Reveals Patterns of Context Mixing in Speech Transformers

1 code implementation • 15 Oct 2023 • Hosein Mohebbi, Grzegorz Chrupała, Willem Zuidema, Afra Alishahi

Transformers have become a key architecture in speech processing, but our understanding of how they build up representations of acoustic and linguistic structure is limited.

Speech Recognition
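Analyses of how speech Transformers build up representations start from per-layer hidden states; the sketch below extracts them with Hugging Face Transformers. The wav2vec 2.0 checkpoint and the silent dummy waveform are assumptions for illustration, not the paper's setup or its context-mixing analysis.

import torch
from transformers import AutoModel, AutoProcessor

processor = AutoProcessor.from_pretrained("facebook/wav2vec2-base-960h")
model = AutoModel.from_pretrained("facebook/wav2vec2-base-960h", output_hidden_states=True)
model.eval()

# One second of silence at 16 kHz as a stand-in for a real utterance.
waveform = torch.zeros(16000).numpy()
inputs = processor(waveform, sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    hidden_states = model(**inputs).hidden_states  # one (batch, frames, dim) tensor per layer

for layer, states in enumerate(hidden_states):
    print(f"layer {layer:2d}: shape {tuple(states.shape)}")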
