Search Results for author: Hosein Mohebbi

Found 4 papers, 2 papers with code

AdapLeR: Speeding up Inference by Adaptive Length Reduction

1 code implementation • ACL 2022 • Ali Modarressi, Hosein Mohebbi, Mohammad Taher Pilehvar

To determine the importance of each token representation, we train a Contribution Predictor for each layer using a gradient-based saliency method.
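The gradient-based saliency idea behind the Contribution Predictor can be sketched on a toy model. The snippet below is a minimal illustration of gradient-times-input token saliency, not the paper's implementation: it assumes a hypothetical linear scoring function (names `token_embeddings` and `w` are illustrative), for which the gradient is known analytically, and ranks tokens by the magnitude of gradient * input.

```python
import numpy as np

# Minimal sketch of gradient-times-input token saliency (an assumption,
# not the paper's exact method). Toy linear "model":
#     score = sum over tokens i of (x_i . w)
rng = np.random.default_rng(0)
num_tokens, dim = 5, 8
token_embeddings = rng.normal(size=(num_tokens, dim))  # x_i, one row per token
w = rng.normal(size=dim)                               # toy model weights

# For this linear score, d(score)/d(x_i) = w for every token i,
# so no autograd framework is needed here.
grad = np.tile(w, (num_tokens, 1))                     # shape (num_tokens, dim)

# Gradient-times-input saliency: aggregate |grad * x_i| over the hidden dim.
saliency = np.abs(grad * token_embeddings).sum(axis=1)

# Normalize to a distribution over tokens; a Contribution Predictor would be
# trained to approximate scores like these for each layer.
importance = saliency / saliency.sum()
print(importance.round(3))
```

In the paper this signal is produced per layer of the full transformer (with gradients from backpropagation rather than a closed form) and used as the training target for each layer's Contribution Predictor.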

Not All Models Localize Linguistic Knowledge in the Same Place: A Layer-wise Probing on BERToids' Representations

no code implementations • 13 Sep 2021 • Mohsen Fayyaz, Ehsan Aghazadeh, Ali Modarressi, Hosein Mohebbi, Mohammad Taher Pilehvar

Most recent work on probing representations has focused on BERT, under the presumption that the findings would carry over to other models.
