Search Results for author: Rabeeh Karimi Mahabadi

Found 10 papers, 7 papers with code

Prompt-free and Efficient Few-shot Learning with Language Models

1 code implementation · ACL 2022 · Rabeeh Karimi Mahabadi, Luke Zettlemoyer, James Henderson, Lambert Mathias, Marzieh Saeidi, Veselin Stoyanov, Majid Yazdani

Current methods for few-shot fine-tuning of pretrained masked language models (PLMs) require carefully engineered prompts and verbalizers for each new task to convert examples into a cloze format that the PLM can score.

Few-Shot Learning

PERFECT: Prompt-free and Efficient Few-shot Learning with Language Models

1 code implementation · 3 Apr 2022 · Rabeeh Karimi Mahabadi, Luke Zettlemoyer, James Henderson, Marzieh Saeidi, Lambert Mathias, Veselin Stoyanov, Majid Yazdani

Current methods for few-shot fine-tuning of pretrained masked language models (PLMs) require carefully engineered prompts and verbalizers for each new task to convert examples into a cloze format that the PLM can score.

Few-Shot Learning
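The cloze-format setup this abstract refers to can be sketched in a few lines. The template and the label-to-token verbalizer below are illustrative stand-ins, not the paper's own (the paper's contribution is precisely to remove this hand-engineering step):

```python
# Sketch of prompt-based few-shot classification with a masked LM:
# an input is rendered into a cloze sentence, and each label's
# verbalizer token is scored in the [MASK] slot by the PLM.
# TEMPLATE and VERBALIZER here are hypothetical examples.

TEMPLATE = "{text} It was [MASK]."                          # hand-engineered prompt
VERBALIZER = {"positive": "great", "negative": "terrible"}  # label -> token

def to_cloze(text: str) -> str:
    """Render an input as a cloze sentence for a masked LM to fill."""
    return TEMPLATE.format(text=text)

def fill_label(cloze: str, label: str) -> str:
    """Substitute the verbalizer token; a real PLM would score this completion."""
    return cloze.replace("[MASK]", VERBALIZER[label])

example = "The movie was a delight."
candidates = {label: fill_label(to_cloze(example), label) for label in VERBALIZER}
```

The predicted label would be the one whose filled-in sentence the PLM assigns the highest probability.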

Variational Information Bottleneck for Effective Low-Resource Fine-Tuning

1 code implementation · ICLR 2021 · Rabeeh Karimi Mahabadi, Yonatan Belinkov, James Henderson

Moreover, we show that our VIB model finds sentence representations that are more robust to biases in natural language inference datasets, and thereby obtains better generalization to out-of-domain datasets.

Natural Language Inference · Pretrained Language Models · +1
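A minimal sketch of the variational information bottleneck objective this entry builds on, assuming the standard diagonal-Gaussian formulation: a task loss plus a KL term that compresses the sentence representation. The encoder outputs, β value, and task loss below are placeholders:

```python
import numpy as np

def kl_to_standard_normal(mu, log_var):
    # KL(q(z|x) || N(0, I)) for a diagonal Gaussian, summed over dimensions
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

def reparameterize(mu, log_var, rng):
    # z = mu + sigma * eps keeps sampling differentiable in a real model
    return mu + np.exp(0.5 * log_var) * rng.standard_normal(mu.shape)

rng = np.random.default_rng(0)
mu = np.array([0.5, -0.3])      # stand-in encoder mean
log_var = np.array([-1.0, -2.0])  # stand-in encoder log-variance
z = reparameterize(mu, log_var, rng)

task_loss = 0.7                 # placeholder cross-entropy on the task
beta = 1e-3                     # bottleneck weight
loss = task_loss + beta * kl_to_standard_normal(mu, log_var)
```

Increasing β forces the representation toward the prior, discarding input details that are not needed for the task, which is the mechanism behind the robustness-to-bias claim.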

Compacter: Efficient Low-Rank Hypercomplex Adapter Layers

1 code implementation · NeurIPS 2021 · Rabeeh Karimi Mahabadi, James Henderson, Sebastian Ruder

In this work, we propose Compacter, a method for fine-tuning large-scale language models with a better trade-off between task performance and the number of trainable parameters than prior work.

Pretrained Language Models
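The parameter-saving idea behind adapter layers of this kind can be sketched with NumPy: a large weight matrix is parameterized as a sum of Kronecker products whose second factors are rank-1, so the number of trainable values is far below that of the dense matrix. The shapes and factor names below are illustrative, not the paper's exact design:

```python
import numpy as np

n, d_in, d_out = 4, 64, 64      # n Kronecker summands (hypothetical sizes)
rng = np.random.default_rng(0)

A = rng.standard_normal((n, n, n))           # n small (n x n) factor matrices
s = rng.standard_normal((n, d_in // n, 1))   # rank-1 left factors
t = rng.standard_normal((n, 1, d_out // n))  # rank-1 right factors

# W has shape (d_in, d_out) but is built from far fewer parameters:
# W = sum_i kron(A_i, s_i @ t_i^T), with each s_i @ t_i^T rank-1
W = sum(np.kron(A[i], s[i] @ t[i]) for i in range(n))

dense_params = d_in * d_out                  # 4096 for a dense layer
compact_params = A.size + s.size + t.size    # 192 in this sketch
```

The trade-off the abstract mentions comes from choosing n and the factor ranks: larger factors recover more expressive weights at the cost of more trainable parameters.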

End-to-End Bias Mitigation by Modelling Biases in Corpora

2 code implementations · ACL 2020 · Rabeeh Karimi Mahabadi, Yonatan Belinkov, James Henderson

We experiment on large-scale natural language inference and fact verification benchmarks, evaluating on out-of-domain datasets that are specifically designed to assess the robustness of models against known biases in the training data.

Fact Verification · Natural Language Inference · +1
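One common scheme in this line of debiasing work is a product-of-experts ensemble of a bias-only model with the main model, so the main model only has to explain what the biased shortcut cannot. The snippet above does not state the paper's exact formulation, so treat this as an illustrative sketch with made-up logits:

```python
import numpy as np

def log_softmax(x):
    # numerically stable log-softmax over the last axis
    x = x - x.max(axis=-1, keepdims=True)
    return x - np.log(np.exp(x).sum(axis=-1, keepdims=True))

# hypothetical logits for a 3-class NLI example
main_logits = np.array([2.0, 0.5, -1.0])
bias_logits = np.array([3.0, -1.0, -1.0])   # bias-only model is overconfident

# product of experts: add log-probabilities, then renormalize;
# training against this ensemble down-weights examples the bias
# model already gets right, reducing reliance on dataset biases
poe_log_probs = log_softmax(log_softmax(main_logits) + log_softmax(bias_logits))
```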

Learning-Based Compressive MRI

no code implementations · 3 May 2018 · Baran Gözcü, Rabeeh Karimi Mahabadi, Yen-Huan Li, Efe Ilıcak, Tolga Çukur, Jonathan Scarlett, Volkan Cevher

In the area of magnetic resonance imaging (MRI), an extensive range of non-linear reconstruction algorithms has been proposed that can be used with general Fourier subsampling patterns.

Learning Theory
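The Fourier subsampling the abstract mentions can be sketched with a zero-filled reconstruction: measure only a subset of k-space and invert the masked spectrum. The mask below is a hand-picked low-frequency band, whereas the learning-based approach of the paper chooses which frequencies to keep:

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((32, 32))               # stand-in for an MR image

# full k-space, with low frequencies shifted to the center
k = np.fft.fftshift(np.fft.fft2(img))

# keep only 8 central rows of 32 => 25% sampling rate
mask = np.zeros((32, 32), dtype=bool)
mask[12:20, :] = True

# zero-filled reconstruction: inverse FFT of the masked spectrum
zero_filled = np.fft.ifft2(np.fft.ifftshift(k * mask)).real
sampling_rate = mask.mean()              # fraction of k-space measured
```

In practice a non-linear reconstruction algorithm replaces the zero-filled inverse, and the quality depends heavily on which mask was used, which is what motivates learning the pattern.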

Scalable sparse covariance estimation via self-concordance

no code implementations · 13 May 2014 · Anastasios Kyrillidis, Rabeeh Karimi Mahabadi, Quoc Tran-Dinh, Volkan Cevher

We consider the class of convex minimization problems composed of a self-concordant function, such as the $\log\det$ metric, a convex data-fidelity term $h(\cdot)$, and a regularizing (possibly non-smooth) function $g(\cdot)$.
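For the sparse-covariance instance of this composite objective, with smooth self-concordant part $f(X) = -\log\det X + \langle S, X\rangle$ and non-smooth part $g(X) = \lambda\|X\|_1$, a plain proximal-gradient iteration illustrates the structure. The paper's self-concordant framework is more sophisticated than this; the step size and data below are only a sketch:

```python
import numpy as np

def soft_threshold(M, tau):
    # proximal operator of tau * ||.||_1, applied entrywise
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def prox_grad_step(X, S, lam, t):
    # grad f(X) = S - inv(X);  X+ = prox_{t*g}(X - t * grad f(X))
    grad = S - np.linalg.inv(X)
    return soft_threshold(X - t * grad, t * lam)

S = np.array([[1.0, 0.3],       # toy sample covariance
              [0.3, 1.0]])
X = np.eye(2)                   # start from the identity (positive definite)
for _ in range(50):
    X = prox_grad_step(X, S, lam=0.1, t=0.1)
```

At convergence X approximates a sparse inverse-covariance estimate; a fixed step size works here only because the toy iterates stay well-conditioned, which is exactly the issue self-concordant analysis addresses in general.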
