Search Results for author: Mohammadmahdi Nouriborji

Found 7 papers, 5 papers with code

Nowruz at SemEval-2022 Task 7: Tackling Cloze Tests with Transformers and Ordinal Regression

1 code implementation • SemEval (NAACL) 2022 • Mohammadmahdi Nouriborji, Omid Rohanian, David Clifton

This paper outlines the system with which team Nowruz participated in SemEval-2022 Task 7, "Identifying Plausible Clarifications of Implicit and Underspecified Phrases", covering both subtasks A and B (see the ordinal-regression sketch below).

Multi-Task Learning • regression
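
The title pairs a pretrained transformer with an ordinal-regression objective for the plausibility-rating subtask. Below is a minimal sketch of one standard formulation (a cumulative-link model with learned cut-points) on top of a HuggingFace-style encoder; the encoder name, head design, and rating count are illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn as nn
from transformers import AutoModel

class OrdinalClozeScorer(nn.Module):
    """Cumulative-link ordinal regression over K ordered plausibility ratings.

    Illustrative sketch only; not the authors' exact architecture.
    """

    def __init__(self, encoder_name: str = "roberta-base", num_ranks: int = 5):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        hidden = self.encoder.config.hidden_size
        self.score = nn.Linear(hidden, 1)  # single latent plausibility score
        # K-1 learnable cut-points separating adjacent ratings
        self.cutpoints = nn.Parameter(torch.arange(num_ranks - 1).float())

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]           # [CLS] representation
        s = self.score(cls)                         # (batch, 1)
        # P(rating > k) for each cut-point; train with binary cross-entropy
        # against thresholded rank targets.
        return torch.sigmoid(s - self.cutpoints)   # (batch, K-1)
```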

On the Effectiveness of Compact Biomedical Transformers

1 code implementation • 7 Sep 2022 • Omid Rohanian, Mohammadmahdi Nouriborji, Samaneh Kouchaki, David A. Clifton

Language models pre-trained on biomedical corpora, such as BioBERT, have recently shown promising results on downstream biomedical tasks (see the distillation sketch below).

Continual Learning • Knowledge Distillation • +1
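
Knowledge distillation, listed in the tags above, is the compression route this paper studies for compact biomedical transformers. The sketch below shows the generic soft-label distillation loss (temperature-scaled KL between teacher and student logits blended with the hard-label cross-entropy); the temperature and mixing weight are illustrative defaults, not values from the paper.

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature: float = 2.0, alpha: float = 0.5):
    """Soft-label KD: blend KL(teacher || student) with hard-label CE.

    Generic recipe (Hinton et al., 2015); hyperparameters are illustrative.
    """
    # Temperature-softened distributions; T^2 restores gradient magnitude.
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    kd = F.kl_div(log_soft_student, soft_teacher,
                  reduction="batchmean") * temperature ** 2
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1 - alpha) * ce
```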

MiniALBERT: Model Distillation via Parameter-Efficient Recursive Transformers

1 code implementation • 12 Oct 2022 • Mohammadmahdi Nouriborji, Omid Rohanian, Samaneh Kouchaki, David A. Clifton

Different strategies have been proposed in the literature to alleviate these problems, with the aim of creating effective compact models that match the performance of their larger counterparts with negligible performance losses.
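
The title's "parameter-efficient recursive transformers" refers to reusing one set of layer weights across depth, in the spirit of ALBERT's cross-layer parameter sharing. Below is a minimal sketch of that sharing idea; the class name and dimensions are illustrative, and the released MiniALBERT model adds further components (such as bottleneck adapters) not reproduced here.

```python
import torch.nn as nn

class RecursiveTransformerEncoder(nn.Module):
    """Applies a single shared transformer layer `depth` times.

    Sketch of ALBERT-style cross-layer parameter sharing; illustrative only.
    """

    def __init__(self, d_model: int = 768, n_heads: int = 12, depth: int = 12):
        super().__init__()
        # One parameter set, reused recursively -> roughly depth-fold
        # fewer encoder weights than an unshared stack.
        self.shared_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True)
        self.depth = depth

    def forward(self, x, src_key_padding_mask=None):
        for _ in range(self.depth):
            x = self.shared_layer(x, src_key_padding_mask=src_key_padding_mask)
        return x
```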

Exploring the Effectiveness of Instruction Tuning in Biomedical Language Processing

no code implementations • 31 Dec 2023 • Omid Rohanian, Mohammadmahdi Nouriborji, David A. Clifton

In this context, our study investigates the potential of instruction tuning for biomedical language processing, applying this technique to two general LLMs of substantial scale (see the instruction-tuning sketch below).

Named Entity Recognition • +3
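
Instruction tuning, as investigated here, means fine-tuning a causal LM on examples rendered as instruction/response pairs. The sketch below shows the usual recipe with a hypothetical prompt template and a small stand-in base model ("gpt2"); neither the template nor the model is taken from the paper.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical template; the paper's actual prompt format may differ.
TEMPLATE = "### Instruction:\n{instruction}\n\n### Response:\n{response}"

def build_example(tokenizer, instruction, response, max_len=512):
    """Tokenise one instruction/response pair for causal-LM fine-tuning."""
    text = TEMPLATE.format(instruction=instruction, response=response)
    enc = tokenizer(text + tokenizer.eos_token, truncation=True,
                    max_length=max_len, return_tensors="pt")
    enc["labels"] = enc["input_ids"].clone()  # standard next-token loss
    return enc

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # small stand-in model
model = AutoModelForCausalLM.from_pretrained("gpt2")
batch = build_example(tokenizer,
                      "List the disease mentions in: 'Patient has asthma.'",
                      "asthma")
loss = model(**batch).loss  # backpropagate this inside a training loop
```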

Efficiency at Scale: Investigating the Performance of Diminutive Language Models in Clinical Tasks

no code implementations • 16 Feb 2024 • Niall Taylor, Upamanyu Ghose, Omid Rohanian, Mohammadmahdi Nouriborji, Andrey Kormilitzin, David Clifton, Alejo Nevado-Holgado

The entry of large language models (LLMs) into research and commercial spaces has driven a trend toward ever-larger models, with initial promises of generalisability followed by a widespread desire to downsize and create specialised models without complete fine-tuning, using Parameter-Efficient Fine-Tuning (PEFT) methods (see the LoRA sketch below).

Decision Making
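
PEFT methods train a small number of added parameters while keeping the base model frozen; LoRA is a common instance. Below is a minimal sketch using the Hugging Face peft library with an illustrative base model and hyperparameters, not the configuration evaluated in the paper.

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForSequenceClassification

# Illustrative base model; the paper compares several small LMs and methods.
base = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2)

# LoRA: freeze the backbone and train low-rank adapters on the attention
# projections; the hyperparameters are common defaults, not the paper's.
config = LoraConfig(task_type="SEQ_CLS", r=8, lora_alpha=16,
                    lora_dropout=0.1,
                    target_modules=["q_lin", "v_lin"])  # DistilBERT names
model = get_peft_model(base, config)
model.print_trainable_parameters()  # typically <1% of weights are trainable
```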
