Search Results for author: Omid Rohanian

Found 18 papers, 7 papers with code

Bridging the Gap: Attending to Discontinuity in Identification of Multiword Expressions

2 code implementations NAACL 2019 Omid Rohanian, Shiva Taslimipoor, Samaneh Kouchaki, Le An Ha, Ruslan Mitkov

We introduce a new method to tag Multiword Expressions (MWEs) using a linguistically interpretable language-independent deep learning architecture.

TAG

On the Effectiveness of Compact Biomedical Transformers

1 code implementation 7 Sep 2022 Omid Rohanian, Mohammadmahdi Nouriborji, Samaneh Kouchaki, David A. Clifton

Language models pre-trained on biomedical corpora, such as BioBERT, have recently shown promising results on downstream biomedical tasks.

Continual Learning, Knowledge Distillation +1

SHOMA at Parseme Shared Task on Automatic Identification of VMWEs: Neural Multiword Expression Tagging with High Generalisation

1 code implementation 9 Sep 2018 Shiva Taslimipoor, Omid Rohanian

This paper presents a language-independent deep learning architecture adapted to the task of multiword expression (MWE) identification.

Word Embeddings

MiniALBERT: Model Distillation via Parameter-Efficient Recursive Transformers

1 code implementation 12 Oct 2022 Mohammadmahdi Nouriborji, Omid Rohanian, Samaneh Kouchaki, David A. Clifton

Different strategies have been proposed in the literature to alleviate these problems, with the aim of creating effective compact models that match the performance of their bloated counterparts with negligible performance loss.

Nowruz at SemEval-2022 Task 7: Tackling Cloze Tests with Transformers and Ordinal Regression

1 code implementation SemEval (NAACL) 2022 Mohammadmahdi Nouriborji, Omid Rohanian, David Clifton

This paper describes the system with which team Nowruz participated in SemEval 2022 Task 7, Identifying Plausible Clarifications of Implicit and Underspecified Phrases, for both subtasks A and B.

Multi-Task Learning, regression

WLV at SemEval-2018 Task 3: Dissecting Tweets in Search of Irony

no code implementations SEMEVAL 2018 Omid Rohanian, Shiva Taslimipoor, Richard Evans, Ruslan Mitkov

This paper describes the systems submitted to SemEval 2018 Task 3, "Irony detection in English tweets", for both subtasks A and B.

Sentiment Analysis

Combining Multiple Corpora for Readability Assessment for People with Cognitive Disabilities

no code implementations WS 2017 Victoria Yaneva, Constantin Orăsan, Richard Evans, Omid Rohanian

Given the lack of large user-evaluated corpora in disability-related NLP research (e.g. text simplification or readability assessment for people with cognitive disabilities), the question of choosing suitable training data for NLP models is not straightforward.

Text Simplification

Using Gaze Data to Predict Multiword Expressions

no code implementations RANLP 2017 Omid Rohanian, Shiva Taslimipoor, Victoria Yaneva, Le An Ha

In recent years, gaze data has been increasingly used to improve and evaluate NLP models because it carries information about the cognitive processing of linguistic phenomena.

Part-Of-Speech Tagging, POS +1

Cross-lingual Transfer Learning and Multitask Learning for Capturing Multiword Expressions

no code implementations WS 2019 Shiva Taslimipoor, Omid Rohanian, Le An Ha

Recent developments in deep learning have prompted a surge of interest in the application of multitask and transfer learning to NLP problems.

Cross-Lingual Transfer, Dependency Parsing +1

Privacy-aware Early Detection of COVID-19 through Adversarial Training

no code implementations9 Jan 2022 Omid Rohanian, Samaneh Kouchaki, Andrew Soltan, Jenny Yang, Morteza Rohanian, Yang Yang, David Clifton

One of our main contributions is that we specifically target the development of effective COVID-19 detection models with built-in mechanisms to selectively protect sensitive attributes against adversarial attacks.

Exploring the Effectiveness of Instruction Tuning in Biomedical Language Processing

no code implementations31 Dec 2023 Omid Rohanian, Mohammadmahdi Nouriborji, David A. Clifton

In this context, our study investigates the potential of instruction tuning for biomedical language processing, applying this technique to two general LLMs of substantial scale.

Named Entity Recognition +3

Efficiency at Scale: Investigating the Performance of Diminutive Language Models in Clinical Tasks

no code implementations16 Feb 2024 Niall Taylor, Upamanyu Ghose, Omid Rohanian, Mohammadmahdi Nouriborji, Andrey Kormilitzin, David Clifton, Alejo Nevado-Holgado

The entry of large language models (LLMs) into research and commercial spaces has led to a trend of ever-larger models, with initial promises of generalisability. This has been followed by a widespread desire to downsize and create specialised models without complete fine-tuning, using Parameter-Efficient Fine-tuning (PEFT) methods.

Decision Making
