22 Oct 2022 • Bashar Alhafni, Nizar Habash, Houda Bouamor, Ossama Obeid, Sultan Alrowili, Daliyah AlZeer, Khawlah M. Alshanqiti, Ahmed ElBakry, Muhammad ElNokrashy, Mohamed Gabr, Abderrahmane Issam, Abdelrahim Qaddoumi, K. Vijay-Shanker, Mahmoud Zyate
In this paper, we present the results and findings of the Shared Task on Gender Rewriting, which was organized as part of the Seventh Arabic Natural Language Processing Workshop.
In this project, we study a state-of-the-art deep learning model, which we refer to as the SSN-4 model.
Our system obtained a precision of 0.7708 and a recall of 0.7770, for an F1 score of 0.7739, demonstrating the effectiveness of using ensembles of BERT-based language models for automatically detecting relations between chemicals and proteins.
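The reported F1 score follows directly from the precision and recall, since F1 is their harmonic mean. A quick check:

```python
# F1 is the harmonic mean of precision (P) and recall (R): F1 = 2PR / (P + R).
precision = 0.7708
recall = 0.7770

f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 4))  # 0.7739, matching the reported score
```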
In this work, we explore employing contrastive learning to improve the text representations produced by the BERT model for relation extraction.
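A common contrastive objective for improving representations is an InfoNCE-style loss, which pulls each anchor embedding toward its positive pair and pushes it away from in-batch negatives. The sketch below is a generic NumPy illustration under that assumption, not the paper's exact loss; the function name and shapes are hypothetical.

```python
import numpy as np

def info_nce(anchors, positives, temperature=0.1):
    # Cosine similarities between every anchor and every positive.
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    sims = (a @ p.T) / temperature            # (batch, batch)
    # Softmax cross-entropy with the matching pair on the diagonal.
    sims -= sims.max(axis=1, keepdims=True)   # numerical stability
    log_probs = sims - np.log(np.exp(sims).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

rng = np.random.default_rng(0)
base = rng.normal(size=(8, 16))
# Positives are slightly perturbed copies of the anchors, so the loss is small.
loss = info_nce(base, base + 0.01 * rng.normal(size=base.shape))
print(f"InfoNCE loss: {loss:.3f}")
```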
In this paper, we investigate utilizing all layers of the BERT model in the fine-tuning process.
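One standard way to use every encoder layer rather than only the top one is a learned softmax-weighted sum of per-layer hidden states; the paper's exact mechanism may differ. A minimal NumPy sketch under that assumption, with shapes matching BERT-base (12 layers, hidden size 768):

```python
import numpy as np

rng = np.random.default_rng(0)
num_layers, seq_len, hidden = 12, 8, 768
# Stand-in for the per-layer hidden states a BERT encoder would produce.
layer_states = rng.normal(size=(num_layers, seq_len, hidden))

# Trainable scalar mixing weights, one per layer (initialized uniform).
logits = np.zeros(num_layers)
weights = np.exp(logits) / np.exp(logits).sum()   # softmax over layers

# Combined representation fed to the task head instead of just the top layer.
mixed = np.tensordot(weights, layer_states, axes=1)  # (seq_len, hidden)
print(mixed.shape)  # (8, 768)
```

With uniform initialization this reduces to a plain average over layers; fine-tuning would adjust the logits jointly with the task head.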
Adversarial training is a technique of improving model performance by involving adversarial examples in the training process.
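The idea can be sketched with an FGSM-style perturbation (x + eps * sign of the input gradient) mixed into each training step. The toy logistic-regression setup below is illustrative only, not the paper's model or attack:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy linearly separable binary classification data.
X = rng.normal(size=(200, 5))
true_w = rng.normal(size=5)
y = (X @ true_w > 0).astype(float)

w = np.zeros(5)
lr, eps = 0.1, 0.05

for _ in range(300):
    # Gradient of the logistic loss w.r.t. the inputs (used for the attack)...
    p = sigmoid(X @ w)
    grad_x = np.outer(p - y, w)           # dL/dx for each example
    X_adv = X + eps * np.sign(grad_x)     # FGSM perturbation
    # ...then a weight update on clean + adversarial examples together.
    X_mix = np.vstack([X, X_adv])
    y_mix = np.concatenate([y, y])
    p_mix = sigmoid(X_mix @ w)
    grad_w = X_mix.T @ (p_mix - y_mix) / len(y_mix)
    w -= lr * grad_w

acc = ((sigmoid(X @ w) > 0.5) == y.astype(bool)).mean()
print(f"clean accuracy after adversarial training: {acc:.2f}")
```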
Part-of-speech (POS) tagging is a fundamental component for performing natural language tasks such as parsing, information extraction, and question answering.
Actively sampled data can have very different characteristics than passively sampled data.
A survey of existing methods for stopping active learning (AL) reveals the need for methods that are: more widely applicable; more aggressive in saving annotations; and more stable across changing datasets.
There is a broad range of BioNLP tasks for which active learning (AL) can significantly reduce annotation costs, and a specific AL algorithm we have developed is particularly effective for these tasks.