no code implementations • EMNLP (ArgMining) 2021 • Mohamed Elaraby, Diane Litman
We provide a thorough investigation of how to utilize pseudo labels effectively in a self-training scheme.
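The general loop studied here can be illustrated with a minimal sketch (assuming a generic scikit-learn-style classifier and a simple confidence threshold for selecting pseudo labels; the paper's specific models and selection strategies are not reproduced):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def self_train(X_lab, y_lab, X_unlab, threshold=0.9, rounds=3):
    """Generic self-training loop: train on labeled data, pseudo-label
    confident unlabeled examples, add them to the training set, retrain."""
    X_train, y_train = X_lab.copy(), y_lab.copy()
    pool = X_unlab.copy()
    for _ in range(rounds):
        clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
        if len(pool) == 0:
            break
        probs = clf.predict_proba(pool)
        conf = probs.max(axis=1)
        keep = conf >= threshold  # keep only high-confidence pseudo labels
        if not keep.any():
            break
        X_train = np.vstack([X_train, pool[keep]])
        y_train = np.concatenate([y_train, probs[keep].argmax(axis=1)])
        pool = pool[~keep]
    return clf
```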
1 code implementation • Findings (EMNLP) 2021 • Ahmed Magooda, Diane Litman, Mohamed Elaraby
In particular, we incorporate four different tasks (extractive summarization, language modeling, concept detection, and paraphrase detection) both individually and in combination, with the goal of enhancing the target task of abstractive summarization via multitask learning.
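As a rough illustration of this kind of hard-parameter-sharing multitask setup (the encoder, task heads, and loss weights below are hypothetical stand-ins, not the paper's architecture):

```python
import torch
import torch.nn as nn

class MultitaskModel(nn.Module):
    """Hard parameter sharing: one shared encoder, one linear head per task."""
    def __init__(self, hidden=256, vocab=30000):
        super().__init__()
        self.encoder = nn.GRU(128, hidden, batch_first=True)  # stand-in encoder
        self.heads = nn.ModuleDict({
            "extractive": nn.Linear(hidden, 2),  # keep/drop a sentence
            "lm": nn.Linear(hidden, vocab),      # next-token prediction
            "concept": nn.Linear(hidden, 2),     # concept present/absent
            "paraphrase": nn.Linear(hidden, 2),  # paraphrase pair yes/no
        })

    def forward(self, x, task):
        out, _ = self.encoder(x)  # x: (batch, seq_len, 128)
        return self.heads[task](out[:, -1])

model = MultitaskModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()
weights = {"extractive": 1.0, "lm": 0.5, "concept": 0.5, "paraphrase": 0.5}

def train_step(batches):
    """batches maps each task name to an (inputs, labels) pair;
    per-task losses are combined as a weighted sum."""
    opt.zero_grad()
    loss = sum(weights[t] * loss_fn(model(x, t), y) for t, (x, y) in batches.items())
    loss.backward()
    opt.step()
```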
1 code implementation • 27 Mar 2024 • Yang Zhong, Mohamed Elaraby, Diane Litman, Ahmed Ashraf Butt, Muhsin Menekse
This paper introduces ReflectSumm, a novel summarization dataset specifically designed for summarizing students' reflective writing.
1 code implementation • 15 Oct 2023 • Zhexiong Liu, Mohamed Elaraby, Yang Zhong, Diane Litman
This paper presents an overview of the ImageArg shared task, the first multimodal Argument Mining shared task co-located with the 10th Workshop on Argument Mining at EMNLP 2023.
1 code implementation • 22 Aug 2023 • Mohamed Elaraby, Mengyin Lu, Jacob Dunn, Xueying Zhang, Yu Wang, Shizhu Liu, Pingchuan Tian, Yuping Wang, Yuxuan Wang
Large Language Models (LLMs) have revolutionized Natural Language Processing (NLP).
1 code implementation • 1 Jun 2023 • Mohamed Elaraby, Yang Zhong, Diane Litman
We propose a simple approach for the abstractive summarization of long legal opinions that considers the argument structure of the document.
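One common way to expose argument structure to a summarizer is to mark sentences with their argument roles before encoding; the sketch below illustrates that idea with a generic BART model (the marker names and model choice are assumptions, not necessarily the paper's exact method):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tok = AutoTokenizer.from_pretrained("facebook/bart-base")
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-base")

# Hypothetical argument-role markers; register them as single tokens.
ROLES = ["<issue>", "<reason>", "<conclusion>", "<non-arg>"]
tok.add_tokens(ROLES)
model.resize_token_embeddings(len(tok))

def summarize(sentences, roles):
    """Prefix each sentence with its predicted argument role so the
    summarizer can condition on the document's argumentative structure."""
    marked = " ".join(f"{role} {sent}" for role, sent in zip(roles, sentences))
    ids = tok(marked, return_tensors="pt", truncation=True, max_length=1024).input_ids
    out = model.generate(ids, num_beams=4, max_length=256)
    return tok.decode(out[0], skip_special_tokens=True)
```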
1 code implementation • COLING 2022 • Mohamed Elaraby, Diane Litman
A challenging task when generating summaries of legal documents is the ability to address their argumentative nature.
no code implementations • WS 2019 • Mohamed Elaraby, Ahmed Zahran
In this paper, we describe the CU-RAISA team contribution to the 2019 MADAR shared task 2, which focused on Twitter user fine-grained dialect identification. Among participating teams, our system ranked 4th (with a 61.54% F1-Macro measure). Our system is trained using a character-level convolutional bidirectional long short-term memory network trained on 2k users' data.
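A minimal PyTorch sketch of a character-level CNN-BiLSTM classifier of this kind (hyperparameters and the number of dialect classes are hypothetical placeholders):

```python
import torch
import torch.nn as nn

class CharCnnBiLstm(nn.Module):
    """Character embeddings -> 1D convolution -> BiLSTM -> dialect logits."""
    def __init__(self, n_chars=256, n_classes=21, emb=64, channels=128, hidden=128):
        super().__init__()
        self.emb = nn.Embedding(n_chars, emb, padding_idx=0)
        self.conv = nn.Conv1d(emb, channels, kernel_size=5, padding=2)
        self.lstm = nn.LSTM(channels, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_classes)

    def forward(self, char_ids):                      # (batch, seq_len)
        x = self.emb(char_ids).transpose(1, 2)        # (batch, emb, seq_len)
        x = torch.relu(self.conv(x)).transpose(1, 2)  # (batch, seq_len, channels)
        _, (h, _) = self.lstm(x)                      # h: (2, batch, hidden)
        feat = torch.cat([h[0], h[1]], dim=-1)        # concat fwd/bwd final states
        return self.out(feat)                         # dialect logits
```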
no code implementations • WS 2018 • Hassan Alhuzali, Mohamed Elaraby, Muhammad Abdul-Mageed
We also offer an analysis of system performance and the impact of training data size on the task.
no code implementations • COLING 2018 • Mohamed Elaraby, Muhammad Abdul-Mageed
We address these two limitations: we (1) benchmark the data, and (2) empirically test six different deep learning methods on the task, comparing performance to several classical machine learning models under different conditions (i.e., both binary and multi-way classification).
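A compact version of such a benchmark for the classical baselines might look like this (the feature choices and model list are illustrative assumptions, not the paper's exact setup):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC

def benchmark(texts, labels):
    """Score several classical baselines with 5-fold cross-validation;
    the same loop handles binary and multi-way label sets."""
    X = TfidfVectorizer(ngram_range=(1, 2)).fit_transform(texts)
    models = {
        "logreg": LogisticRegression(max_iter=1000),
        "linear_svm": LinearSVC(),
        "naive_bayes": MultinomialNB(),
    }
    return {name: cross_val_score(m, X, labels, cv=5, scoring="f1_macro").mean()
            for name, m in models.items()}
```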
no code implementations • WS 2017 • Hany Ahmed, Mohamed Elaraby, Abdullah M. Mousa, Mostafa Elhosiny, Sherif Abdou, Mohsen Rashwan
The proposed technique is mainly based on i-vectors and a Self-Organizing Map (SOM) neural network. The input to the proposed algorithm is a set of speech utterances.
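A numpy-only sketch of grouping utterance i-vectors with a SOM (the grid size and training schedule are arbitrary assumptions, and the i-vector extraction pipeline itself is not shown):

```python
import numpy as np

def train_som(ivectors, grid=(8, 8), iters=2000, lr=0.5, sigma=2.0, seed=0):
    """Fit a Self-Organizing Map to i-vectors; utterances can then be
    grouped by the map cell (best matching unit) they land on."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.normal(size=(h, w, ivectors.shape[1]))
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), axis=-1)
    for t in range(iters):
        x = ivectors[rng.integers(len(ivectors))]
        dist = np.linalg.norm(weights - x, axis=-1)
        bmu = np.unravel_index(dist.argmin(), dist.shape)  # best matching unit
        decay = np.exp(-t / iters)                         # shrink lr and radius
        nbr = np.exp(-np.sum((coords - bmu) ** 2, axis=-1) / (2 * (sigma * decay) ** 2))
        weights += (lr * decay) * nbr[..., None] * (x - weights)
    return weights

def assign(ivectors, weights):
    """Map each utterance's i-vector to its winning SOM cell."""
    d = np.linalg.norm(weights[None] - ivectors[:, None, None], axis=-1)
    return [np.unravel_index(di.argmin(), di.shape) for di in d]
```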