1 code implementation • 23 Jan 2023 • Avirup Sil, Jaydeep Sen, Bhavani Iyer, Martin Franz, Kshitij Fadnis, Mihaela Bornea, Sara Rosenthal, Scott McCarley, Rong Zhang, Vishwajeet Kumar, Yulong Li, Md Arafat Sultan, Riyaz Bhat, Radu Florian, Salim Roukos
The field of Question Answering (QA) has made remarkable progress in recent years, thanks to the advent of large pre-trained language models, newer realistic benchmark datasets with leaderboards, and novel algorithms for key components such as retrievers and readers.
Recent machine reading comprehension datasets include extractive and boolean questions but current approaches do not offer integrated support for answering both question types.
Pretrained language models have shown success in various areas of natural language processing, including reading comprehension tasks.
Existing datasets that contain boolean questions, such as BoolQ and TyDi QA, provide the user with a YES/NO response to the question.
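One common way to support both boolean and extractive questions in a single system is to first classify the question type and then dispatch to the appropriate answering head. The sketch below is a hypothetical illustration, not the method of any of the papers above; the starter-word heuristic and both "heads" are toy stand-ins for trained models.

```python
# Toy sketch: route a question to a boolean (YES/NO) head or an
# extractive (span) head based on its surface form.

BOOLEAN_STARTERS = ("is", "are", "was", "were", "do", "does", "did", "can")

def question_type(question):
    # A real system would use a trained classifier; here we use the
    # first token as a crude heuristic.
    first = question.lower().split()[0]
    return "boolean" if first in BOOLEAN_STARTERS else "extractive"

def answer(question, passage):
    if question_type(question) == "boolean":
        # Stand-in for a YES/NO classifier over (question, passage).
        return "YES"
    # Stand-in for a span-extraction reader.
    return passage.split(".")[0]

print(question_type("Is the sky blue?"))   # boolean
print(question_type("Who wrote Hamlet?"))  # extractive
```

In practice the routing decision and both heads would share one pre-trained encoder rather than being separate rule-based functions.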
Recent approaches have exploited weaknesses in monolingual question answering (QA) models by adding adversarial statements to the passage.
Prior work on multilingual question answering has mostly focused on using large multilingual pre-trained language models (LMs) for zero-shot cross-lingual transfer: train a QA model on English data and evaluate it directly on other languages.
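The zero-shot setup described above can be sketched as a simple train/evaluate loop: fine-tune on English only, then score every language without further training. Everything below is an illustrative toy, not any paper's implementation; the memorizing "model" stands in for a fine-tuned multilingual LM.

```python
# Hypothetical sketch of zero-shot cross-lingual evaluation.

def train(examples):
    # Stand-in for fine-tuning a multilingual LM on English QA pairs;
    # this toy model simply memorizes question -> answer mappings.
    return dict(examples)

def predict(model, question):
    # A real multilingual LM would transfer to unseen languages;
    # the toy memorizer only answers questions it has seen verbatim.
    return model.get(question, "")

english_train = [("Who wrote Hamlet?", "Shakespeare")]
model = train(english_train)

# Evaluate on every language with no language-specific training.
eval_sets = {
    "en": [("Who wrote Hamlet?", "Shakespeare")],
    "de": [("Wer schrieb Hamlet?", "Shakespeare")],
}

scores = {}
for lang, examples in eval_sets.items():
    correct = sum(predict(model, q) == a for q, a in examples)
    scores[lang] = correct / len(examples)

print(scores)
```

The toy memorizer scores 1.0 on English and 0.0 on German, which is exactly the gap that shared multilingual representations are meant to close.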
We also show how M-GAAMA can be used in downstream tasks by incorporating it into an end-to-end QA system using CFO (Chakravarti et al., 2019).
We present the results and main findings of SemEval-2020 Task 12 on Multilingual Offensive Language Identification in Social Media (OffensEval 2020).
The widespread use of offensive content in social media has led to an abundance of research in detecting language such as hate speech, cyberbullying, and cyber-aggression.
To address this issue, we proposed SemEval-2013 Task 2: Sentiment Analysis in Twitter, which included two subtasks: subtask A at the expression level and subtask B at the message level.
In this paper, we describe the 2015 iteration of the SemEval shared task on Sentiment Analysis in Twitter.
The three new subtasks focus on two variants of the basic "sentiment classification in Twitter" task.
We propose using sections from medical literature (e.g., textbooks, journals, web content) that contain content similar to that found in EHR sections.
We present the results and the main findings of SemEval-2019 Task 6 on Identifying and Categorizing Offensive Language in Social Media (OffensEval).
In particular, we model the task hierarchically, identifying the type and the target of offensive messages in social media.
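The hierarchical scheme can be pictured as a cascade of decisions, following the OLID-style levels used in OffensEval: (A) offensive or not, (B) targeted or untargeted, (C) target is an individual or a group. The keyword lists and rules below are hypothetical stand-ins for trained classifiers, shown only to make the cascade concrete.

```python
# Toy sketch of hierarchical offensive-language classification.
# Labels follow the OffensEval levels: NOT / OFF, UNT / TIN, IND / GRP.

OFFENSIVE_WORDS = {"idiot", "trash"}   # stand-in for a level-A classifier
GROUP_WORDS = {"they", "them"}         # stand-in for a level-C classifier

def classify(tweet):
    tokens = set(tweet.lower().split())
    # Level A: is the message offensive at all?
    if not tokens & OFFENSIVE_WORDS:
        return ("NOT",)
    # Level B: is the offense directed at someone?
    if "you" not in tokens and not tokens & GROUP_WORDS:
        return ("OFF", "UNT")
    # Level C: who is the target?
    target = "GRP" if tokens & GROUP_WORDS else "IND"
    return ("OFF", "TIN", target)

print(classify("have a nice day"))   # ('NOT',)
print(classify("you are an idiot"))  # ('OFF', 'TIN', 'IND')
```

A real system would replace each rule with a learned model, but the key design point survives: later levels are only evaluated when earlier levels fire, so errors and labels compose hierarchically.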