1 code implementation • 23 Jan 2023 • Avirup Sil, Jaydeep Sen, Bhavani Iyer, Martin Franz, Kshitij Fadnis, Mihaela Bornea, Sara Rosenthal, Scott McCarley, Rong Zhang, Vishwajeet Kumar, Yulong Li, Md Arafat Sultan, Riyaz Bhat, Radu Florian, Salim Roukos
The field of Question Answering (QA) has made remarkable progress in recent years, thanks to the advent of large pre-trained language models, newer realistic benchmark datasets with leaderboards, and novel algorithms for key components such as retrievers and readers.
no code implementations • 16 Jun 2022 • Scott McCarley, Mihaela Bornea, Sara Rosenthal, Anthony Ferritto, Md Arafat Sultan, Avirup Sil, Radu Florian
Recent machine reading comprehension datasets include extractive and boolean questions, but current approaches do not offer integrated support for answering both question types.
1 code implementation • DeepLo 2022 • Xiang Pan, Alex Sheng, David Shimshoni, Aditya Singhal, Sara Rosenthal, Avirup Sil
Pretrained language models have shown success in various areas of natural language processing, including reading comprehension tasks.
no code implementations • 14 Dec 2021 • Sara Rosenthal, Mihaela Bornea, Avirup Sil, Radu Florian, Scott McCarley
Existing datasets that contain boolean questions, such as BoolQ and TYDI QA, provide the user with a YES/NO response to the question.
no code implementations • 15 Apr 2021 • Sara Rosenthal, Mihaela Bornea, Avirup Sil
Recent approaches have exploited weaknesses in monolingual question answering (QA) models by adding adversarial statements to the passage.
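The snippet above refers to adversarial evaluation of extractive QA. Below is a minimal sketch of the general idea, not the paper's attack: a hypothetical distractor sentence is appended to the passage and the predictions of an off-the-shelf extractive QA model are compared before and after. The Hugging Face transformers pipeline and the DistilBERT SQuAD checkpoint are illustrative assumptions, not the models used in the paper.

# Sketch of adversarial-statement evaluation for extractive QA (assumes the
# `transformers` library; the checkpoint name is an illustrative choice).
from transformers import pipeline

qa = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")

question = "Where was Marie Curie born?"
context = "Marie Curie was born in Warsaw in 1867 and later studied in Paris."
# Distractor: syntactically similar to the question, but about a different entity.
distractor = " Pierre Curie was born in Paris."

clean = qa(question=question, context=context)
attacked = qa(question=question, context=context + distractor)
print("clean:   ", clean["answer"], round(clean["score"], 3))
print("attacked:", attacked["answer"], round(attacked["score"], 3))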
no code implementations • 10 Dec 2020 • Mihaela Bornea, Lin Pan, Sara Rosenthal, Radu Florian, Avirup Sil
Prior work on multilingual question answering has mostly focused on using large multilingual pre-trained language models (LM) to perform zero-shot language-wise learning: train a QA model on English and test on other languages.
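The zero-shot language-wise setup mentioned above (train on English, test on other languages) can be illustrated at inference time with a very small sketch, assuming the Hugging Face transformers library and a multilingual checkpoint fine-tuned only on English QA data; the checkpoint name below is an illustrative assumption and may need substituting.

# Sketch of zero-shot cross-lingual QA: the model saw only English (SQuAD-style)
# supervision, but the multilingual XLM-R encoder lets us query it in another
# language with no target-language training.
from transformers import pipeline

qa = pipeline("question-answering", model="deepset/xlm-roberta-large-squad2")

# German question and passage, a language unseen during QA fine-tuning.
result = qa(
    question="Wo wurde Marie Curie geboren?",
    context="Marie Curie wurde 1867 in Warschau geboren und studierte später in Paris.",
)
print(result["answer"], result["score"])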
no code implementations • COLING 2020 • Anthony Ferritto, Sara Rosenthal, Mihaela Bornea, Kazi Hasan, Rishav Chakravarti, Salim Roukos, Radu Florian, Avi Sil
We also show how M-GAAMA can be used in downstream tasks by incorporating it into an end-to-end QA system using CFO (Chakravarti et al., 2019).
no code implementations • SEMEVAL 2020 • Marcos Zampieri, Preslav Nakov, Sara Rosenthal, Pepa Atanasova, Georgi Karadzhov, Hamdy Mubarak, Leon Derczynski, Zeses Pitenis, Çağrı Çöltekin
We present the results and main findings of SemEval-2020 Task 12 on Multilingual Offensive Language Identification in Social Media (OffensEval 2020).
no code implementations • Findings (ACL) 2021 • Sara Rosenthal, Pepa Atanasova, Georgi Karadzhov, Marcos Zampieri, Preslav Nakov
The widespread use of offensive content in social media has led to an abundance of research in detecting language such as hate speech, cyberbullying, and cyber-aggression.
no code implementations • SEMEVAL 2013 • Preslav Nakov, Zornitsa Kozareva, Alan Ritter, Sara Rosenthal, Veselin Stoyanov, Theresa Wilson
To address this issue, we have proposed SemEval-2013 Task 2: Sentiment Analysis in Twitter, which included two subtasks: A, an expression-level subtask, and B, a message-level subtask.
no code implementations • SEMEVAL 2014 • Sara Rosenthal, Preslav Nakov, Alan Ritter, Veselin Stoyanov
We describe the Sentiment Analysis in Twitter task, run as part of SemEval-2014.
no code implementations • SEMEVAL 2015 • Sara Rosenthal, Saif M. Mohammad, Preslav Nakov, Alan Ritter, Svetlana Kiritchenko, Veselin Stoyanov
In this paper, we describe the 2015 iteration of the SemEval shared task on Sentiment Analysis in Twitter.
no code implementations • SEMEVAL 2016 • Preslav Nakov, Alan Ritter, Sara Rosenthal, Fabrizio Sebastiani, Veselin Stoyanov
The three new subtasks focus on two variants of the basic "sentiment classification in Twitter" task.
no code implementations • SEMEVAL 2017 • Sara Rosenthal, Noura Farra, Preslav Nakov
This paper describes the fifth year of the Sentiment Analysis in Twitter task.
no code implementations • IJCNLP 2019 • Sara Rosenthal, Ken Barker, Zhicheng Liang
We propose using sections from medical literature (e.g., textbooks, journals, web content) that contain content similar to that found in EHR sections.
1 code implementation • SEMEVAL 2019 • Marcos Zampieri, Shervin Malmasi, Preslav Nakov, Sara Rosenthal, Noura Farra, Ritesh Kumar
We present the results and the main findings of SemEval-2019 Task 6 on Identifying and Categorizing Offensive Language in Social Media (OffensEval).
2 code implementations • NAACL 2019 • Marcos Zampieri, Shervin Malmasi, Preslav Nakov, Sara Rosenthal, Noura Farra, Ritesh Kumar
In particular, we model the task hierarchically, identifying the type and the target of offensive messages in social media.
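The hierarchical modelling mentioned above follows the OLID levels: A (offensive or not), B (targeted or untargeted), C (target type). The sketch below is a toy illustration of that cascade, not the authors' implementation; the TF-IDF + logistic-regression pipelines and the tiny training sets are stand-in assumptions.

# Sketch of a three-level cascade over the OLID hierarchy (toy data only).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny illustrative training sets, one per level of the hierarchy.
level_a = (["you are awful", "have a nice day", "this is garbage", "great work"],
           ["OFF", "NOT", "OFF", "NOT"])          # offensive vs. not
level_b = (["you are awful", "this is garbage"], ["TIN", "UNT"])       # targeted vs. untargeted
level_c = (["you are awful", "those people are awful"], ["IND", "GRP"])  # target type

def fit(texts_and_labels):
    texts, labels = texts_and_labels
    return make_pipeline(TfidfVectorizer(), LogisticRegression()).fit(texts, labels)

clf_a, clf_b, clf_c = fit(level_a), fit(level_b), fit(level_c)

def classify(text):
    # Walk down the hierarchy, stopping when a level makes the next one moot.
    a = clf_a.predict([text])[0]
    if a == "NOT":
        return (a, None, None)
    b = clf_b.predict([text])[0]
    if b == "UNT":
        return (a, b, None)
    return (a, b, clf_c.predict([text])[0])

print(classify("you are awful"))
print(classify("have a nice day"))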
no code implementations • WS 2018 • Sara Rosenthal, Adam Faulkner
We present a novel annotation task evaluating a patient's engagement with their health care regimen.
no code implementations • LREC 2012 • Jacob Andreas, Sara Rosenthal, Kathleen McKeown
We introduce a new corpus of sentence-level agreement and disagreement annotations over LiveJournal and Wikipedia threads.