1 code implementation • LREC 2022 • Rafael Jimenez Silva, Kaushik Gedela, Alex Marr, Bart Desmet, Carolyn Rose, Chunxiao Zhou
In this paper we contribute QA4IE, a comprehensive quality assurance (QA) tool for information extraction, which can (1) detect potential problems in text annotations in a timely manner, (2) accurately assess the quality of annotations, (3) visually display and summarize annotation discrepancies among annotation team members, (4) provide a comprehensive statistics report, and (5) support interactive viewing of annotated documents.
no code implementations • Findings (NAACL) 2022 • Ritam Dutt, Kasturi Bhattacharjee, Rashmi Gangadharaiah, Dan Roth, Carolyn Rose
The above concerns motivate our question answering setting over personalized knowledge graphs (PERKGQA) where each user has restricted access to their KG.
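A hypothetical sketch of the restricted-access setting (the triple layout, owner field, and helper below are illustrative assumptions, not the PERKGQA data model): each fact carries an owner, and a user's question is answered only over the subgraph they are permitted to see.

```python
# Illustrative only: per-user restricted access to a knowledge graph.
from collections import defaultdict

kg = [  # (head, relation, tail, owner) -- owner marks who may see the triple
    ("alice", "has_appointment", "2021-05-03", "alice"),
    ("alice", "prescribed", "drug_x", "alice"),
    ("bob", "has_appointment", "2021-06-11", "bob"),
]

def user_subgraph(triples, user):
    """Return only the triples the given user is allowed to access."""
    sub = defaultdict(list)
    for head, rel, tail, owner in triples:
        if owner == user:
            sub[head].append((rel, tail))
    return sub

print(user_subgraph(kg, "alice"))  # bob's facts are never visible to alice
```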
no code implementations • SIGDIAL (ACL) 2020 • Yansen Wang, R. Charles Murray, Haogang Bao, Carolyn Rose
For the past 15 years, in computer-supported collaborative learning applications, conversational agents have been used to structure group interactions in online chat-based environments.
no code implementations • EMNLP 2020 • Yansen Wang, Zhen Fan, Carolyn Rose
Open-domain keyphrase extraction (KPE) on the Web is a fundamental yet complex NLP task with a wide range of practical applications within the field of Information Retrieval.
no code implementations • 1 Sep 2022 • Hao-Ren Yao, Luke Breitfeller, Aakanksha Naik, Chunxiao Zhou, Carolyn Rose
Our model uses a BERT-based language model to encode local context and a Graph Neural Network (GNN) to represent global document-level syntactic and temporal characteristics.
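A minimal PyTorch sketch of this kind of hybrid encoder (illustrative only, not the released code): a pretrained BERT model supplies per-sentence local representations, and a single hand-rolled GCN-style layer propagates them over an assumed document graph. The class name, toy adjacency matrix, and fusion by concatenation are our assumptions.

```python
# Sketch: BERT for local context + one GCN-style layer for document structure.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer


class LocalGlobalEncoder(nn.Module):
    def __init__(self, model_name="bert-base-uncased", hidden=768):
        super().__init__()
        self.bert = AutoModel.from_pretrained(model_name)   # local context
        self.gnn_weight = nn.Linear(hidden, hidden)          # one GCN-style layer
        self.classifier = nn.Linear(2 * hidden, 2)           # e.g. relation yes/no

    def forward(self, input_ids, attention_mask, adjacency):
        # Local: [CLS] embedding per sentence (each sentence = one graph node).
        out = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        node_feats = out.last_hidden_state[:, 0]              # (num_nodes, hidden)
        # Global: propagate features over the row-normalized document graph.
        deg = adjacency.sum(dim=1, keepdim=True).clamp(min=1)
        global_feats = torch.relu(self.gnn_weight((adjacency / deg) @ node_feats))
        return self.classifier(torch.cat([node_feats, global_feats], dim=-1))


tok = AutoTokenizer.from_pretrained("bert-base-uncased")
sents = ["The patient was admitted on Monday.", "Surgery followed two days later."]
enc = tok(sents, return_tensors="pt", padding=True)
adj = torch.tensor([[0.0, 1.0], [1.0, 0.0]])                  # toy document graph
logits = LocalGlobalEncoder()(enc["input_ids"], enc["attention_mask"], adj)
```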
1 code implementation • 2 Nov 2021 • Aakanksha Naik, Jill Lehman, Carolyn Rose
We reflect on the question: have transfer learning methods sufficiently addressed the poor performance of benchmark-trained models on the long tail?
no code implementations • 15 May 2021 • Luke Breitfeller, Aakanksha Naik, Carolyn Rose
We demonstrate the utility of extracted cues by integrating them with an event ordering model using a joint BiLSTM and ILP constraint architecture.
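As a rough sketch of the ILP side of such an architecture (the hard-coded pairwise scores stand in for BiLSTM outputs, and pulp is an assumed solver; none of this is the paper's code), a globally consistent event order can be decoded by maximizing agreement with the scores under antisymmetry and transitivity constraints.

```python
# Illustrative ILP decoding over pairwise "before" scores.
import itertools
import pulp

events = ["admission", "surgery", "discharge"]
# score[(i, j)] ~ model confidence that event i happens before event j
score = {("admission", "surgery"): 0.9, ("surgery", "admission"): 0.1,
         ("surgery", "discharge"): 0.8, ("discharge", "surgery"): 0.2,
         ("admission", "discharge"): 0.6, ("discharge", "admission"): 0.4}

prob = pulp.LpProblem("event_ordering", pulp.LpMaximize)
before = {(i, j): pulp.LpVariable(f"b_{i}_{j}", cat="Binary")
          for i, j in itertools.permutations(events, 2)}

# Objective: agree with the pairwise scores as much as possible.
prob += pulp.lpSum(score[p] * before[p] for p in before)
for i, j in itertools.combinations(events, 2):
    prob += before[(i, j)] + before[(j, i)] == 1                   # antisymmetry
for i, j, k in itertools.permutations(events, 3):
    prob += before[(i, j)] + before[(j, k)] - before[(i, k)] <= 1  # transitivity

prob.solve(pulp.PULP_CBC_CMD(msg=False))
order = sorted(events, key=lambda e: -sum(before[(e, o)].value()
                                          for o in events if o != e))
print(order)  # expected: ['admission', 'surgery', 'discharge']
```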
no code implementations • NAACL 2021 • Sopan Khosla, James Fiacco, Carolyn Rose
Recent work on entity coreference resolution (CR) follows current trends in Deep Learning applied to embeddings and relatively simple task-related features.
no code implementations • EACL 2021 • Qinlan Shen, Carolyn Rose
In this paper, we challenge the assumption that political ideology is inherently built into text by presenting an investigation into the impact of experiential factors on annotator perceptions of political ideology.
no code implementations • EMNLP (CODI) 2020 • Sopan Khosla, Carolyn Rose
Coreference resolution (CR) is an essential part of discourse analysis.
1 code implementation • EMNLP 2020 • Sopan Khosla, Shikhar Vashishth, Jill Fain Lehman, Carolyn Rose
In this paper, we propose the novel modeling approach MedFilter, which addresses these insights in order to increase performance at identifying and categorizing task-relevant utterances, and in so doing, positively impacts performance at a downstream information extraction task.
no code implementations • EACL 2021 • Aakanksha Naik, Jill Lehman, Carolyn Rose
Our best-performing models reach F1 scores of 70.0 and 72.9 on notes and conversations respectively, using no labeled data from target domains.
1 code implementation • 1 May 2020 • Shikhar Vashishth, Denis Newman-Griffis, Rishabh Joshi, Ritam Dutt, Carolyn Rose
To address the dearth of annotated training data for medical entity linking, we present WikiMed and PubMedDS, two large-scale medical entity linking datasets, and demonstrate that pre-training MedType on these datasets further improves entity linking performance.
no code implementations • WS 2019 • Aakanksha Naik, Luke Breitfeller, Carolyn Rose
Prior work on temporal relation classification has focused extensively on event pairs in the same or adjacent sentences (local), paying scant attention to discourse-level (global) pairs.
1 code implementation • WS 2019 • Xinru Yan, Aakanksha Naik, Yohan Jo, Carolyn Rose
We propose a novel take on understanding narratives in social media, focusing on learning "functional story schemas", which consist of sets of stereotypical functional structures.
1 code implementation • WS 2019 • Qinlan Shen, Carolyn Rose
Recent concerns over abusive behavior on their platforms have pressured social media companies to strengthen their content moderation policies.
no code implementations • ACL 2019 • James Fiacco, Samridhi Choudhary, Carolyn Rose
We introduce a general method for the interpretation and comparison of neural models.
no code implementations • ACL 2019 • Aakanksha Naik, Abhilasha Ravichander, Carolyn Rose, Eduard Hovy
In this work, we show that existing embedding models are inadequate at constructing representations that capture salient aspects of mathematical meaning for numbers, which is important for language understanding.
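A quick illustration of one contributing factor (ours, not the paper's probing setup): standard subword tokenizers fragment multi-digit and scientific-notation numbers, so the surface form seen by the model carries little of their numeric structure.

```python
# Illustrative only: how a standard subword tokenizer fragments numbers.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
for num in ["7", "77", "7777", "7777.25", "1e-4"]:
    # Larger or less common numbers are typically split into several pieces,
    # so numeric magnitude is not directly reflected in the input symbols.
    print(num, "->", tok.tokenize(num))
```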
1 code implementation • CONLL 2019 • Abhilasha Ravichander, Aakanksha Naik, Carolyn Rose, Eduard Hovy
Quantitative reasoning is a higher-order reasoning skill that any intelligent natural language understanding system can reasonably be expected to handle.
1 code implementation • COLING 2018 • Aakanksha Naik, Abhilasha Ravichander, Norman Sadeh, Carolyn Rose, Graham Neubig
Natural language inference (NLI) is the task of determining if a natural language hypothesis can be inferred from a given premise in a justifiable manner.
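For readers new to the task, a few toy premise/hypothesis pairs (our own, not drawn from the stress-test suite introduced in the paper) illustrate the three standard labels.

```python
# Toy examples of the three standard NLI labels.
nli_examples = [
    {"premise": "A man is playing a guitar on stage.",
     "hypothesis": "A musician is performing.",       "label": "entailment"},
    {"premise": "A man is playing a guitar on stage.",
     "hypothesis": "The stage is empty.",             "label": "contradiction"},
    {"premise": "A man is playing a guitar on stage.",
     "hypothesis": "The man wrote the song himself.", "label": "neutral"},
]
for ex in nli_examples:
    print(f"{ex['label']:>13}: {ex['premise']} => {ex['hypothesis']}")
```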
no code implementations • WS 2017 • Aakanksha Naik, Chris Bogart, Carolyn Rose
In this paper, we describe a system for automatic construction of user disease progression timelines from their posts in online support groups using minimal supervision.