no code implementations • CRAC (ACL) 2021 • Yilun Zhu, Sameer Pradhan, Amir Zeldes
SOTA coreference resolution produces increasingly impressive scores on the OntoNotes benchmark.
1 code implementation • 25 Mar 2024 • Yilun Zhu, Siyao Peng, Sameer Pradhan, Amir Zeldes
We then propose a two-step neural mention and coreference resolution system, named SPLICE, and compare its performance to the end-to-end approach in two scenarios: the OntoNotes test set and the out-of-domain (OOD) OntoGUM corpus.
no code implementations • 1 Feb 2024 • Yilun Zhu, Joel Ruben Antony Moniz, Shruti Bhargava, Jiarui Lu, Dhivya Piraviperumal, Site Li, Yuan Zhang, Hong Yu, Bo-Hsiang Tseng
Understanding context is key to understanding human language, an ability that Large Language Models (LLMs) have increasingly been shown to demonstrate to an impressive extent.
1 code implementation • 20 Sep 2023 • Yilun Zhu, Siyao Peng, Sameer Pradhan, Amir Zeldes
Previous attempts to incorporate a mention detection step into end-to-end neural coreference resolution for English have been hampered by the lack of singleton mention span data as well as other entity information.
Ranked #1 on Coreference Resolution on OntoGUM
no code implementations • 29 Jul 2023 • Yilun Zhu, Clayton Scott, Darren Holland, George Landon, Aaron Fjeldsted, Azaree Lintereur
Many nuclear safety applications need fast, portable, and accurate imagers to better locate radiation sources.
1 code implementation • 3 Jun 2023 • Tatsuya Aoyama, Shabnam Behzad, Luke Gessler, Lauren Levine, Jessica Lin, Yang Janet Liu, Siyao Peng, Yilun Zhu, Amir Zeldes
We evaluate state-of-the-art NLP systems on GENTLE and find severe degradation for at least some genres in their performance on all tasks, which indicates GENTLE's utility as an evaluation dataset for NLP systems.
1 code implementation • 2 Jun 2023 • Yilun Zhu, Aaron Fjeldsted, Darren Holland, George Landon, Azaree Lintereur, Clayton Scott
The task of mixture proportion estimation (MPE) is to estimate the weight of a component distribution in a mixture, given observations from both the component and mixture.
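As a concrete illustration of the MPE task (a toy sketch, not the paper's estimator): if the mixture density is f = κ·g + (1 − κ)·h, with g the known component and h unknown, then κ ≤ inf_x f(x)/g(x), and a naive binned density-ratio estimator takes the smallest mixture-to-component histogram ratio over a region where g dominates. All distributions, sample sizes, and variable names below are illustrative assumptions.

```python
import numpy as np

# Toy MPE setup: mixture F = kappa*G + (1-kappa)*H, with G = N(0,1) known
# and H = N(4,1) unknown. We observe samples from G and from F and
# estimate kappa via the binned ratio f(x)/g(x), whose infimum over x
# upper-bounds (and here approximately equals) kappa.
rng = np.random.default_rng(0)
kappa_true = 0.3
n = 20_000

component = rng.normal(0.0, 1.0, size=n)          # draws from G
is_comp = rng.random(n) < kappa_true
mixture = np.where(is_comp,
                   rng.normal(0.0, 1.0, size=n),  # G part of the mixture
                   rng.normal(4.0, 1.0, size=n))  # H part of the mixture

# Histogram both samples over a region where G dominates H, then take the
# smallest mixture/component count ratio as the kappa estimate.
bins = np.linspace(-2.0, 1.0, 7)
comp_counts, _ = np.histogram(component, bins=bins)
mix_counts, _ = np.histogram(mixture, bins=bins)
kappa_hat = float(np.min(mix_counts / comp_counts))

print(f"estimated kappa: {kappa_hat:.3f}  (true: {kappa_true})")
```

With well-separated component and background, the ratio in bins near the component mode is close to κ; finite-sample noise in the bin counts biases the minimum slightly downward, which is why practical MPE methods replace this naive envelope with more careful estimators.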
1 code implementation • CRAC (ACL) 2022 • Zdeněk Žabokrtský, Miloslav Konopík, Anna Nedoluzhko, Michal Novák, Maciej Ogrodniczuk, Martin Popel, Ondřej Pražák, Jakub Sido, Daniel Zeman, Yilun Zhu
The public edition of CorefUD 1.0, which contains 13 datasets for 10 languages, was used as the source of training and evaluation data.
1 code implementation • EMNLP (DISRPT) 2021 • Luke Gessler, Shabnam Behzad, Yang Janet Liu, Siyao Peng, Yilun Zhu, Amir Zeldes
This paper describes our submission to the DISRPT2021 Shared Task on Discourse Unit Segmentation, Connective Detection, and Relation Classification.
1 code implementation • ACL 2021 • Yilun Zhu, Sameer Pradhan, Amir Zeldes
SOTA coreference resolution produces increasingly impressive scores on the OntoNotes benchmark.
Ranked #2 on Coreference Resolution on OntoGUM
1 code implementation • LREC 2020 • Luke Gessler, Siyao Peng, Yang Liu, Yilun Zhu, Shabnam Behzad, Amir Zeldes
We present a freely available, genre-balanced English web corpus totaling 4M tokens and featuring a large number of high-quality automatic annotation layers, including dependency trees, non-named entity annotations, coreference resolution, and discourse trees in Rhetorical Structure Theory.
no code implementations • LREC 2020 • Siyao Peng, Yang Liu, Yilun Zhu, Austin Blodgett, Yushi Zhao, Nathan Schneider
Adpositions are frequent markers of semantic relations, but they are highly ambiguous and vary significantly from language to language.
1 code implementation • WS 2019 • Yue Yu, Yilun Zhu, Yang Liu, Yan Liu, Siyao Peng, Mackenzie Gong, Amir Zeldes
In this paper we present GumDrop, Georgetown University's entry at the DISRPT 2019 Shared Task on automatic discourse unit segmentation and connective detection.
no code implementations • 6 Dec 2018 • Yilun Zhu, Yang Liu, Siyao Peng, Austin Blodgett, Yushi Zhao, Nathan Schneider
This study adapts Semantic Network of Adposition and Case Supersenses (SNACS) annotation to Mandarin Chinese and demonstrates that the same supersense categories are appropriate for Chinese adposition semantics.