no code implementations • LREC 2020 • Kyeongmin Rim, Jingxuan Tu, Kelley Lynch, James Pustejovsky
Within the natural language processing (NLP) community, shared tasks play an important role.
no code implementations • NAACL 2021 • Qingyun Wang, Manling Li, Xuan Wang, Nikolaus Parulian, Guangxing Han, Jiawei Ma, Jingxuan Tu, Ying Lin, Haoran Zhang, Weili Liu, Aabhas Chauhan, Yingjun Guan, Bangzheng Li, Ruisong Li, Xiangchen Song, Yi R. Fung, Heng Ji, Jiawei Han, Shih-Fu Chang, James Pustejovsky, Jasmine Rah, David Liem, Ahmed Elsayed, Martha Palmer, Clare Voss, Cynthia Schneider, Boyan Onyshkevych
To combat COVID-19, both clinicians and scientists need to digest vast amounts of relevant biomedical knowledge in scientific literature to understand the disease mechanism and related biological functions.
no code implementations • NAACL 2021 • Jingxuan Tu, Marc Verhagen, Brent Cochran, James Pustejovsky
We are developing semantic visualization techniques in order to enhance exploration and enable discovery over large datasets of complex networks of relations.
no code implementations • EACL 2021 • Jingxuan Tu, Constantine Lignos
We propose the Tough Mentions Recall (TMR) metrics to supplement traditional named entity recognition (NER) evaluation by examining recall on specific subsets of "tough" mentions: unseen mentions, those whose tokens or token/type combination were not observed in training, and type-confusable mentions, token sequences with multiple entity types in the test data.
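The core of TMR is ordinary recall, restricted to a chosen subset of gold mentions (e.g. those unseen in training). A minimal sketch of that idea, not the authors' implementation, with mentions represented as hypothetical (start, end, type) tuples:

```python
# Sketch of subset-restricted recall in the spirit of the TMR metrics.
# The mention representation and variable names here are assumptions,
# not the paper's actual data structures.

def subset_recall(gold_mentions, predicted_mentions, tough_subset):
    """Recall computed only over gold mentions in `tough_subset`.

    Mentions are (start, end, type) tuples; `tough_subset` is the set
    of gold mentions considered "tough" (e.g. unseen in training, or
    type-confusable token sequences).
    """
    tough_gold = [m for m in gold_mentions if m in tough_subset]
    if not tough_gold:
        return 0.0
    predicted = set(predicted_mentions)
    found = sum(1 for m in tough_gold if m in predicted)
    return found / len(tough_gold)

# Example: two "tough" gold mentions, one recovered by the system.
gold = [(0, 2, "PER"), (5, 7, "ORG"), (9, 10, "LOC")]
pred = [(0, 2, "PER"), (9, 10, "LOC")]
unseen = {(5, 7, "ORG"), (9, 10, "LOC")}
print(subset_recall(gold, pred, unseen))  # 0.5
```

Overall recall is the same computation with the subset equal to all gold mentions, which is why TMR slots in as a supplement rather than a replacement for standard NER evaluation.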
no code implementations • 12 May 2021 • James Pustejovsky, Eben Holderness, Jingxuan Tu, Parker Glenn, Kyeongmin Rim, Kelley Lynch, Richard Brutti
In this paper, we argue that the design and development of multimodal datasets for natural language processing (NLP) challenges should be enhanced in two significant respects: to more broadly represent commonsense semantic inferences; and to better reflect the dynamics of actions and events, through a substantive alignment of textual and visual information.
no code implementations • SemEval (NAACL) 2022 • Jingxuan Tu, Eben Holderness, Marco Maru, Simone Conia, Kyeongmin Rim, Kelley Lynch, Richard Brutti, Roberto Navigli, James Pustejovsky
In this task, we identify a challenge that is reflective of linguistic and cognitive competencies that humans have when speaking and reasoning.
no code implementations • LREC 2022 • Nancy Ide, Keith Suderman, Jingxuan Tu, Marc Verhagen, Shanan Peters, Ian Ross, John Lawson, Andrew Borg, James Pustejovsky
This paper gives an overview of the xDD/LAPPS Grid framework and presents results of evaluating the AskMe retrieval engine on the BEIR benchmark datasets.
no code implementations • COLING 2022 • Jingxuan Tu, Kyeongmin Rim, James Pustejovsky
Models of natural language understanding often rely on question answering and logical inference benchmark challenges to evaluate the performance of a system.
no code implementations • 20 Oct 2022 • Jingxuan Tu, Kyeongmin Rim, Eben Holderness, James Pustejovsky
Understanding inferences and answering questions from text requires more than merely recovering surface arguments, adjuncts, or strings associated with the query terms.
1 code implementation • 26 Mar 2024 • Ibrahim Khebour, Kenneth Lai, Mariah Bradford, Yifan Zhu, Richard Brutti, Christopher Tam, Jingxuan Tu, Benjamin Ibarra, Nathaniel Blanchard, Nikhil Krishnaswamy, James Pustejovsky
Within dialogue modeling research in AI and NLP, considerable attention has been devoted to "dialogue state tracking" (DST), the ability to update the representation of the speaker's needs at each turn in the dialogue by taking into account the past dialogue moves and history.