Search Results for author: Jingxuan Tu

Found 10 papers, 1 paper with code

Exploration and Discovery of the COVID-19 Literature through Semantic Visualization

no code implementations NAACL 2021 Jingxuan Tu, Marc Verhagen, Brent Cochran, James Pustejovsky

We are developing semantic visualization techniques in order to enhance exploration and enable discovery over large datasets of complex networks of relations.

Knowledge Graphs TAG

TMR: Evaluating NER Recall on Tough Mentions

no code implementations EACL 2021 Jingxuan Tu, Constantine Lignos

We propose the Tough Mentions Recall (TMR) metrics to supplement traditional named entity recognition (NER) evaluation by examining recall on specific subsets of "tough" mentions: unseen mentions, those whose tokens or token/type combination were not observed in training, and type-confusable mentions, token sequences with multiple entity types in the test data.

named-entity-recognition Named Entity Recognition +1
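As a rough illustration only, here is a minimal Python sketch of computing recall over "tough" mention subsets (unseen and type-confusable), assuming mentions are simple (token-tuple, entity-type) pairs. The subset definitions and function names below are simplified assumptions, not the paper's exact TMR formulation.

```python
# Hypothetical sketch of recall over "tough" mention subsets.
# The selection criteria are simplified illustrations, not the paper's exact definitions.
from collections import defaultdict

def recall(gold, predicted):
    """Fraction of gold mentions that appear among the predictions."""
    if not gold:
        return 0.0
    predicted = set(predicted)
    return sum(1 for m in gold if m in predicted) / len(gold)

def tough_mention_recall(train_mentions, test_gold, test_pred):
    # A mention is a (tokens, entity_type) pair, e.g. (("New", "York"), "LOC").
    seen_tokens = {tokens for tokens, _ in train_mentions}

    # Unseen mentions: token sequences never observed in training.
    unseen = [m for m in test_gold if m[0] not in seen_tokens]

    # Type-confusable mentions: token sequences labeled with more than one type in the test data.
    types_by_tokens = defaultdict(set)
    for tokens, etype in test_gold:
        types_by_tokens[tokens].add(etype)
    confusable = [m for m in test_gold if len(types_by_tokens[m[0]]) > 1]

    return {
        "unseen_recall": recall(unseen, test_pred),
        "type_confusable_recall": recall(confusable, test_pred),
    }

# Toy example with made-up mentions.
train = [(("Paris",), "LOC")]
gold = [(("Paris",), "LOC"), (("Brandeis",), "ORG")]
pred = [(("Paris",), "LOC")]
print(tough_mention_recall(train, gold, pred))
```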

Designing Multimodal Datasets for NLP Challenges

no code implementations 12 May 2021 James Pustejovsky, Eben Holderness, Jingxuan Tu, Parker Glenn, Kyeongmin Rim, Kelley Lynch, Richard Brutti

In this paper, we argue that the design and development of multimodal datasets for natural language processing (NLP) challenges should be enhanced in two significant respects: to more broadly represent commonsense semantic inferences; and to better reflect the dynamics of actions and events, through a substantive alignment of textual and visual information.

Evaluating Retrieval for Multi-domain Scientific Publications

no code implementations LREC 2022 Nancy Ide, Keith Suderman, Jingxuan Tu, Marc Verhagen, Shanan Peters, Ian Ross, John Lawson, Andrew Borg, James Pustejovsky

This paper provides an overview of the xDD/LAPPS Grid framework and presents results of evaluating the AskMe retrieval engine using the BEIR benchmark datasets.

Retrieval
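The abstract does not describe the evaluation itself; as a hedged illustration, here is a small self-contained sketch of a BEIR-style evaluation loop that scores ranked retrieval results against relevance judgments (qrels) with nDCG@k and Recall@k. The data layout and toy IDs are assumptions, not details of the AskMe engine or the paper's setup.

```python
# Illustrative BEIR-style retrieval evaluation.
# qrels:   {query_id: {doc_id: relevance_grade}}
# results: {query_id: {doc_id: retrieval_score}}
import math

def ndcg_at_k(qrels, results, k=10):
    """Mean nDCG@k over queries, using linear gains (illustrative, not pytrec_eval)."""
    scores = []
    for qid, rels in qrels.items():
        run = results.get(qid, {})
        ranked = sorted(run, key=run.get, reverse=True)[:k]
        dcg = sum(rels.get(doc, 0) / math.log2(rank + 2) for rank, doc in enumerate(ranked))
        ideal = sorted(rels.values(), reverse=True)[:k]
        idcg = sum(g / math.log2(rank + 2) for rank, g in enumerate(ideal))
        scores.append(dcg / idcg if idcg > 0 else 0.0)
    return sum(scores) / len(scores)

def recall_at_k(qrels, results, k=10):
    """Mean Recall@k over queries."""
    scores = []
    for qid, rels in qrels.items():
        relevant = {doc for doc, grade in rels.items() if grade > 0}
        run = results.get(qid, {})
        ranked = sorted(run, key=run.get, reverse=True)[:k]
        scores.append(len(relevant & set(ranked)) / len(relevant) if relevant else 0.0)
    return sum(scores) / len(scores)

# Toy example with made-up query/document IDs and scores.
qrels = {"q1": {"d1": 1, "d3": 2}}
results = {"q1": {"d1": 0.9, "d2": 0.5, "d3": 0.4}}
print(ndcg_at_k(qrels, results), recall_at_k(qrels, results))
```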

Competence-based Question Generation

no code implementations COLING 2022 Jingxuan Tu, Kyeongmin Rim, James Pustejovsky

Models of natural language understanding often rely on question answering and logical inference benchmark challenges to evaluate the performance of a system.

Natural Language Understanding Question Answering +2

Dense Paraphrasing for Textual Enrichment

no code implementations 20 Oct 2022 Jingxuan Tu, Kyeongmin Rim, Eben Holderness, James Pustejovsky

Understanding inferences and answering questions from text requires more than merely recovering surface arguments, adjuncts, or strings associated with the query terms.

Sentence

Common Ground Tracking in Multimodal Dialogue

1 code implementation 26 Mar 2024 Ibrahim Khebour, Kenneth Lai, Mariah Bradford, Yifan Zhu, Richard Brutti, Christopher Tam, Jingxuan Tu, Benjamin Ibarra, Nathaniel Blanchard, Nikhil Krishnaswamy, James Pustejovsky

Within Dialogue Modeling research in AI and NLP, considerable attention has been devoted to "dialogue state tracking" (DST), which is the ability to update the representations of the speaker's needs at each turn in the dialogue by taking into account the past dialogue moves and history.

Dialogue State Tracking
