no code implementations • EMNLP (DeeLIO) 2020 • Travis Goodwin, Dina Demner-Fushman
Deep neural networks have demonstrated high performance on many natural language processing (NLP) tasks that can be answered directly from text, but have struggled to solve NLP tasks requiring external (e.g., world) knowledge.
1 code implementation • COLING 2020 • Travis Goodwin, Max Savery, Dina Demner-Fushman
Recent work has shown that pre-trained Transformers obtain remarkable performance on many natural language processing tasks including automatic summarization.
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Travis Goodwin, Max Savery, Dina Demner-Fushman
Automatic summarization research has traditionally focused on providing high-quality, general-purpose summaries of documents.
no code implementations • LREC 2016 • Travis Goodwin, Sanda Harabagiu
Building a knowledge graph for representing common-sense knowledge, in which concepts discerned from noun phrases are cast as vertices and lexicalized relations are cast as edges, enables learning embeddings of common-sense knowledge that account for semantic compositionality as well as implied knowledge.
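The graph structure described above can be sketched minimally: noun-phrase concepts become vertices and lexicalized relations become labeled edges. This is a hypothetical toy illustration of the representation, not the paper's implementation; the class and relation strings are invented for the example.

```python
from collections import defaultdict

class KnowledgeGraph:
    """Toy common-sense knowledge graph: noun-phrase concepts are
    vertices; lexicalized relations are labeled, directed edges."""

    def __init__(self):
        # vertex -> list of (relation label, target vertex)
        self.edges = defaultdict(list)

    def add_fact(self, subject, relation, obj):
        """Cast two concepts as vertices joined by a lexicalized relation."""
        self.edges[subject].append((relation, obj))

    def relations(self, subject):
        """All outgoing (relation, object) pairs for a concept."""
        return self.edges[subject]

# Hypothetical common-sense facts for illustration.
kg = KnowledgeGraph()
kg.add_fact("umbrella", "used for", "staying dry")
kg.add_fact("rain", "causes", "wet ground")
```

Embeddings would then be learned over these vertices and edge labels, which is how semantic compositionality and implied knowledge enter the picture.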
no code implementations • LREC 2014 • Travis Goodwin, Sanda Harabagiu
To perform inference on the graphical model, we describe a technique for smoothing the conditional likelihood of medical concepts by their semantically similar belief values.
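One plausible reading of this smoothing step is an interpolation between a concept's own belief value and a similarity-weighted average of the beliefs of semantically similar concepts. The sketch below illustrates that idea under stated assumptions; the function, the interpolation weight `lam`, and the example concepts are all hypothetical, not the paper's actual formulation.

```python
def smooth_belief(concept, beliefs, similarity, lam=0.5):
    """Hypothetical smoothing: interpolate a concept's belief with the
    similarity-weighted average belief of its semantic neighbors.

    beliefs:    concept -> belief value in [0, 1]
    similarity: concept -> list of (neighbor concept, similarity score)
    lam:        weight given to the neighbors' evidence (assumed)
    """
    neighbors = [(other, s) for other, s in similarity.get(concept, [])
                 if other in beliefs]
    if not neighbors:
        return beliefs[concept]  # nothing similar to smooth against
    total_sim = sum(s for _, s in neighbors)
    neighbor_avg = sum(s * beliefs[other] for other, s in neighbors) / total_sim
    return (1 - lam) * beliefs[concept] + lam * neighbor_avg

# Invented medical concepts and scores, purely for illustration.
beliefs = {"hypertension": 0.9, "high blood pressure": 0.8, "diabetes": 0.3}
similarity = {"hypertension": [("high blood pressure", 0.95),
                               ("diabetes", 0.10)]}
smoothed = smooth_belief("hypertension", beliefs, similarity)
```

The intuition is that a concept whose semantically close neighbors carry strong belief should not be assigned a sharply different likelihood during inference.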
no code implementations • LREC 2012 • Kirk Roberts, Travis Goodwin, Sanda M. Harabagiu
Events form complex predicate-argument structures that model the participants in the event, their roles, and their temporal and spatial grounding.