We introduce semantic form mid-tuning, an approach for transferring semantic knowledge from semantic meaning representations into transformer-based language encoders.
We re-examine the situation entity (SE) classification task with varying amounts of available training data.
We propose a learning system in which language is grounded in visual percepts without specific pre-defined categories of terms.
We propose a Bi-Directional Manifold Alignment (BDMA) that learns a non-linear mapping between two manifolds by explicitly training it to be bijective.
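As a rough illustration of training a pair of cross-manifold maps to be (approximately) bijective, the toy sketch below fits a forward map and a backward map jointly with alignment and cycle-consistency penalties. This is a generic, linear, numerically-differentiated toy, not BDMA itself; all names and the loss weighting are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy paired data: Y is a fixed rotation of X (the alignment we hope to recover).
X = rng.normal(size=(200, 2))
theta = 0.5
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
Y = X @ R.T

A = rng.normal(scale=0.1, size=(2, 2))  # forward map  f(x) = x @ A.T
B = rng.normal(scale=0.1, size=(2, 2))  # backward map g(y) = y @ B.T

def loss(A, B):
    align_f = np.mean((X @ A.T - Y) ** 2)         # f maps X onto Y
    align_b = np.mean((Y @ B.T - X) ** 2)         # g maps Y onto X
    cycle_x = np.mean((X @ A.T @ B.T - X) ** 2)   # g(f(x)) ~ x (bijectivity)
    cycle_y = np.mean((Y @ B.T @ A.T - Y) ** 2)   # f(g(y)) ~ y (bijectivity)
    return align_f + align_b + cycle_x + cycle_y

lr, eps = 0.05, 1e-4
for _ in range(300):
    # Finite-difference gradients keep the sketch dependency-free.
    base = loss(A, B)
    gA, gB = np.zeros_like(A), np.zeros_like(B)
    for i in range(2):
        for j in range(2):
            dA = A.copy(); dA[i, j] += eps
            gA[i, j] = (loss(dA, B) - base) / eps
            dB = B.copy(); dB[i, j] += eps
            gB[i, j] = (loss(A, dB) - base) / eps
    A -= lr * gA
    B -= lr * gB

cycle_err = np.mean((X @ A.T @ B.T - X) ** 2)  # should be near zero after training
```

With the explicit cycle terms, the two maps converge toward mutual inverses (here, the rotation and its transpose).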
Ordering the selection of training data using active learning can lead to more efficient learning from smaller corpora.
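One standard way to order training-data selection is pool-based uncertainty sampling: repeatedly train on the labeled set, then label the pool point the model is least sure about. The sketch below is a minimal generic illustration (logistic regression on toy 2-D clusters), not the paper's specific strategy; all hyperparameters are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy binary pool: two Gaussian clusters.
X = np.vstack([rng.normal(-1, 1, size=(100, 2)),
               rng.normal(+1, 1, size=(100, 2))])
y = np.array([0] * 100 + [1] * 100)

def fit_logreg(Xl, yl, steps=200, lr=0.5):
    """Plain gradient-descent logistic regression with a bias feature."""
    Xb = np.hstack([Xl, np.ones((len(Xl), 1))])
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1 / (1 + np.exp(-Xb @ w))
        w -= lr * Xb.T @ (p - yl) / len(yl)
    return w

def predict_proba(w, Xq):
    Xb = np.hstack([Xq, np.ones((len(Xq), 1))])
    return 1 / (1 + np.exp(-Xb @ w))

labeled = [0, 100]              # seed with one example per class
pool = [i for i in range(200) if i not in labeled]
for _ in range(20):             # 20 rounds of uncertainty sampling
    w = fit_logreg(X[labeled], y[labeled])
    p = predict_proba(w, X[pool])
    pick = pool[int(np.argmin(np.abs(p - 0.5)))]  # most uncertain pool point
    labeled.append(pick)
    pool.remove(pick)

w = fit_logreg(X[labeled], y[labeled])
acc = np.mean((predict_proba(w, X) > 0.5) == y)   # accuracy after 22 labels
```

The point of the ordering is that 22 actively chosen labels can approach the accuracy of training on the full pool.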
We show how to learn a neural topic model with discrete random variables (one that explicitly models each word's assigned topic) using neural variational inference that does not rely on stochastic backpropagation to handle the discrete variables.
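The key trick that avoids stochastic backpropagation through a discrete variable is that, for a small number of topics K, the expectation over the topic assignment can be computed as an exact K-term sum rather than by sampling. The sketch below shows this for a single word under a toy mixture; the parameters (`beta`, `prior`) and the setup are illustrative assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(2)
K, V = 4, 10                                # topics, vocabulary size

# Hypothetical parameters: per-topic word distributions and a uniform topic prior.
beta = rng.dirichlet(np.ones(V), size=K)    # K x V topic-word distributions
prior = np.full(K, 1.0 / K)

# Posterior q(z | w) over the discrete topic z for one observed word w,
# computed in closed form -- no sampling of z needed.
w = 3                                        # observed word index
logits = np.log(beta[:, w]) + np.log(prior)
q = np.exp(logits - logits.max())
q /= q.sum()

# ELBO = E_q[log p(w, z)] - E_q[log q(z)], with both expectations
# evaluated as exact sums over the K topics.
elbo = np.sum(q * (np.log(beta[:, w]) + np.log(prior))) - np.sum(q * np.log(q))
```

Because `q` here is the exact posterior, the ELBO is tight: it equals the marginal log-likelihood `log sum_k prior[k] * beta[k, w]`, which makes the marginalization easy to sanity-check.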
We demonstrate the complementary nature of neural knowledge graph embedding, fine-grained entity type prediction, and neural language modeling.
Within the context of event modeling and understanding, we propose a new method for neural sequence modeling that takes partially-observed sequences of discrete, external knowledge into account.
no code implementations • 29 Jul 2020 • Patrick Jenkins, Rishabh Sachdeva, Gaoussou Youssouf Kebe, Padraig Higgins, Kasra Darvish, Edward Raff, Don Engel, John Winder, Francis Ferraro, Cynthia Matuszek
Grounded language acquisition -- learning how language-based interactions refer to the world around them -- is a major area of research in robotics, NLP, and HCI.
We present a locality preserving loss (LPL) that improves the alignment between vector space embeddings while separating uncorrelated representations.
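To give a concrete feel for a locality-preserving objective, the sketch below penalizes a mapped point for drifting away from the mapped images of its nearest neighbors in the source space. This is a generic LLE-flavored illustration with uniform neighbor weights, not the paper's LPL; the mapping `M` and all sizes are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(50, 4))      # source-space embeddings
M = rng.normal(size=(4, 3))       # hypothetical mapping into the target space
Z = X @ M                         # mapped embeddings

def locality_preserving_loss(X, Z, k=5):
    """Penalize mapped points that drift from the mapped images of their
    k nearest source-space neighbors (uniform-weight reconstruction)."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)   # exclude self-matches
    loss = 0.0
    for i in range(len(X)):
        nbrs = np.argsort(d[i])[:k]
        loss += np.sum((Z[i] - Z[nbrs].mean(axis=0)) ** 2)
    return loss / len(X)
```

A mapping that respects neighborhoods scores much lower than one that scrambles them, which is the signal such a loss contributes during alignment.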
1 code implementation • • Aaron Steven White, Elias Stengel-Eskin, Siddharth Vashishtha, Venkata Govindarajan, Dee Ann Reisinger, Tim Vieira, Keisuke Sakaguchi, Sheng Zhang, Francis Ferraro, Rachel Rudinger, Kyle Rawlins, Benjamin Van Durme
We present the Universal Decompositional Semantics (UDS) dataset (v1.0), which is bundled with the Decomp toolkit (v0.1).
In this paper we describe a multilingual grounded language learning system adapted from an English-only system.
We present a family of novel methods for embedding knowledge graphs into real-valued tensors.
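One common instance of embedding a knowledge graph into a real-valued tensor is a bilinear (DistMult-style) factorization, where the score of every (head, relation, tail) triple fills an order-3 tensor. The sketch below shows that construction as a generic example, not the paper's specific family of methods; the sizes and embeddings are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
n_entities, n_relations, dim = 5, 2, 8

# Hypothetical entity and relation embeddings.
E = rng.normal(scale=0.5, size=(n_entities, dim))
R = rng.normal(scale=0.5, size=(n_relations, dim))

def score(h, r, t):
    """DistMult-style trilinear score: sum_d E[h, d] * R[r, d] * E[t, d]."""
    return float(np.sum(E[h] * R[r] * E[t]))

# Scoring every triple materializes an order-3 real-valued tensor
# indexed by (head entity, relation, tail entity).
T = np.einsum('hd,rd,td->hrt', E, R, E)
```

Training such a model amounts to fitting `E` and `R` so that observed triples score higher than corrupted ones; the tensor view makes the connection to classical tensor factorization explicit.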
Judging the veracity of a sentence making one or more claims is an important and challenging problem with many dimensions.
We describe the systems developed by the UMBC team for 2018 SemEval Task 8, SecureNLP (Semantic Extraction from CybersecUrity REports using Natural Language Processing).
We study how different frame annotations complement one another when learning continuous lexical semantics.
1 code implementation • • Ting-Hao Huang, Francis Ferraro, Nasrin Mostafazadeh, Ishan Misra, Aishwarya Agrawal, Jacob Devlin, Ross Girshick, Xiaodong He, Pushmeet Kohli, Dhruv Batra, C. Lawrence Zitnick, Devi Parikh, Lucy Vanderwende, Michel Galley, Margaret Mitchell
We introduce the first dataset for sequential vision-to-language, and explore how this data may be used for the task of visual storytelling.
Integrating vision and language has long been a dream in work on artificial intelligence (AI).