Search Results for author: James Pustejovsky

Found 86 papers, 3 papers with code

A Continuation Semantics for Abstract Meaning Representation

1 code implementation DMR (COLING) 2020 Kenneth Lai, Lucia Donatelli, James Pustejovsky

Abstract Meaning Representation (AMR) is a simple, expressive semantic framework whose emphasis on predicate-argument structure is effective for many tasks.
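As an illustrative aside (not taken from the paper), AMR's predicate-argument structure can be viewed as a rooted graph of triples, using PropBank-style frames such as `want-01` and numbered roles such as `:ARG0`. A minimal self-contained sketch for the classic example "The boy wants to go":

```python
# Illustrative sketch: "The boy wants to go" as an AMR-style triple set.
# Variables (w, b, g) name graph nodes; ':instance' links a variable to
# its concept; re-using 'b' as :ARG0 of go-02 captures the control
# relation (the wanter and the goer are the same boy).
amr = [
    ("w", ":instance", "want-01"),
    ("b", ":instance", "boy"),
    ("g", ":instance", "go-02"),
    ("w", ":ARG0", "b"),   # the boy is the wanter
    ("w", ":ARG1", "g"),   # the going is what is wanted
    ("g", ":ARG0", "b"),   # the boy is also the goer
]

def arguments(triples, head):
    """Return the role -> filler map for a given predicate variable."""
    return {role: tgt for src, role, tgt in triples
            if src == head and role != ":instance"}

print(arguments(amr, "w"))  # {':ARG0': 'b', ':ARG1': 'g'}
```

The flat triple view is what makes AMR convenient for downstream tasks; the continuation semantics proposed in the paper is a further layer on top of structures like this.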


Competence-based Question Generation

no code implementations COLING 2022 Jingxuan Tu, Kyeongmin Rim, James Pustejovsky

Models of natural language understanding often rely on question answering and logical inference benchmark challenges to evaluate the performance of a system.

Natural Language Understanding, Question Answering, +2

Evaluating Retrieval for Multi-domain Scientific Publications

no code implementations LREC 2022 Nancy Ide, Keith Suderman, Jingxuan Tu, Marc Verhagen, Shanan Peters, Ian Ross, John Lawson, Andrew Borg, James Pustejovsky

This paper provides an overview of the xDD/LAPPS Grid framework and presents results of evaluating the AskMe retrieval engine using the BEIR benchmark datasets.


The CLAMS Platform at Work: Processing Audiovisual Data from the American Archive of Public Broadcasting

no code implementations LREC 2022 Marc Verhagen, Kelley Lynch, Kyeongmin Rim, James Pustejovsky

The Computational Linguistics Applications for Multimedia Services (CLAMS) platform provides access to computational content analysis tools for multimedia material.

Abstract Meaning Representation for Gesture

no code implementations LREC 2022 Richard Brutti, Lucia Donatelli, Kenneth Lai, James Pustejovsky

This paper presents Gesture AMR, an extension to Abstract Meaning Representation (AMR) that captures the meaning of gesture.

The VoxWorld Platform for Multimodal Embodied Agents

no code implementations LREC 2022 Nikhil Krishnaswamy, William Pickard, Brittany Cates, Nathaniel Blanchard, James Pustejovsky

We present a five-year retrospective on the development of the VoxWorld platform, first introduced as a multimodal platform for modeling motion language, which has evolved into a platform for rapidly building and deploying embodied agents with contextual and situational awareness, capable of interacting with humans in multiple modalities and exploring their environments.

An Abstract Specification of VoxML as an Annotation Language

no code implementations22 May 2023 Kiyong Lee, Nikhil Krishnaswamy, James Pustejovsky

VoxML is a modeling language used to map natural language expressions into real-time visualizations using commonsense semantic knowledge of objects and events.

Human-Object Interaction Detection

Causal schema induction for knowledge discovery

1 code implementation27 Mar 2023 Michael Regan, Jena D. Hwang, Keisuke Sakaguchi, James Pustejovsky

In this work, we investigate how to apply schema induction models to the task of knowledge discovery for enhanced search of English-language news texts.

Dense Paraphrasing for Textual Enrichment

no code implementations20 Oct 2022 Jingxuan Tu, Kyeongmin Rim, Eben Holderness, James Pustejovsky

Understanding inferences and answering questions from text requires more than merely recovering surface arguments, adjuncts, or strings associated with the query terms.

Representing Inferences and their Lexicalization

no code implementations14 Dec 2021 David McDonald, James Pustejovsky

We have recently begun a project to develop a more effective and efficient way to marshal inferences from background knowledge to facilitate deep natural language understanding.

Natural Language Understanding

Neural Metaphor Detection with Visibility Embeddings

no code implementations Joint Conference on Lexical and Computational Semantics 2021 Gitit Kehat, James Pustejovsky

We present new results for the problem of sequence metaphor labeling, using the recently developed Visibility Embeddings.

Designing Multimodal Datasets for NLP Challenges

no code implementations12 May 2021 James Pustejovsky, Eben Holderness, Jingxuan Tu, Parker Glenn, Kyeongmin Rim, Kelley Lynch, Richard Brutti

In this paper, we argue that the design and development of multimodal datasets for natural language processing (NLP) challenges should be enhanced in two significant respects: to more broadly represent commonsense semantic inferences; and to better reflect the dynamics of actions and events, through a substantive alignment of textual and visual information.

Neurosymbolic AI for Situated Language Understanding

no code implementations5 Dec 2020 Nikhil Krishnaswamy, James Pustejovsky

In recent years, data-intensive AI, particularly the domain of natural language processing and understanding, has seen significant progress driven by the advent of large datasets and deep neural networks that have sidelined more classic AI approaches to the field.

A Two-Level Interpretation of Modality in Human-Robot Dialogue

no code implementations COLING 2020 Lucia Donatelli, Kenneth Lai, James Pustejovsky

We analyze the use and interpretation of modal expressions in a corpus of situated human-robot dialogue and ask how to effectively represent these expressions for automatic learning.

Vocal Bursts Valence Prediction

Situated Multimodal Control of a Mobile Robot: Navigation through a Virtual Environment

no code implementations13 Jul 2020 Katherine Krajovic, Nikhil Krishnaswamy, Nathaniel J. Dimick, R. Pito Salas, James Pustejovsky

We present a new interface for controlling a navigation robot in novel environments using coordinated gesture and language.

Robot Navigation

Exploration and Discovery of the COVID-19 Literature through Semantic Visualization

no code implementations NAACL 2021 Jingxuan Tu, Marc Verhagen, Brent Cochran, James Pustejovsky

We are developing semantic visualization techniques in order to enhance exploration and enable discovery over large datasets of complex networks of relations.

Knowledge Graphs, TAG

Interchange Formats for Visualization: LIF and MMIF

no code implementations LREC 2020 Kyeongmin Rim, Kelley Lynch, Marc Verhagen, Nancy Ide, James Pustejovsky

Promoting interoperable computational linguistics (CL) and natural language processing (NLP) application platforms and interchangeable data formats has contributed to improving the discoverability and accessibility of openly available NLP software.

Data Visualization

Improving Neural Metaphor Detection with Visual Datasets

no code implementations LREC 2020 Gitit Kehat, James Pustejovsky

We present new results on Metaphor Detection by using text from visual datasets.

A Formal Analysis of Multimodal Referring Strategies Under Common Ground

no code implementations LREC 2020 Nikhil Krishnaswamy, James Pustejovsky

In this paper, we present an analysis of computationally generated mixed-modality definite referring expressions using combinations of gesture and linguistic descriptions.

Multimodal Continuation-style Architectures for Human-Robot Interaction

no code implementations18 Sep 2019 Nikhil Krishnaswamy, James Pustejovsky

We present an architecture for integrating real-time, multimodal input into a computational agent's contextual model.

One-Shot Learning

Modeling Quantification and Scope in Abstract Meaning Representations

no code implementations WS 2019 James Pustejovsky, Ken Lai, Nianwen Xue

In this paper, we propose an extension to Abstract Meaning Representations (AMRs) to encode scope information of quantifiers and negation, in a way that overcomes the semantic gaps of the schema while maintaining its cognitive simplicity.

VerbNet Representations: Subevent Semantics for Transfer Verbs

no code implementations WS 2019 Susan Windisch Brown, Julia Bonn, James Gung, Annie Zaenen, James Pustejovsky, Martha Palmer

This paper announces the release of a new version of the English lexical resource VerbNet with substantially revised semantic representations designed to facilitate computer planning and reasoning based on human language.

Computational Linguistics Applications for Multimedia Services

no code implementations WS 2019 Kyeongmin Rim, Kelley Lynch, James Pustejovsky

We present Computational Linguistics Applications for Multimedia Services (CLAMS), a platform that provides access to computational content analysis tools for archival multimedia material that appear in different media, such as text, audio, image, and video.

A Dynamic Semantics for Causal Counterfactuals

no code implementations WS 2019 Kenneth Lai, James Pustejovsky

Under the standard approach to counterfactuals, to determine the meaning of a counterfactual sentence, we consider the "closest" possible world(s) where the antecedent is true, and evaluate the consequent.
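As a hedged illustration of the standard Lewis/Stalnaker closest-world recipe the abstract refers to (not the dynamic semantics the paper itself develops), worlds can be modeled as assignments to atomic propositions and "closeness" as a weighted count of differing atoms:

```python
# Toy closest-world counterfactual evaluation (illustrative only).
# A world is a dict of atomic propositions. Similarity is a weighted
# count of differing atoms: the background condition (the match being
# dry) is weighted more heavily than outcomes, so a world that merely
# changes an outcome counts as "closer" than one that rewrites the
# background.
WEIGHTS = {"struck": 1, "lit": 1, "dry": 3}

def distance(w1, w2):
    return sum(WEIGHTS[p] for p in w1 if w1[p] != w2[p])

def counterfactual(actual, worlds, antecedent, consequent):
    """'If antecedent were true, consequent would be true': evaluate
    the consequent at the antecedent-world(s) closest to actual."""
    ant_worlds = [w for w in worlds if antecedent(w)]
    if not ant_worlds:
        return True  # vacuously true: no accessible antecedent world
    best = min(distance(actual, w) for w in ant_worlds)
    return all(consequent(w) for w in ant_worlds
               if distance(actual, w) == best)

actual = {"struck": False, "lit": False, "dry": True}
worlds = [
    actual,
    {"struck": True, "lit": True,  "dry": True},   # distance 2
    {"struck": True, "lit": False, "dry": False},  # distance 4: wet match
]
# "If the match were struck, it would light": true, because the closest
# struck-world keeps the match dry, and there the match lights.
print(counterfactual(actual, worlds,
                     lambda w: w["struck"], lambda w: w["lit"]))  # True
```

The choice of similarity order does all the work here; the dynamic-semantics treatment in the paper replaces this static ordering with an update-based account.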


Generating a Novel Dataset of Multimodal Referring Expressions

no code implementations WS 2019 Nikhil Krishnaswamy, James Pustejovsky

Referring expressions and definite descriptions of objects in space exploit information both about object characteristics and locations.

Distinguishing Clinical Sentiment: The Importance of Domain Adaptation in Psychiatric Patient Health Records

no code implementations WS 2019 Eben Holderness, Philip Cawkwell, Kirsten Bolton, James Pustejovsky, Mei-Hua Hall

In this study, we undertook, to our knowledge, the first domain adaptation of sentiment analysis to psychiatric EHRs by defining psychiatric clinical sentiment, performing an annotation project, and evaluating multiple sentence-level sentiment machine learning (ML) models.

BIG-bench Machine Learning, Decision Making, +2

Situational Grounding within Multimodal Simulations

no code implementations5 Feb 2019 James Pustejovsky, Nikhil Krishnaswamy

In this paper, we argue that simulation platforms enable a novel type of embodied spatial reasoning, one facilitated by a formal model of object and event semantics that renders the continuous quantitative search space of an open-world, real-time environment tractable.

Novel Concepts

Combining Deep Learning and Qualitative Spatial Reasoning to Learn Complex Structures from Sparse Examples with Noise

no code implementations27 Nov 2018 Nikhil Krishnaswamy, Scott Friedman, James Pustejovsky

We present a novel approach to introducing new spatial structures to an AI agent, combining deep learning over qualitative spatial relations with various heuristic search algorithms.

Every Object Tells a Story

no code implementations COLING 2018 James Pustejovsky, Nikhil Krishnaswamy

Most work within the computational event modeling community has tended to focus on the interpretation and ordering of events that are associated with verbs and event nominals in linguistic expressions.

Integrating Vision and Language Datasets to Measure Word Concreteness

no code implementations IJCNLP 2017 Gitit Kehat, James Pustejovsky

We present and take advantage of the inherent visualizability properties of words in visual corpora (the textual components of vision-language datasets) to compute concreteness scores for words.

Image Captioning, Image Retrieval, +1

Learning event representation: As sparse as possible, but not sparser

no code implementations2 Oct 2017 Tuan Do, James Pustejovsky

Selecting an optimal event representation is essential for event classification in real world contexts.

Classification, General Classification, +1

SemEval-2017 Task 12: Clinical TempEval

no code implementations SEMEVAL 2017 Steven Bethard, Guergana Savova, Martha Palmer, James Pustejovsky

Clinical TempEval 2017 aimed to answer the question: how well do systems trained on annotated timelines for one medical condition (colon cancer) perform in predicting timelines on another medical condition (brain cancer)?

Domain Adaptation, Temporal Information Extraction

Building Multimodal Simulations for Natural Language

no code implementations EACL 2017 James Pustejovsky, Nikhil Krishnaswamy

Simulation and automatic visualization of events from natural language descriptions and supplementary modalities, such as gestures, allow humans to use their native capabilities as linguistic and visual interpreters to collaborate on tasks with an artificial agent or to put semantic intuitions to the test in an environment where user and agent share a common context. In previous work (Pustejovsky and Krishnaswamy, 2014; Pustejovsky, 2013a), we introduced a method for modeling natural language expressions within a 3D simulation environment built on top of the game development platform Unity (Goldstone, 2009).

Formal Logic, Referring Expression, +3

The Development of Multimodal Lexical Resources

no code implementations WS 2016 James Pustejovsky, Tuan Do, Gitit Kehat, Nikhil Krishnaswamy

Human communication is a multimodal activity, involving not only speech and written expressions, but intonation, images, gestures, visual clues, and the interpretation of actions through perception.

Question Answering, Visual Question Answering (VQA)

LAPPS/Galaxy: Current State and Next Steps

no code implementations WS 2016 Nancy Ide, Keith Suderman, Eric Nyberg, James Pustejovsky, Marc Verhagen

The US National Science Foundation (NSF) SI2-funded LAPPS/Galaxy project has developed an open-source platform for enabling complex analyses while hiding complexities associated with the underlying infrastructure; the platform can be accessed through a web interface, deployed on any Unix system, or run from the cloud.

Generating Simulations of Motion Events from Verbal Descriptions

no code implementations SEMEVAL 2014 James Pustejovsky, Nikhil Krishnaswamy

The generated simulations act as a conceptual "debugger" for the semantics of different motion verbs: that is, by testing for consistency and informativeness in the model, simulations expose the presuppositions associated with linguistic expressions and their compositions.

Informativeness, Unity

VoxML: A Visualization Modeling Language

no code implementations LREC 2016 James Pustejovsky, Nikhil Krishnaswamy

We present the specification for a modeling language, VoxML, which encodes semantic knowledge of real-world objects represented as three-dimensional models, and of events and attributes related to and enacted over these objects.

ECAT: Event Capture Annotation Tool

no code implementations5 Oct 2016 Tuan Do, Nikhil Krishnaswamy, James Pustejovsky

This paper introduces the Event Capture Annotation Tool (ECAT), a user-friendly, open-source interface tool for annotating events and their participants in video, capable of extracting the 3D positions and orientations of objects in video captured by Microsoft's Kinect(R) hardware.

Multimodal Semantic Simulations of Linguistically Underspecified Motion Events

no code implementations3 Oct 2016 Nikhil Krishnaswamy, James Pustejovsky

In this paper, we describe a system for generating three-dimensional visual simulations of natural language motion expressions.

Annotation Methodologies for Vision and Language Dataset Creation

no code implementations10 Jul 2016 Gitit Kehat, James Pustejovsky

Annotated datasets are commonly used in the training and evaluation of tasks involving natural language and vision (image description generation, action recognition and visual question answering).

Action Recognition, Question Answering, +2

The Language Application Grid and Galaxy

no code implementations LREC 2016 Nancy Ide, Keith Suderman, James Pustejovsky, Marc Verhagen, Christopher Cieri

The NSF-SI2-funded LAPPS Grid project is a collaborative effort among Brandeis University, Vassar College, Carnegie-Mellon University (CMU), and the Linguistic Data Consortium (LDC), which has developed an open, web-based infrastructure through which resources can be easily accessed and within which tailored language services can be efficiently composed, evaluated, disseminated and consumed by researchers, developers, and students across a wide variety of disciplines.


The Language Application Grid

no code implementations LREC 2014 Nancy Ide, James Pustejovsky, Christopher Cieri, Eric Nyberg, Di Wang, Keith Suderman, Marc Verhagen, Jonathan Wright

The Language Application (LAPPS) Grid project is establishing a framework that enables language service discovery, composition, and reuse and promotes sustainability, manageability, usability, and interoperability of natural language processing (NLP) components.

Machine Translation, Question Answering, +1

Identification of Technology Terms in Patents

no code implementations LREC 2014 Peter Anick, Marc Verhagen, James Pustejovsky

Natural language analysis of patents holds promise for the development of tools designed to assist analysts in the monitoring of emerging technologies.

BIG-bench Machine Learning

Image Annotation with ISO-Space: Distinguishing Content from Structure

no code implementations LREC 2014 James Pustejovsky, Zachary Yocum

This paper explores the use of ISO-Space, an annotation specification for capturing spatial information, for encoding spatial relations mentioned in descriptions of images.

Content-Based Image Retrieval, Image Captioning, +1

Clinical TempEval

no code implementations19 Mar 2014 Steven Bethard, Leon Derczynski, James Pustejovsky, Marc Verhagen

We describe the Clinical TempEval task which is currently in preparation for the SemEval-2015 evaluation exercise.

Word Sense Inventories by Non-Experts

no code implementations LREC 2012 Anna Rumshisky, Nick Botchan, Sophie Kushkuley, James Pustejovsky

In this paper, we explore different strategies for implementing a crowdsourcing methodology for a single-step construction of an empirically-derived sense inventory and the corresponding sense-annotated corpus.

Word Sense Disambiguation

The TARSQI Toolkit

no code implementations LREC 2012 Marc Verhagen, James Pustejovsky

We present and demonstrate the updated version of the TARSQI Toolkit, a suite of temporal processing modules that extract temporal information from natural language texts.

Question Answering

The Role of Model Testing in Standards Development: The Case of ISO-Space

no code implementations LREC 2012 James Pustejovsky, Jessica Moszkowicz

In this paper, we describe the methodology being used to develop certain aspects of ISO-Space, an annotation language for encoding spatial and spatiotemporal information as expressed in natural language text.

ATLIS: Identifying Locational Information in Text Automatically

no code implementations LREC 2012 John Vogel, Marc Verhagen, James Pustejovsky

ATLIS (short for “ATLIS Tags Locations in Strings”) is a tool being developed using a maximum-entropy machine learning model for automatically identifying spatial and locational information in natural language text.

SemEval-2010 Task 13: TempEval-2

no code implementations Proceedings of the 5th International Workshop on Semantic Evaluation 2010 Marc Verhagen, Roser Saurí, Tommaso Caselli, James Pustejovsky

TempEval-2 comprises evaluation tasks for time expressions, events, and temporal relations, the latter of which was split into four subtasks, motivated by the notion that smaller subtasks would make both data preparation and temporal relation extraction easier.

Event Extraction, Temporal Relation Classification, +1
