Search Results for author: Nikhil Krishnaswamy

Found 33 papers, 4 papers with code

How Good is the Model in Model-in-the-loop Event Coreference Resolution Annotation?

1 code implementation · 6 Jun 2023 · Shafiuddin Rehan Ahmed, Abhijnan Nath, Michael Regan, Adam Pollins, Nikhil Krishnaswamy, James H. Martin

Annotating cross-document event coreference links is a time-consuming and cognitively demanding task that can compromise annotation quality and efficiency.

Tasks: Coreference Resolution, Event Coreference Resolution

Generating Simulations of Motion Events from Verbal Descriptions

no code implementations · SEMEVAL 2014 · James Pustejovsky, Nikhil Krishnaswamy

The generated simulations act as a conceptual "debugger" for the semantics of different motion verbs: that is, by testing for consistency and informativeness in the model, simulations expose the presuppositions associated with linguistic expressions and their compositions.

Tasks: Informativeness, Unity
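
The "conceptual debugger" idea lends itself to a small illustration. The sketch below is hypothetical: the verb constraints and trace format are invented for exposition and are not taken from the paper.

```python
# Hypothetical sketch of the "conceptual debugger" idea: encode the
# presuppositions of a few motion verbs and test a simulated trace against
# them. Verb constraints and the trace format are invented for illustration.
from dataclasses import dataclass

@dataclass
class Frame:
    position: tuple   # (x, y, z) of the moving object
    rotation: float   # accumulated rotation in degrees

# Minimal-model requirements: does the verb presuppose translation, rotation?
VERB_CONSTRAINTS = {
    "roll":  {"translates": True,  "rotates": True},
    "spin":  {"translates": False, "rotates": True},
    "slide": {"translates": True,  "rotates": False},
}

def is_consistent(verb: str, trace: list) -> bool:
    """A trace is consistent with a verb if it matches the verb's constraints."""
    c = VERB_CONSTRAINTS[verb]
    translates = trace[0].position != trace[-1].position
    rotates = abs(trace[-1].rotation - trace[0].rotation) > 0
    return translates == c["translates"] and rotates == c["rotates"]

# A trace that translates without rotating satisfies "slide" but exposes
# the rotation presupposition carried by "roll".
trace = [Frame((0, 0, 0), 0.0), Frame((1, 0, 0), 0.0)]
print(is_consistent("slide", trace))  # True
print(is_consistent("roll", trace))   # False
```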

VoxML: A Visualization Modeling Language

no code implementations · LREC 2016 · James Pustejovsky, Nikhil Krishnaswamy

We present the specification for a modeling language, VoxML, which encodes semantic knowledge of real-world objects represented as three-dimensional models, and of events and attributes related to and enacted over these objects.
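
To make the shape of such an entry concrete, here is a loose Python rendering of the kind of information a VoxML entry (a "voxeme") carries for an object. Field names only approximate the published specification, and all values are illustrative.

```python
# A loose Python rendering of a VoxML-style object entry ("voxeme").
# Field names approximate the published spec (TYPE, HABITAT, AFFORDANCE
# structure, EMBODIMENT); the values here are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Voxeme:
    lex: str                      # lexeme the entry is keyed to
    head: str                     # geometric head, e.g. "cylindroid"
    components: list = field(default_factory=list)
    concavity: str = "flat"       # "concave" | "flat" | "convex"
    rotational_symmetry: list = field(default_factory=list)
    habitats: dict = field(default_factory=dict)     # canonical placements
    affordances: list = field(default_factory=list)  # events the object enables
    embodiment: dict = field(default_factory=dict)   # scale, movability

mug = Voxeme(
    lex="mug",
    head="cylindroid",
    components=["body", "handle"],
    concavity="concave",
    rotational_symmetry=["Y"],
    habitats={"upright": "align(Y, E_Y)"},
    affordances=["grasp(agent, handle)", "fill(agent, this, liquid)"],
    embodiment={"scale": "<agent", "movable": True},
)
```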

ECAT: Event Capture Annotation Tool

no code implementations · 5 Oct 2016 · Tuan Do, Nikhil Krishnaswamy, James Pustejovsky

This paper introduces the Event Capture Annotation Tool (ECAT), a user-friendly, open-source interface tool for annotating events and their participants in video, capable of extracting the 3D positions and orientations of objects in video captured by Microsoft's Kinect® hardware.

Tasks: Attribute

Multimodal Semantic Simulations of Linguistically Underspecified Motion Events

no code implementations · 3 Oct 2016 · Nikhil Krishnaswamy, James Pustejovsky

In this paper, we describe a system for generating three-dimensional visual simulations of natural language motion expressions.

Combining Deep Learning and Qualitative Spatial Reasoning to Learn Complex Structures from Sparse Examples with Noise

no code implementations · 27 Nov 2018 · Nikhil Krishnaswamy, Scott Friedman, James Pustejovsky

We present a novel approach to introducing new spatial structures to an AI agent, combining deep learning over qualitative spatial relations with various heuristic search algorithms.
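
As a minimal sketch of the symbolic half of this combination, the following derives qualitative spatial relations from metric positions; the relation inventory and the distance threshold are assumptions, not the paper's.

```python
# Minimal sketch of extracting qualitative spatial relations (QSRs) from
# metric positions: the kind of symbolic features that can be combined with
# learned models and heuristic search. Relation names and the threshold
# are illustrative assumptions.
import math

def qsr(a: tuple, b: tuple, touch_dist: float = 0.1) -> set:
    """Return qualitative relations of block a with respect to block b."""
    relations = set()
    dx, dy, dz = (a[i] - b[i] for i in range(3))
    if dx < 0: relations.add("left_of")
    if dx > 0: relations.add("right_of")
    if dy > 0: relations.add("above")
    if dy < 0: relations.add("below")
    if math.dist(a, b) <= touch_dist: relations.add("touching")
    return relations

# Two stacked blocks: the upper one is 'above' and 'touching' the lower one.
print(qsr((0.0, 0.1, 0.0), (0.0, 0.0, 0.0)))
```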

Every Object Tells a Story

no code implementations · COLING 2018 · James Pustejovsky, Nikhil Krishnaswamy

Most work within the computational event modeling community has tended to focus on the interpretation and ordering of events that are associated with verbs and event nominals in linguistic expressions.

Tasks: Object

The Development of Multimodal Lexical Resources

no code implementations · WS 2016 · James Pustejovsky, Tuan Do, Gitit Kehat, Nikhil Krishnaswamy

Human communication is a multimodal activity, involving not only speech and written expressions, but intonation, images, gestures, visual clues, and the interpretation of actions through perception.

Tasks: Question Answering, Visual Question Answering (VQA)

Situational Grounding within Multimodal Simulations

no code implementations · 5 Feb 2019 · James Pustejovsky, Nikhil Krishnaswamy

In this paper, we argue that simulation platforms enable a novel type of embodied spatial reasoning, one facilitated by a formal model of object and event semantics that renders the continuous quantitative search space of an open-world, real-time environment tractable.

Tasks: Novel Concepts

Building Multimodal Simulations for Natural Language

no code implementations · EACL 2017 · James Pustejovsky, Nikhil Krishnaswamy

Simulation and automatic visualization of events from natural language descriptions and supplementary modalities, such as gestures, allow humans to use their native capabilities as linguistic and visual interpreters to collaborate on tasks with an artificial agent or to put semantic intuitions to the test in an environment where user and agent share a common context. In previous work (Pustejovsky and Krishnaswamy, 2014; Pustejovsky, 2013a), we introduced a method for modeling natural language expressions within a 3D simulation environment built on top of the game development platform Unity (Goldstone, 2009).

Tasks: Formal Logic, Referring Expression, +3

Multimodal Continuation-style Architectures for Human-Robot Interaction

no code implementations · 18 Sep 2019 · Nikhil Krishnaswamy, James Pustejovsky

We present an architecture for integrating real-time, multimodal input into a computational agent's contextual model.

Tasks: One-Shot Learning
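
A small sketch of what "continuation-style" can mean in this setting, with hypothetical handlers: each one finishes by invoking a continuation that carries the updated contextual model forward, so speech and gesture events compose in arrival order. This is an illustration of the general technique, not the paper's architecture.

```python
# Continuation-passing sketch: handlers never return a result directly;
# they hand the updated contextual model to the next continuation.
# The model fields and handler names are hypothetical.
from typing import Callable

Model = dict
Continuation = Callable[[Model], None]

def handle_speech(utterance: str, model: Model, k: Continuation) -> None:
    model = {**model, "last_utterance": utterance}
    k(model)  # pass control (and the new context) onward

def handle_gesture(target: str, model: Model, k: Continuation) -> None:
    model = {**model, "deixis": target}
    k(model)

# Compose: a pointing gesture resolves the demonstrative in the utterance.
def done(model: Model) -> None:
    print(f"grab({model['deixis']})  # from: {model['last_utterance']}")

handle_speech("grab that block", {},
              lambda m: handle_gesture("block_3", m, done))
```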

Generating a Novel Dataset of Multimodal Referring Expressions

no code implementations · WS 2019 · Nikhil Krishnaswamy, James Pustejovsky

Referring expressions and definite descriptions of objects in space exploit information both about object characteristics and locations.

A Formal Analysis of Multimodal Referring Strategies Under Common Ground

no code implementations · LREC 2020 · Nikhil Krishnaswamy, James Pustejovsky

In this paper, we present an analysis of computationally generated mixed-modality definite referring expressions using combinations of gesture and linguistic descriptions.

Situated Multimodal Control of a Mobile Robot: Navigation through a Virtual Environment

no code implementations · 13 Jul 2020 · Katherine Krajovic, Nikhil Krishnaswamy, Nathaniel J. Dimick, R. Pito Salas, James Pustejovsky

We present a new interface for controlling a navigation robot in novel environments using coordinated gesture and language.

Tasks: Robot Navigation

Neurosymbolic AI for Situated Language Understanding

no code implementations · 5 Dec 2020 · Nikhil Krishnaswamy, James Pustejovsky

In recent years, data-intensive AI, particularly the domain of natural language processing and understanding, has seen significant progress driven by the advent of large datasets and deep neural networks that have sidelined more classic AI approaches to the field.

Embodied Multimodal Agents to Bridge the Understanding Gap

no code implementations · EACL (HCINLP) 2021 · Nikhil Krishnaswamy, Nada Alalyani

In this paper we argue that embodied multimodal agents, i.e., avatars, can play an important role in moving natural language processing toward “deep understanding.” Fully-featured interactive agents model encounters between two “people,” but a language-only agent has little environmental and situational awareness.

Exploiting Embodied Simulation to Detect Novel Object Classes Through Interaction

no code implementations · 17 Apr 2022 · Nikhil Krishnaswamy, Sadaf Ghaffari

In this paper we present a novel method for a naive agent to detect novel objects it encounters in an interaction.

Tasks: Object

Where Am I and Where Should I Go? Grounding Positional and Directional Labels in a Disoriented Human Balancing Task

no code implementations · CLASP 2022 · Sheikh Mannan, Nikhil Krishnaswamy

In this paper, we present an approach toward grounding linguistic positional and directional labels directly to human motions in the course of a disoriented balancing task in a multi-axis rotational device.

The VoxWorld Platform for Multimodal Embodied Agents

no code implementations · LREC 2022 · Nikhil Krishnaswamy, William Pickard, Brittany Cates, Nathaniel Blanchard, James Pustejovsky

We present a five-year retrospective on the development of the VoxWorld platform, first introduced as a multimodal platform for modeling motion language, which has evolved into a platform for rapidly building and deploying embodied agents with contextual and situational awareness, capable of interacting with humans in multiple modalities, and exploring their environments.

Phonetic, Semantic, and Articulatory Features in Assamese-Bengali Cognate Detection

no code implementations · VarDial (COLING) 2022 · Abhijnan Nath, Rahul Ghosh, Nikhil Krishnaswamy

In this paper, we propose a method to detect if words in two similar languages, Assamese and Bengali, are cognates.
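
A toy version of such a detector might combine a phonetic similarity with a semantic one. The features, weights, and threshold below are assumptions for illustration, not the paper's method, and the example words and embeddings are made up.

```python
# Illustrative scoring of an Assamese-Bengali word pair: combine a
# string-level phonetic similarity with embedding cosine similarity.
# Weights and the decision threshold are assumptions for the sketch.
from difflib import SequenceMatcher

def phonetic_similarity(ipa_a: str, ipa_b: str) -> float:
    """Similarity over (already transliterated) phonetic transcriptions."""
    return SequenceMatcher(None, ipa_a, ipa_b).ratio()

def cosine(u: list, v: list) -> float:
    dot = sum(a * b for a, b in zip(u, v))
    norm = (sum(a * a for a in u) * sum(b * b for b in v)) ** 0.5
    return dot / norm if norm else 0.0

def cognate_score(ipa_a, ipa_b, emb_a, emb_b, w_phon=0.6, w_sem=0.4):
    return w_phon * phonetic_similarity(ipa_a, ipa_b) + w_sem * cosine(emb_a, emb_b)

# Toy example with made-up transcriptions and 3-d "embeddings".
print(cognate_score("pani", "pani", [0.9, 0.1, 0.2], [0.8, 0.2, 0.1]) > 0.5)
```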

Detecting and Accommodating Novel Types and Concepts in an Embodied Simulation Environment

no code implementations · 8 Nov 2022 · Sadaf Ghaffari, Nikhil Krishnaswamy

In this paper, we present methods for two types of metacognitive tasks in an AI system: rapidly expanding a neural classification model to accommodate a new category of object, and recognizing when a novel object type is observed instead of misclassifying the observation as a known class.

Tasks: Object, Vocal Bursts Type Prediction
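
A minimal stand-in for these two metacognitive operations, using a nearest-class-mean classifier with a distance threshold; the paper uses neural models, so this sketch only illustrates the control flow, and the threshold and features are assumptions.

```python
# Sketch of the two operations: flag an observation as novel when it is
# too far from every known class prototype, and expand the model by
# registering a new prototype for the novel class.
import math

class ExpandableClassifier:
    def __init__(self, threshold: float):
        self.prototypes = {}        # label -> mean feature vector
        self.threshold = threshold

    def classify(self, x: list) -> str:
        if not self.prototypes:
            return "novel"
        label, dist = min(
            ((lbl, math.dist(x, p)) for lbl, p in self.prototypes.items()),
            key=lambda t: t[1],
        )
        return label if dist <= self.threshold else "novel"

    def add_class(self, label: str, examples: list) -> None:
        dim = len(examples[0])
        self.prototypes[label] = [
            sum(e[i] for e in examples) / len(examples) for i in range(dim)
        ]

clf = ExpandableClassifier(threshold=0.5)
clf.add_class("cube", [[0.0, 0.0], [0.1, 0.0]])
print(clf.classify([0.05, 0.0]))  # "cube"
print(clf.classify([3.0, 3.0]))   # "novel": triggers rapid class expansion
clf.add_class("sphere", [[3.0, 3.0]])
print(clf.classify([2.9, 3.1]))   # "sphere"
```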

An Abstract Specification of VoxML as an Annotation Language

no code implementations · 22 May 2023 · Kiyong Lee, Nikhil Krishnaswamy, James Pustejovsky

VoxML is a modeling language used to map natural language expressions into real-time visualizations using commonsense semantic knowledge of objects and events.

Tasks: Human-Object Interaction Detection, Object

Grounding and Distinguishing Conceptual Vocabulary Through Similarity Learning in Embodied Simulations

no code implementations · 23 May 2023 · Sadaf Ghaffari, Nikhil Krishnaswamy

We present a novel method for using agent experiences gathered through an embodied simulation to ground contextualized word vectors to object representations.

Tasks: Attribute, Object
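
A hedged sketch of the kind of similarity-learning objective this suggests, using a standard triplet loss over cosine distances; the positive/negative pairing scheme is an assumption, not the paper's exact setup.

```python
# Pull a word's contextualized vector toward the embodied object
# representation it co-occurs with, and push it away from a mismatched one.
# A generic triplet objective; data here is random stand-in tensors.
import torch
import torch.nn.functional as F

def grounding_loss(word_vec, pos_obj, neg_obj, margin=0.2):
    """Triplet loss over cosine distances: word should sit nearer its object."""
    d_pos = 1 - F.cosine_similarity(word_vec, pos_obj, dim=-1)
    d_neg = 1 - F.cosine_similarity(word_vec, neg_obj, dim=-1)
    return F.relu(d_pos - d_neg + margin).mean()

word = torch.randn(4, 32, requires_grad=True)   # e.g. vectors for "cup"
obj_pos = torch.randn(4, 32)                    # simulated cup features
obj_neg = torch.randn(4, 32)                    # features of another object
loss = grounding_loss(word, obj_pos, obj_neg)
loss.backward()                                 # gradients flow to word vectors
```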

AxomiyaBERTa: A Phonologically-aware Transformer Model for Assamese

1 code implementation · 23 May 2023 · Abhijnan Nath, Sheikh Mannan, Nikhil Krishnaswamy

Moreover, we show that AxomiyaBERTa can leverage phonological signals for even more challenging tasks, such as a novel cross-document coreference task on a translated version of the ECB+ corpus, where we present a new state-of-the-art (SOTA) result for a low-resource language (LRL).

Tasks: Language Modelling, Masked Language Modeling, +3
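
One plausible way to leverage such signals, sketched here with invented dimensions and stand-in features (not the model's actual architecture), is to concatenate a phonological feature vector with each mention's contextual embedding before pairwise coreference scoring.

```python
# Hypothetical pairwise coreference scorer with a phonological channel:
# each mention is represented by [contextual embedding ; phonological
# features]; dimensions and the feature extractor are assumptions.
import torch
import torch.nn as nn

class PhonAwarePairScorer(nn.Module):
    def __init__(self, emb_dim=768, phon_dim=22):
        super().__init__()
        self.ff = nn.Sequential(
            nn.Linear(2 * (emb_dim + phon_dim), 128), nn.ReLU(), nn.Linear(128, 1)
        )

    def forward(self, emb_a, phon_a, emb_b, phon_b):
        pair = torch.cat([emb_a, phon_a, emb_b, phon_b], dim=-1)
        return torch.sigmoid(self.ff(pair))  # probability of coreference

scorer = PhonAwarePairScorer()
e = lambda: torch.randn(1, 768)   # stand-in contextual embeddings
p = lambda: torch.randn(1, 22)    # stand-in articulatory feature vectors
print(scorer(e(), p(), e(), p()).item())
```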

How Good is Automatic Segmentation as a Multimodal Discourse Annotation Aid?

no code implementations · 27 May 2023 · Corbin Terpstra, Ibrahim Khebour, Mariah Bradford, Brett Wisniewski, Nikhil Krishnaswamy, Nathaniel Blanchard

We (1) manually transcribe utterances in a dataset of triads collaboratively solving a problem involving dialogue and physical object manipulation, (2) annotate collaborative moves according to these gold-standard transcripts, and then (3) apply these annotations to utterances that have been automatically segmented using toolkits from Google and OpenAI's Whisper.

Tasks: Segmentation
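
Step (3) amounts to projecting gold labels onto automatic segments. A minimal sketch by maximum temporal overlap follows; the data shapes are assumptions, and the toolkits' actual output formats differ.

```python
# Project gold-standard utterance annotations onto automatically segmented
# utterances by maximum temporal overlap (an assumed alignment strategy).
def overlap(a, b):
    return max(0.0, min(a["end"], b["end"]) - max(a["start"], b["start"]))

def project_annotations(gold, auto):
    """Give each auto segment the label of the gold utterance it most overlaps."""
    out = []
    for seg in auto:
        best = max(gold, key=lambda g: overlap(g, seg), default=None)
        label = best["label"] if best and overlap(best, seg) > 0 else None
        out.append({**seg, "label": label})
    return out

gold = [{"start": 0.0, "end": 2.0, "label": "proposal"},
        {"start": 2.0, "end": 4.0, "label": "acceptance"}]
auto = [{"start": 0.2, "end": 2.4}, {"start": 2.4, "end": 3.9}]
print(project_annotations(gold, auto))
```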

Exploring Failure Cases in Multimodal Reasoning About Physical Dynamics

no code implementations · 24 Feb 2024 · Sadaf Ghaffari, Nikhil Krishnaswamy

In this paper, we present an exploration of LLMs' ability to solve problems that require physical reasoning in situated environments.

Tasks: Language Modelling, Multimodal Reasoning, +2

Common Ground Tracking in Multimodal Dialogue

no code implementations · 26 Mar 2024 · Ibrahim Khebour, Kenneth Lai, Mariah Bradford, Yifan Zhu, Richard Brutti, Christopher Tam, Jingxuan Tu, Benjamin Ibarra, Nathaniel Blanchard, Nikhil Krishnaswamy, James Pustejovsky

Within Dialogue Modeling research in AI and NLP, considerable attention has been paid to "dialogue state tracking" (DST), which is the ability to update the representations of the speaker's needs at each turn in the dialogue by taking into account the past dialogue moves and history.

Tasks: Dialogue State Tracking
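
A minimal sketch of the bookkeeping common ground tracking implies, with assumed move types: propositions begin as individual beliefs and are promoted to common ground once every participant has asserted or accepted them. The proposition format is invented for illustration.

```python
# Toy common-ground tracker: track which participants have accepted each
# proposition, promoting it once the whole group has. Move types here
# ("assert", "accept", "reject") are assumptions, not the paper's taxonomy.
class CommonGroundTracker:
    def __init__(self, participants):
        self.participants = set(participants)
        self.beliefs = {}          # proposition -> set of accepting participants
        self.common_ground = set()

    def update(self, speaker, move, proposition):
        if move in {"assert", "accept"}:
            self.beliefs.setdefault(proposition, set()).add(speaker)
            if self.beliefs[proposition] == self.participants:
                self.common_ground.add(proposition)
        elif move == "reject":
            self.beliefs.get(proposition, set()).discard(speaker)
            self.common_ground.discard(proposition)

cg = CommonGroundTracker({"P1", "P2", "P3"})
cg.update("P1", "assert", "weight(red_block) = 10g")
cg.update("P2", "accept", "weight(red_block) = 10g")
cg.update("P3", "accept", "weight(red_block) = 10g")
print(cg.common_ground)  # promoted once all three participants accept
```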
