Search Results for author: Richard Brutti

Found 4 papers, 1 paper with code

Designing Multimodal Datasets for NLP Challenges

no code implementations · 12 May 2021 · James Pustejovsky, Eben Holderness, Jingxuan Tu, Parker Glenn, Kyeongmin Rim, Kelley Lynch, Richard Brutti

In this paper, we argue that the design and development of multimodal datasets for natural language processing (NLP) challenges should be enhanced in two significant respects: to more broadly represent commonsense semantic inferences; and to better reflect the dynamics of actions and events, through a substantive alignment of textual and visual information.

Common Ground Tracking in Multimodal Dialogue

1 code implementation · 26 Mar 2024 · Ibrahim Khebour, Kenneth Lai, Mariah Bradford, Yifan Zhu, Richard Brutti, Christopher Tam, Jingxuan Tu, Benjamin Ibarra, Nathaniel Blanchard, Nikhil Krishnaswamy, James Pustejovsky

Within dialogue modeling research in AI and NLP, considerable attention has been paid to "dialogue state tracking" (DST): the ability to update representations of the speaker's needs at each turn in the dialogue by taking into account past dialogue moves and history.

Dialogue State Tracking

Abstract Meaning Representation for Gesture

no code implementations · LREC 2022 · Richard Brutti, Lucia Donatelli, Kenneth Lai, James Pustejovsky

This paper presents Gesture AMR, an extension to Abstract Meaning Representation (AMR) that captures the meaning of gesture.
