Search Results for author: Shachi H. Kumar

Found 12 papers, 1 paper with code

CueBot: Cue-Controlled Response Generation for Assistive Interaction Usages

no code implementations SLPAT (ACL) 2022 Shachi H. Kumar, Hsuan Su, Ramesh Manuvinakurike, Max Pinaroc, Sai Prasad, Saurav Sahay, Lama Nachman

Conversational assistants are ubiquitous among the general population; however, these systems have not had an impact on people with disabilities or speech and language disorders, for whom basic day-to-day communication and social interaction are a huge struggle.

Language Modelling Response Generation

Introducing v0.5 of the AI Safety Benchmark from MLCommons

1 code implementation18 Apr 2024 Bertie Vidgen, Adarsh Agrawal, Ahmed M. Ahmed, Victor Akinwande, Namir Al-Nuaimi, Najla Alfaraj, Elie Alhajjar, Lora Aroyo, Trupti Bavalatti, Borhane Blili-Hamelin, Kurt Bollacker, Rishi Bomassani, Marisa Ferrara Boston, Siméon Campos, Kal Chakra, Canyu Chen, Cody Coleman, Zacharie Delpierre Coudert, Leon Derczynski, Debojyoti Dutta, Ian Eisenberg, James Ezick, Heather Frase, Brian Fuller, Ram Gandikota, Agasthya Gangavarapu, Ananya Gangavarapu, James Gealy, Rajat Ghosh, James Goel, Usman Gohar, Sujata Goswami, Scott A. Hale, Wiebke Hutiri, Joseph Marvin Imperial, Surgan Jandial, Nick Judd, Felix Juefei-Xu, Foutse khomh, Bhavya Kailkhura, Hannah Rose Kirk, Kevin Klyman, Chris Knotz, Michael Kuchnik, Shachi H. Kumar, Chris Lengerich, Bo Li, Zeyi Liao, Eileen Peters Long, Victor Lu, Yifan Mai, Priyanka Mary Mammen, Kelvin Manyeki, Sean McGregor, Virendra Mehta, Shafee Mohammed, Emanuel Moss, Lama Nachman, Dinesh Jinenhally Naganna, Amin Nikanjam, Besmira Nushi, Luis Oala, Iftach Orr, Alicia Parrish, Cigdem Patlak, William Pietri, Forough Poursabzi-Sangdeh, Eleonora Presani, Fabrizio Puletti, Paul Röttger, Saurav Sahay, Tim Santos, Nino Scherrer, Alice Schoenauer Sebag, Patrick Schramowski, Abolfazl Shahbazi, Vin Sharma, Xudong Shen, Vamsi Sistla, Leonard Tang, Davide Testuggine, Vithursan Thangarasa, Elizabeth Anne Watkins, Rebecca Weiss, Chris Welty, Tyler Wilbers, Adina Williams, Carole-Jean Wu, Poonam Yadav, Xianjun Yang, Yi Zeng, Wenhui Zhang, Fedor Zhdanov, Jiacheng Zhu, Percy Liang, Peter Mattson, Joaquin Vanschoren

We created a new taxonomy of 13 hazard categories, of which 7 have tests in the v0.5 benchmark.

Audio-Visual Understanding of Passenger Intents for In-Cabin Conversational Agents

no code implementations WS 2020 Eda Okur, Shachi H. Kumar, Saurav Sahay, Lama Nachman

To this end, understanding passenger intents from spoken interactions and vehicle vision systems is a crucial component for developing contextual and visually grounded conversational agents for autonomous vehicles (AVs).

Dialogue Understanding Intent Detection

Low Rank Fusion based Transformers for Multimodal Sequences

no code implementations WS 2020 Saurav Sahay, Eda Okur, Shachi H. Kumar, Lama Nachman

In this work, we experiment with modeling modality-specific sensory signals to attend to our latent multimodal emotional intentions, and vice versa, via low-rank multimodal fusion and multimodal transformers.
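A minimal sketch of the low-rank fusion idea referenced in the abstract, assuming PyTorch; the modality dimensions, rank, and the LowRankFusion class are illustrative stand-ins, and the paper's multimodal transformer components are not shown:

# Minimal sketch of low-rank multimodal fusion (LMF-style), assuming PyTorch.
# Dimensions, rank, and names are illustrative, not the paper's configuration.
import torch
import torch.nn as nn

class LowRankFusion(nn.Module):
    def __init__(self, dims, rank, out_dim):
        super().__init__()
        # One rank-r factor per modality; fusion is their elementwise product,
        # approximating a full tensor outer product at far lower cost.
        self.factors = nn.ModuleList(nn.Linear(d, rank * out_dim) for d in dims)
        self.rank = rank
        self.out_dim = out_dim

    def forward(self, feats):  # feats: list of (batch, dim_m) tensors
        fused = None
        for f, layer in zip(feats, self.factors):
            proj = layer(f).view(-1, self.rank, self.out_dim)
            fused = proj if fused is None else fused * proj
        return fused.sum(dim=1)  # sum over rank factors -> (batch, out_dim)

# Example: fuse audio, visual, and text utterance embeddings.
fusion = LowRankFusion(dims=[74, 35, 300], rank=4, out_dim=64)
a, v, t = torch.randn(8, 74), torch.randn(8, 35), torch.randn(8, 300)
z = fusion([a, v, t])  # (8, 64) joint representation for emotion recognition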

Emotion Recognition

Leveraging Topics and Audio Features with Multimodal Attention for Audio Visual Scene-Aware Dialog

no code implementations20 Dec 2019 Shachi H. Kumar, Eda Okur, Saurav Sahay, Jonathan Huang, Lama Nachman

With the recent advancements in Artificial Intelligence (AI), Intelligent Virtual Assistants (IVA) such as Alexa, Google Home, etc., have become a ubiquitous part of many homes.

Audio Classification Response Generation

Modeling Intent, Dialog Policies and Response Adaptation for Goal-Oriented Interactions

no code implementations20 Dec 2019 Saurav Sahay, Shachi H. Kumar, Eda Okur, Haroon Syed, Lama Nachman

Building a machine-learning-driven spoken dialog system for goal-oriented interactions involves careful design of intents and data collection, along with development of intent recognition models and dialog policy learning algorithms.
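As a rough illustration of the intent recognition piece such a system needs, a minimal sketch assuming scikit-learn; the intents and utterances below are hypothetical stand-ins, not the paper's data or models:

# Baseline intent classifier sketch, assuming scikit-learn. TF-IDF features
# plus a linear model are a common starting point before neural intent models
# and learned dialog policies. All labels and utterances are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_utterances = ["book a table for two", "cancel my reservation",
                    "what time do you open"]
train_intents = ["book", "cancel", "ask_hours"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(train_utterances, train_intents)
print(clf.predict(["could you cancel the booking"]))  # expected: ['cancel']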

Intent Recognition

Exploring Context, Attention and Audio Features for Audio Visual Scene-Aware Dialog

no code implementations20 Dec 2019 Shachi H. Kumar, Eda Okur, Saurav Sahay, Jonathan Huang, Lama Nachman

Recent progress in visual grounding techniques and audio understanding is enabling machines to understand shared semantic concepts and listen to the various sensory events in the environment.

Audio Classification Visual Grounding

Towards Multimodal Understanding of Passenger-Vehicle Interactions in Autonomous Vehicles: Intent/Slot Recognition Utilizing Audio-Visual Data

no code implementations20 Sep 2019 Eda Okur, Shachi H. Kumar, Saurav Sahay, Lama Nachman

Understanding passenger intents from spoken interactions and the car's vision (both inside and outside the vehicle) is an important building block towards developing contextual dialog systems for natural interactions in autonomous vehicles (AVs).

Autonomous Vehicles Intent Detection +2

Natural Language Interactions in Autonomous Vehicles: Intent Detection and Slot Filling from Passenger Utterances

no code implementations23 Apr 2019 Eda Okur, Shachi H. Kumar, Saurav Sahay, Asli Arslan Esme, Lama Nachman

Understanding passenger intents and extracting relevant slots are important building blocks towards developing contextual dialogue systems for natural interactions in autonomous vehicles (AV).
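A minimal sketch of a joint intent detection and slot filling model of the kind this task calls for, assuming PyTorch; the BiLSTM encoder, label counts, and the JointNLU class are illustrative assumptions, not the paper's exact architecture:

# Joint intent-detection / slot-filling sketch, assuming PyTorch.
# Sizes and the encoder choice are illustrative only.
import torch
import torch.nn as nn

class JointNLU(nn.Module):
    def __init__(self, vocab, emb=64, hid=128, n_intents=5, n_slots=9):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.encoder = nn.LSTM(emb, hid, batch_first=True, bidirectional=True)
        self.intent_head = nn.Linear(2 * hid, n_intents)  # utterance-level intent
        self.slot_head = nn.Linear(2 * hid, n_slots)      # per-token slot (BIO) tags

    def forward(self, token_ids):                      # (batch, seq_len)
        h, _ = self.encoder(self.embed(token_ids))     # (batch, seq, 2*hid)
        intent_logits = self.intent_head(h.mean(dim=1))  # pool over tokens
        slot_logits = self.slot_head(h)                  # one tag per token
        return intent_logits, slot_logits

model = JointNLU(vocab=1000)
intent_logits, slot_logits = model(torch.randint(0, 1000, (2, 12)))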

Automatic Speech Recognition (ASR) +5
