Search Results for author: Abhinav Shukla

Found 7 papers, 1 paper with code

GRID: A Platform for General Robot Intelligence Development

1 code implementation • 2 Oct 2023 • Sai Vemprala, Shuhang Chen, Abhinav Shukla, Dinesh Narayanan, Ashish Kapoor

In addition, the modular design enables various deep ML components and existing foundation models to be easily usable in a wider variety of robot-centric problems.

Egocentric Auditory Attention Localization in Conversations

no code implementations • CVPR 2023 • Fiona Ryan, Hao Jiang, Abhinav Shukla, James M. Rehg, Vamsi Krishna Ithapu

In a noisy conversation environment such as a dinner party, people often exhibit selective auditory attention, or the ability to focus on a particular speaker while tuning out others.

Learning Speech Representations from Raw Audio by Joint Audiovisual Self-Supervision

no code implementations • 8 Jul 2020 • Abhinav Shukla, Stavros Petridis, Maja Pantic

This enriches the audio encoder with visual information, and the encoder can then be used for evaluation without the visual modality.

Tasks: Acoustic Scene Classification, Action Recognition, +3

Does Visual Self-Supervision Improve Learning of Speech Representations for Emotion Recognition?

no code implementations • 4 May 2020 • Abhinav Shukla, Stavros Petridis, Maja Pantic

Our results demonstrate the potential of visual self-supervision for audio feature learning and suggest that joint visual and audio self-supervision leads to more informative audio representations for speech and emotion recognition.

Tasks: Automatic Speech Recognition (ASR), +5

Looking Beyond a Clever Narrative: Visual Context and Attention are Primary Drivers of Affect in Video Advertisements

no code implementations • 14 Aug 2018 • Abhinav Shukla, Harish Katti, Mohan Kankanhalli, Ramanathan Subramanian

Contrary to the popular notion that ad affect hinges on the narrative and the clever use of linguistic and social cues, we find that actively attended objects and the coarse scene structure better encode affective information as compared to individual scene objects or conspicuous background elements.
