Search Results for author: Tanmay Randhavane

Found 8 papers, 2 papers with code

Data-Driven Modeling of Group Entitativity in Virtual Environments

no code implementations · 28 Sep 2018 · Aniket Bera, Tanmay Randhavane, Emily Kubin, Husam Shaik, Kurt Gray, Dinesh Manocha

We also present a novel interactive multi-agent simulation algorithm to model entitative groups and conduct a VR user study to validate the socio-emotional predictive power of our algorithm.

Graphics · Human-Computer Interaction
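
The abstract above only names the simulation algorithm. As a toy illustration of what higher entitativity means behaviorally (group members walking closer together and at more similar speeds), here is a minimal cohesion/alignment update in Python; it is not the paper's algorithm, and the blending rule and constants are assumptions.

    import numpy as np

    def step_group(positions, velocities, entitativity, dt=0.1):
        """Advance a group of agents one step; higher entitativity pulls each
        agent's velocity toward the group mean velocity and centroid
        (a toy cohesion/alignment rule, not the paper's simulation algorithm)."""
        mean_vel = velocities.mean(axis=0)
        mean_pos = positions.mean(axis=0)
        velocities = (1 - entitativity) * velocities + entitativity * (
            mean_vel + 0.5 * (mean_pos - positions)
        )
        return positions + velocities * dt, velocities

    pos = np.array([[0.0, 0.0], [1.0, 0.5], [0.5, 1.0]])
    vel = np.array([[1.0, 0.0], [0.8, 0.2], [1.2, -0.1]])
    for _ in range(3):
        pos, vel = step_group(pos, vel, entitativity=0.7)   # 0 = loose crowd, 1 = tight group
    print(pos)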

Pedestrian Dominance Modeling for Socially-Aware Robot Navigation

no code implementations · 15 Oct 2018 · Tanmay Randhavane, Aniket Bera, Emily Kubin, Austin Wang, Kurt Gray, Dinesh Manocha

We present a Pedestrian Dominance Model (PDM) to identify the dominance characteristics of pedestrians for robot navigation.

Robotics
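
No code is linked for this entry, and the abstract only states that PDM maps pedestrian behavior to dominance. Below is a minimal sketch of that idea in Python, assuming a hand-picked feature set (walking speed, personal-space radius, path straightness) and a plain linear regressor; the features, labels, and model are illustrative assumptions, not the authors' PDM.

    import numpy as np
    from sklearn.linear_model import LinearRegression

    # Hypothetical per-pedestrian motion features:
    # [walking speed (m/s), personal-space radius (m), path straightness (0-1)]
    X_train = np.array([
        [1.6, 0.4, 0.9],
        [0.9, 1.1, 0.6],
        [1.3, 0.7, 0.8],
    ])
    # Dominance labels in [0, 1], e.g. averaged ratings from a perception study.
    y_train = np.array([0.8, 0.2, 0.6])

    # Fit a simple linear mapping from motion features to a dominance score.
    pdm = LinearRegression().fit(X_train, y_train)

    def dominance(features):
        """Return a dominance estimate for one pedestrian's feature vector."""
        return float(pdm.predict(np.asarray(features).reshape(1, -1))[0])

    # A navigation planner could then yield more space to high-dominance pedestrians.
    print(dominance([1.5, 0.5, 0.85]))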

Identifying Emotions from Walking using Affective and Deep Features

no code implementations · 14 Jun 2019 · Tanmay Randhavane, Uttaran Bhattacharya, Kyra Kapsaskis, Kurt Gray, Aniket Bera, Dinesh Manocha

We also present the EWalk (Emotion Walk) dataset, which consists of videos of individuals walking, along with their gaits and labeled emotions.

Emotion Recognition

RoadTrack: Realtime Tracking of Road Agents in Dense and Heterogeneous Environments

1 code implementation · 25 Jun 2019 · Rohan Chandra, Uttaran Bhattacharya, Tanmay Randhavane, Aniket Bera, Dinesh Manocha

We present a realtime tracking algorithm, RoadTrack, to track heterogeneous road-agents in dense traffic videos.

Robotics
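
A reference implementation of RoadTrack is linked from this entry. For context only, the sketch below shows a generic tracking-by-detection loop with greedy IoU association; it illustrates the tracking task itself, not RoadTrack's algorithm for dense, heterogeneous traffic.

    def iou(a, b):
        """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
        x1, y1 = max(a[0], b[0]), max(a[1], b[1])
        x2, y2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0, x2 - x1) * max(0, y2 - y1)
        area_a = (a[2] - a[0]) * (a[3] - a[1])
        area_b = (b[2] - b[0]) * (b[3] - b[1])
        return inter / (area_a + area_b - inter + 1e-9)

    def update_tracks(tracks, detections, next_id, thresh=0.3):
        """Greedily match detections to existing tracks by IoU; unmatched
        detections start new tracks. A generic baseline, not RoadTrack."""
        unmatched = list(detections)
        for tid, box in list(tracks.items()):
            if not unmatched:
                break
            best = max(unmatched, key=lambda d: iou(box, d))
            if iou(box, best) >= thresh:
                tracks[tid] = best
                unmatched.remove(best)
        for det in unmatched:
            tracks[next_id] = det
            next_id += 1
        return tracks, next_id

    tracks, next_id = {}, 0
    for frame_dets in [[(10, 10, 50, 80)], [(12, 11, 52, 82), (200, 40, 260, 120)]]:
        tracks, next_id = update_tracks(tracks, frame_dets, next_id)
        print(tracks)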

FVA: Modeling Perceived Friendliness of Virtual Agents Using Movement Characteristics

no code implementations · 30 Jun 2019 · Tanmay Randhavane, Aniket Bera, Kyra Kapsaskis, Kurt Gray, Dinesh Manocha

We also investigate user perception in an AR setting and observe that an FVA yields a statistically significant improvement in perceived friendliness and social presence compared to an agent without friendliness modeling.

STEP: Spatial Temporal Graph Convolutional Networks for Emotion Perception from Gaits

1 code implementation · 28 Oct 2019 · Uttaran Bhattacharya, Trisha Mittal, Rohan Chandra, Tanmay Randhavane, Aniket Bera, Dinesh Manocha

We use hundreds of annotated real-world gait videos and augment them with thousands of annotated synthetic gaits generated using a novel generative network called STEP-Gen, built on an ST-GCN based Conditional Variational Autoencoder (CVAE).

General Classification
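
The STEP code is available via this entry. As a rough, self-contained sketch of the generative side of the idea, the block below implements a conditional VAE over flattened gait sequences conditioned on a one-hot emotion label; plain MLPs stand in for the paper's ST-GCN backbone, and the joint count, frame count, and layer sizes are assumptions, not values from the paper.

    import torch
    import torch.nn as nn

    class GaitCVAE(nn.Module):
        """Toy conditional VAE over flattened gait sequences
        (MLP encoder/decoder standing in for ST-GCN layers)."""
        def __init__(self, gait_dim=3 * 16 * 75, emotion_dim=4, latent_dim=32):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Linear(gait_dim + emotion_dim, 256), nn.ReLU(),
                nn.Linear(256, 2 * latent_dim),            # mean and log-variance
            )
            self.decoder = nn.Sequential(
                nn.Linear(latent_dim + emotion_dim, 256), nn.ReLU(),
                nn.Linear(256, gait_dim),
            )
            self.latent_dim = latent_dim

        def forward(self, gait, emotion):
            h = self.encoder(torch.cat([gait, emotion], dim=-1))
            mu, logvar = h.chunk(2, dim=-1)
            z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterization
            recon = self.decoder(torch.cat([z, emotion], dim=-1))
            return recon, mu, logvar

        def sample(self, emotion):
            """Generate a synthetic gait for a given one-hot emotion label."""
            z = torch.randn(emotion.shape[0], self.latent_dim)
            return self.decoder(torch.cat([z, emotion], dim=-1))

    model = GaitCVAE()
    emotion = torch.eye(4)[[0]]                 # one-hot label for one emotion class
    synthetic_gait = model.sample(emotion)      # flattened (3 coords x 16 joints x 75 frames)
    print(synthetic_gait.shape)                 # torch.Size([1, 3600])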

The Liar's Walk: Detecting Deception with Gait and Gesture

no code implementations · 14 Dec 2019 · Tanmay Randhavane, Uttaran Bhattacharya, Kyra Kapsaskis, Kurt Gray, Aniket Bera, Dinesh Manocha

We present a data-driven deep neural algorithm for detecting deceptive walking behavior using nonverbal cues like gaits and gestures.

Action Classification
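
No code is linked for this entry. One plausible shape for such a classifier is a recurrent network over per-frame joint positions with a binary deceptive/natural output; the sketch below assumes 16 joints, 3D coordinates, 75-frame clips, and an LSTM, none of which are taken from the paper.

    import torch
    import torch.nn as nn

    class GaitDeceptionClassifier(nn.Module):
        """LSTM over per-frame pose vectors with a binary deceptive-vs-natural head
        (a generic sequence classifier, not the paper's architecture)."""
        def __init__(self, joints=16, coords=3, hidden=128):
            super().__init__()
            self.lstm = nn.LSTM(joints * coords, hidden, batch_first=True)
            self.head = nn.Linear(hidden, 2)

        def forward(self, pose_seq):                 # (batch, frames, joints * coords)
            _, (h_n, _) = self.lstm(pose_seq)
            return self.head(h_n[-1])                # logits: [natural, deceptive]

    model = GaitDeceptionClassifier()
    walk = torch.randn(1, 75, 16 * 3)                # one 75-frame walking clip
    print(model(walk).softmax(dim=-1))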

Exploring the Sim2Real Gap Using Digital Twins

no code implementations · ICCV 2023 · Sruthi Sudhakar, Jon Hanzelka, Josh Bobillot, Tanmay Randhavane, Neel Joshi, Vibhav Vineet

An emerging alternative is to use synthetic data, but if the synthetic data is not sufficiently similar to the real data, performance typically falls below that of training on real data.

Instance Segmentation · Object Detection +2
