We present a data-driven deep neural network algorithm for detecting deceptive walking behavior using nonverbal cues such as gaits and gestures.
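As a rough illustration of the classification stage only (not the full pipeline), the PyTorch sketch below maps concatenated gait and gesture feature vectors to a deceptive/natural label; the feature dimensions, hidden width, and two-class output are illustrative assumptions.

```python
# Minimal sketch of a deception classifier over gait and gesture cues.
# Feature sizes and the two-class output are assumptions for illustration.
import torch
import torch.nn as nn


class DeceptionClassifier(nn.Module):
    def __init__(self, gait_dim=29, gesture_dim=5, hidden=64):
        super().__init__()
        # Two-layer MLP over concatenated gait features (posture/movement cues)
        # and gesture features (e.g., counts of hands-in-pockets, looking around).
        self.net = nn.Sequential(
            nn.Linear(gait_dim + gesture_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2),          # deceptive vs. natural walking
        )

    def forward(self, gait_feats, gesture_feats):
        return self.net(torch.cat([gait_feats, gesture_feats], dim=-1))
```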
We use hundreds of annotated real-world gait videos and augment them with thousands of annotated synthetic gaits generated using a novel generative network called STEP-Gen, built on an ST-GCN-based Conditional Variational Autoencoder (CVAE).
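The following PyTorch sketch shows the general shape of an ST-GCN-based conditional VAE for gait generation; the joint count, frame count, latent size, and layer widths are illustrative assumptions rather than the STEP-Gen configuration.

```python
# Minimal sketch of an ST-GCN-based CVAE for synthetic gait generation.
# The skeleton size (16 joints), 75 frames, and 4 emotion classes are assumptions.
import torch
import torch.nn as nn


class STGCNBlock(nn.Module):
    """Spatial graph convolution over joints followed by a temporal convolution."""

    def __init__(self, in_ch, out_ch, adj):
        super().__init__()
        self.register_buffer("adj", adj)                 # (J, J) normalized adjacency
        self.spatial = nn.Conv2d(in_ch, out_ch, kernel_size=1)
        self.temporal = nn.Conv2d(out_ch, out_ch, kernel_size=(9, 1), padding=(4, 0))
        self.relu = nn.ReLU()

    def forward(self, x):                                # x: (N, C, T, J)
        x = self.spatial(x)
        x = torch.einsum("nctj,jk->nctk", x, self.adj)   # aggregate over the skeleton graph
        return self.relu(self.temporal(x))


class GaitCVAE(nn.Module):
    """Encode a gait sequence to a latent code; decode it conditioned on an emotion label."""

    def __init__(self, adj, joints=16, channels=3, frames=75, classes=4, z_dim=32):
        super().__init__()
        self.encoder = STGCNBlock(channels, 32, adj)
        feat = 32 * frames * joints
        self.to_mu = nn.Linear(feat, z_dim)
        self.to_logvar = nn.Linear(feat, z_dim)
        self.decoder = nn.Sequential(
            nn.Linear(z_dim + classes, feat), nn.ReLU(),
            nn.Linear(feat, channels * frames * joints),
        )
        self.shape = (channels, frames, joints)

    def forward(self, x, label_onehot):
        h = self.encoder(x).flatten(1)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        out = self.decoder(torch.cat([z, label_onehot], dim=1))
        return out.view(-1, *self.shape), mu, logvar
```

At generation time, sampling z from a standard normal and concatenating the desired emotion label yields a synthetic labeled gait, which is how the augmented training set is produced in this sketch.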
We also investigate user perception in an AR setting and observe that an FVA produces a statistically significant improvement in the user's perception of friendliness and social presence compared to an agent without friendliness modeling.
We present a real-time tracking algorithm, RoadTrack, to track heterogeneous road-agents in dense traffic videos.
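As an illustration of the tracking-by-detection idea only (not the RoadTrack algorithm itself), the sketch below greedily associates existing tracks with new detections by intersection-over-union; the data layout and the 0.3 threshold are assumptions.

```python
# Minimal sketch of greedy IoU-based track-to-detection association.
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0


def associate(tracks, detections, thresh=0.3):
    """Greedily match existing tracks {id: box} to a list of detected boxes."""
    matches, used = {}, set()
    for tid, box in tracks.items():
        best = max(
            (d for d in range(len(detections)) if d not in used),
            key=lambda d: iou(box, detections[d]),
            default=None,
        )
        if best is not None and iou(box, detections[best]) >= thresh:
            matches[tid] = best
            used.add(best)
    return matches
```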
We also present an EWalk (Emotion Walk) dataset that consists of videos of walking individuals, along with their extracted gaits and labeled emotions.
We present a Pedestrian Dominance Model (PDM) to identify the dominance characteristics of pedestrians for robot navigation.
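A toy sketch of how a dominance score could feed into navigation is shown below; the trajectory features, linear weights, and clearance values are illustrative assumptions, not the learned PDM.

```python
# Minimal sketch: use a pedestrian dominance score to modulate robot clearance.
def dominance_score(speed, path_straightness, personal_space_radius):
    """Combine trajectory-derived cues into a dominance value in [0, 1]."""
    raw = 0.5 * speed + 0.3 * path_straightness - 0.2 * personal_space_radius
    return min(1.0, max(0.0, raw))


def clearance_for(pedestrian_features, base_clearance=0.5, extra=0.5):
    """Give more dominant pedestrians a wider berth during navigation (meters)."""
    return base_clearance + extra * dominance_score(*pedestrian_features)
```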
We also present a novel interactive multi-agent simulation algorithm to model entitative groups and conduct a VR user study to validate the socio-emotional predictive power of our algorithm.
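The sketch below illustrates one way a group-level entitativity parameter could shape per-agent simulation parameters (more uniform speeds, tighter spacing); the specific ranges are illustrative assumptions rather than the calibrated values in our algorithm.

```python
# Minimal sketch: map a group entitativity value in [0, 1] to per-agent parameters.
import random


def group_parameters(n_agents, entitativity):
    """Higher entitativity -> more uniform speeds and smaller interpersonal distance."""
    base_speed = 1.4                                # m/s, typical walking speed
    speed_spread = 0.4 * (1.0 - entitativity)       # cohesive groups vary less
    spacing = 1.2 - 0.6 * entitativity              # meters between group members
    return [
        {"preferred_speed": base_speed + random.uniform(-speed_spread, speed_spread),
         "interpersonal_distance": spacing}
        for _ in range(n_agents)
    ]
```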