Search Results for author: Ramanathan Subramanian

Found 20 papers, 5 papers with code

MAGIC-TBR: Multiview Attention Fusion for Transformer-based Bodily Behavior Recognition in Group Settings

1 code implementation • 19 Sep 2023 • Surbhi Madan, Rishabh Jain, Gulshan Sharma, Ramanathan Subramanian, Abhinav Dhall

Bodily behavioral language is an important social cue, and its automated analysis helps artificial intelligence systems better understand social behavior.

Pose Estimation

Efficient Labelling of Affective Video Datasets via Few-Shot & Multi-Task Contrastive Learning

1 code implementation • 4 Aug 2023 • Ravikiran Parameshwara, Ibrahim Radwan, Akshay Asthana, Iman Abbasnejad, Ramanathan Subramanian, Roland Goecke

Whilst deep learning techniques have achieved excellent emotion prediction, they still require large amounts of labelled training data, which are (a) onerous and tedious to compile, and (b) prone to errors and biases.

Contrastive Learning • Multi-Task Learning

Explainable Depression Detection via Head Motion Patterns

no code implementations • 23 Jul 2023 • Monika Gahalawat, Raul Fernandez Rojas, Tanaya Guha, Ramanathan Subramanian, Roland Goecke

While depression has been studied via multimodal non-verbal behavioural cues, head motion behaviour has not received much attention as a biomarker.

Binary Classification • Depression Detection

A Weakly Supervised Approach to Emotion-change Prediction and Improved Mood Inference

no code implementations • 12 Jun 2023 • Soujanya Narayana, Ibrahim Radwan, Ravikiran Parameshwara, Iman Abbasnejad, Akshay Asthana, Ramanathan Subramanian, Roland Goecke

Whilst a majority of affective computing research focuses on inferring emotions, examining mood or understanding the mood-emotion interplay has received significantly less attention.

Metric Learning

Explainable Human-centered Traits from Head Motion and Facial Expression Dynamics

no code implementations • 20 Feb 2023 • Surbhi Madan, Monika Gahalawat, Tanaya Guha, Roland Goecke, Ramanathan Subramanian

We explore the efficacy of multimodal behavioral cues for explainable prediction of personality and interview-specific traits.

Automated Parkinson's Disease Detection and Affective Analysis from Emotional EEG Signals

1 code implementation • 21 Feb 2022 • Ravikiran Parameshwara, Soujanya Narayana, Murugappan Murugappan, Ramanathan Subramanian, Ibrahim Radwan, Roland Goecke

Employing traditional machine learning and deep learning methods, we explore (a) dimensional and categorical emotion recognition, and (b) Parkinson's disease (PD) vs. healthy controls (HC) classification from emotional EEG signals.

EEG • Electroencephalogram (EEG) • +2
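The PD-vs-HC task above is, at its core, binary classification over EEG-derived features. The sketch below is purely illustrative: it uses synthetic "band-power" features and a simple nearest-centroid classifier, not the paper's data, features, or models.

```python
import random

random.seed(0)

# Hypothetical synthetic "band-power" features: PD subjects are drawn with a
# shifted mean relative to healthy controls (HC). Illustrative toy data only.
def make_subject(label):
    shift = 0.8 if label == "PD" else 0.0
    return [random.gauss(shift, 1.0) for _ in range(4)], label

train = [make_subject("PD") for _ in range(40)] + [make_subject("HC") for _ in range(40)]
test = [make_subject("PD") for _ in range(10)] + [make_subject("HC") for _ in range(10)]

def centroid(rows):
    n = len(rows)
    return [sum(r[i] for r in rows) / n for i in range(len(rows[0]))]

# Class centroids estimated from the training split.
c_pd = centroid([x for x, y in train if y == "PD"])
c_hc = centroid([x for x, y in train if y == "HC"])

def dist(a, b):
    return sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5

def predict(x):
    # Assign each subject to the nearer class centroid.
    return "PD" if dist(x, c_pd) < dist(x, c_hc) else "HC"

accuracy = sum(predict(x) == y for x, y in test) / len(test)
print(f"nearest-centroid accuracy: {accuracy:.2f}")
```

In practice, EEG pipelines would extract features per channel and frequency band before any such classifier; this sketch only shows the classification step.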

Outlier-based Autism Detection using Longitudinal Structural MRI

no code implementations • 21 Feb 2022 • Devika K, Venkata Ramana Murthy Oruganti, Dwarikanath Mahapatra, Ramanathan Subramanian

Among other findings, the metrics employed for model training and for reconstruction-loss computation impact detection performance, and the coronal modality is found to best encode structural information for ASD detection.

Generative Adversarial Network • Outlier Detection

Head Matters: Explainable Human-centered Trait Prediction from Head Motion Dynamics

no code implementations • 15 Dec 2021 • Surbhi Madan, Monika Gahalawat, Tanaya Guha, Ramanathan Subramanian

We demonstrate the utility of elementary head-motion units termed kinemes for behavioral analytics to predict personality and interview traits.
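Kinemes, as described above, are elementary head-motion units. One loose analogy is clustering short head-pose segments and treating the cluster centres as a motion vocabulary; the sketch below does exactly that on made-up 2-D data and does not reproduce the paper's actual kineme-learning procedure.

```python
import random

random.seed(1)

# Toy 2-D "head-pose" segments (e.g., mean pitch/yaw over a short window),
# drawn around two made-up motion patterns. Illustrative data only.
def segment(centre):
    return [centre[0] + random.gauss(0, 0.1), centre[1] + random.gauss(0, 0.1)]

points = [segment((0.0, 0.0)) for _ in range(20)] + [segment((1.0, 1.0)) for _ in range(20)]

def kmeans(points, k, iters=20):
    # Deterministic init: pick k points spread across the dataset.
    centres = [list(points[round(i * (len(points) - 1) / (k - 1))]) for i in range(k)]
    for _ in range(iters):
        # Assign each segment to its nearest centre.
        groups = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: sum((p[d] - centres[c][d]) ** 2 for d in range(2)))
            groups[j].append(p)
        # Recompute centres as group means (keep old centre if group is empty).
        centres = [
            [sum(q[d] for q in g) / len(g) for d in range(2)] if g else centres[j]
            for j, g in enumerate(groups)
        ]
    return centres

kinemes = kmeans(points, 2)
print("learned motion-unit centres:", [[round(v, 2) for v in c] for c in kinemes])
```

A real system would cluster full pose time-series (pitch, yaw, roll trajectories), but the assign-then-update loop is the same.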

FakeBuster: A DeepFakes Detection Tool for Video Conferencing Scenarios

no code implementations • 9 Jan 2021 • Vineet Mehta, Parul Gupta, Ramanathan Subramanian, Abhinav Dhall

This paper proposes FakeBuster, a new DeepFake detector for identifying impostors during video conferencing and manipulated faces on social media.

Face Swapping

Characterizing Hirability via Personality and Behavior

no code implementations • 22 Jun 2020 • Harshit Malik, Hersh Dhillon, Roland Goecke, Ramanathan Subramanian

Modeling hirability as a discrete/continuous variable with the big-five personality traits as predictors, we utilize (a) apparent personality annotations, and (b) personality estimates obtained via audio, visual and textual cues for hirability prediction (HP).
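The setup above, big-five trait scores as predictors of a hirability score, is essentially a regression problem. Below is a minimal sketch with made-up weights and synthetic trait scores, fit by batch gradient descent; it illustrates the predictor-to-target framing only, not the paper's models or data.

```python
import random

random.seed(2)

# Hypothetical "true" contribution of each big-five trait to hirability,
# plus synthetic trait scores in [0, 1]. All values are invented.
true_w = [0.5, -0.2, 0.3, 0.4, 0.1]
X = [[random.random() for _ in range(5)] for _ in range(200)]
y = [sum(w * x for w, x in zip(true_w, row)) + random.gauss(0, 0.01) for row in X]

# Batch gradient descent on squared error, no intercept (data has none).
w = [0.0] * 5
lr = 0.5
for _ in range(2000):
    grad = [0.0] * 5
    for row, target in zip(X, y):
        err = sum(wi * xi for wi, xi in zip(w, row)) - target
        for j in range(5):
            grad[j] += err * row[j]
    w = [wi - lr * g / len(X) for wi, g in zip(w, grad)]

print("recovered weights:", [round(wi, 2) for wi in w])
```

With near-noiseless synthetic data the learned weights recover the generating ones closely; real hirability labels would of course be far noisier.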

Looking Beyond a Clever Narrative: Visual Context and Attention are Primary Drivers of Affect in Video Advertisements

no code implementations • 14 Aug 2018 • Abhinav Shukla, Harish Katti, Mohan Kankanhalli, Ramanathan Subramanian

Contrary to the popular notion that ad affect hinges on the narrative and the clever use of linguistic and social cues, we find that actively attended objects and the coarse scene structure better encode affective information as compared to individual scene objects or conspicuous background elements.

AVEID: Automatic Video System for Measuring Engagement In Dementia

no code implementations • 21 Dec 2017 • Viral Parekh, Pin Sym Foong, Shendong Zhao, Ramanathan Subramanian

Engagement in dementia is typically measured using behavior observational scales (BOS) that are tedious to annotate, involve intensive manual labor, and are therefore not easily scalable.

An EEG-based Image Annotation System

no code implementations • 7 Nov 2017 • Viral Parekh, Ramanathan Subramanian, Dipanjan Roy, C. V. Jawahar

The success of deep learning in computer vision has greatly increased the need for annotated image datasets.

EEG • Electroencephalogram (EEG) • +1

Evaluating Crowdsourcing Participants in the Absence of Ground-Truth

no code implementations • 30 May 2016 • Ramanathan Subramanian, Romer Rosales, Glenn Fung, Jennifer Dy

Given a supervised/semi-supervised learning scenario where multiple annotators are available, we consider the problem of identification of adversarial or unreliable annotators.
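Without ground truth, a common baseline for the problem stated above is to score each annotator by agreement with the per-item consensus. The toy sketch below uses invented labels and a majority-vote consensus; it is not the paper's actual identification method.

```python
from collections import Counter

# Hypothetical labels from five annotators on six items; annotator "a4"
# is adversarial (it inverts every label). Purely illustrative data.
labels = {
    "a1": [1, 0, 1, 1, 0, 1],
    "a2": [1, 0, 1, 1, 0, 1],
    "a3": [1, 0, 0, 1, 0, 1],
    "a4": [0, 1, 0, 0, 1, 0],
    "a5": [1, 0, 1, 1, 0, 1],
}
n_items = 6  # every annotator labels all six items

def majority(i):
    # Most frequent label for item i across annotators.
    votes = Counter(v[i] for v in labels.values())
    return votes.most_common(1)[0][0]

consensus = [majority(i) for i in range(n_items)]

# Reliability proxy: fraction of an annotator's labels agreeing with consensus.
reliability = {
    a: sum(v[i] == consensus[i] for i in range(n_items)) / n_items
    for a, v in labels.items()
}
print(reliability)
```

Low-agreement annotators surface immediately; the consistently inverted annotator scores near zero, flagging it as adversarial rather than merely noisy.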

Uncovering Interactions and Interactors: Joint Estimation of Head, Body Orientation and F-Formations From Surveillance Videos

no code implementations • ICCV 2015 • Elisa Ricci, Jagannadan Varadarajan, Ramanathan Subramanian, Samuel Rota Bulo, Narendra Ahuja, Oswald Lanz

We present a novel approach for jointly estimating targets' head and body orientations and conversational groups called F-formations from a distant social scene (e.g., a cocktail party captured by surveillance cameras).


SALSA: A Novel Dataset for Multimodal Group Behavior Analysis

no code implementations • 23 Jun 2015 • Xavier Alameda-Pineda, Jacopo Staiano, Ramanathan Subramanian, Ligia Batrinca, Elisa Ricci, Bruno Lepri, Oswald Lanz, Nicu Sebe

Studying free-standing conversational groups (FCGs) in unstructured social settings (e.g., a cocktail party) is gratifying due to the wealth of information available at the group (mining social networks) and individual (recognizing native behavioral and personality traits) levels.
