no code implementations • EMNLP 2020 • Victor Martinez, Krishna Somandepalli, Yalda Tehranian-Uhls, Shrikanth Narayanan
Exposure to violent, sexual, or substance-abuse content in media increases the willingness of children and adolescents to imitate similar behaviors.
1 code implementation • 29 Apr 2024 • Hong Nguyen, Hoang Nguyen, Melinda Chang, Hieu Pham, Shrikanth Narayanan, Michael Pazzani
Understanding the severity of conditions shown in images in medical diagnosis is crucial, serving as a key guide for clinical assessment, treatment, as well as evaluating longitudinal progression.
no code implementations • 25 Mar 2024 • Georgios Chochlakis, Alexandros Potamianos, Kristina Lerman, Shrikanth Narayanan
The promise of ICL is that the LLM can adapt to perform the present task at a competitive or state-of-the-art level at a fraction of the cost.
no code implementations • 15 Feb 2024 • Aditya Kommineni, Kleanthis Avramidis, Richard Leahy, Shrikanth Narayanan
We also propose a novel knowledge-guided pre-training objective that accounts for the idiosyncrasies of the EEG signal.
no code implementations • 15 Feb 2024 • Kleanthis Avramidis, Melinda Y. Chang, Rahul Sharma, Mark S. Borchert, Shrikanth Narayanan
A wide range of neurological and cognitive disorders exhibit distinct behavioral markers aside from their clinical manifestations.
no code implementations • 14 Feb 2024 • Tiantian Feng, Daniel Yang, Digbalay Bose, Shrikanth Narayanan
Specifically, we propose a simple but effective multi-modal learning framework, GTI-MM, to enhance data efficiency and model robustness against missing visual modality by imputing the missing data with generative transformers.
no code implementations • 24 Jan 2024 • Benjamin A. T. Graham, Lauren Brown, Georgios Chochlakis, Morteza Dehghani, Raquel Delerme, Brittany Friedman, Ellie Graeden, Preni Golazizian, Rajat Hebbar, Parsa Hejabi, Aditya Kommineni, Mayagüez Salinas, Michael Sierra-Arévalo, Jackson Trager, Nicholas Weller, Shrikanth Narayanan
Interactions between government officials and civilians affect public wellbeing and the state legitimacy that is necessary for the functioning of a democratic society.
no code implementations • 5 Dec 2023 • Hong Nguyen, Cuong V. Nguyen, Shrikanth Narayanan, Benjamin Y. Xu, Michael Pazzani
Primary open-angle glaucoma (POAG) is a chronic and progressive optic nerve condition that results in an acquired loss of optic nerve fibers and potential blindness.
no code implementations • 6 Nov 2023 • Daniel Yang, Aditya Kommineni, Mohammad Alshehri, Nilamadhab Mohanty, Vedant Modi, Jonathan Gratch, Shrikanth Narayanan
In this work, we propose a formal definition of textual context to motivate a prompting strategy to enhance such contextual information.
no code implementations • 3 Oct 2023 • Anfeng Xu, Kevin Huang, Tiantian Feng, Helen Tager-Flusberg, Shrikanth Narayanan
Building on the foundation of an audio-only child-adult speaker classification pipeline, we propose incorporating visual cues through active speaker detection and visual processing models.
1 code implementation • 26 Sep 2023 • Kleanthis Avramidis, Dominika Kunc, Bartosz Perz, Kranti Adsul, Tiantian Feng, Przemysław Kazienko, Stanisław Saganowski, Shrikanth Narayanan
We train this model in a self-supervised manner with 275,000 10-second ECG recordings collected in the wild and evaluate it on a range of downstream tasks.
no code implementations • 18 Sep 2023 • Yoonsoo Nam, Adam Lehavi, Daniel Yang, Digbalay Bose, Swabha Swayamdipta, Shrikanth Narayanan
Video summarization remains a huge challenge in computer vision due to the size of the input videos to be summarized.
no code implementations • 27 Aug 2023 • Digbalay Bose, Rajat Hebbar, Tiantian Feng, Krishna Somandepalli, Anfeng Xu, Shrikanth Narayanan
Advertisement videos (ads) play an integral part in the domain of Internet e-commerce as they amplify the reach of particular products to a broad audience or can serve as a medium to raise awareness about specific issues through concise narrative structures.
no code implementations • 31 Jul 2023 • Rimita Lahiri, Tiantian Feng, Rajat Hebbar, Catherine Lord, So Hyun Kim, Shrikanth Narayanan
We address the problem of detecting who spoke when in child-inclusive spoken interactions, i.e., automatic child-adult speaker classification.
no code implementations • 10 Jul 2023 • Tiantian Feng, Brandon M Booth, Shrikanth Narayanan
In this work, we propose a novel wearable time-series mining framework, Hawkes point process On Time series clusters for ROutine Discovery (HOT-ROD), for uncovering behavioral routines from completely unlabeled wearable recordings.
no code implementations • 15 Jun 2023 • Tiantian Feng, Digbalay Bose, Tuo Zhang, Rajat Hebbar, Anil Ramakrishna, Rahul Gupta, Mi Zhang, Salman Avestimehr, Shrikanth Narayanan
In order to facilitate the research in multimodal FL, we introduce FedMultimodal, the first FL benchmark for multimodal learning covering five representative multimodal applications from ten commonly used datasets with a total of eight unique modalities.
no code implementations • 23 May 2023 • Anfeng Xu, Rajat Hebbar, Rimita Lahiri, Tiantian Feng, Lindsay Butler, Lue Shen, Helen Tager-Flusberg, Shrikanth Narayanan
This paper proposes applications of speech processing technologies in support of automated assessment of children's spoken language development, by classifying between child and adult speech and between speech and nonverbal vocalization in NLS, with respective F1 macro scores of 82.6% and 67.8%. These results underscore the potential for accurate and scalable tools for ASD research and clinical use.
no code implementations • 17 Apr 2023 • Kleanthis Avramidis, Kranti Adsul, Digbalay Bose, Shrikanth Narayanan
This paper presents the approach and results of USC SAIL's submission to the Signal Processing Grand Challenge 2023 - e-Prevention (Task 2), on detecting relapses in psychotic patients.
no code implementations • 3 Apr 2023 • Nikolaos Antoniou, Athanasios Katsamanis, Theodoros Giannakopoulos, Shrikanth Narayanan
There is an imminent need for guidelines and standard test sets to allow direct and fair comparisons of speech emotion recognition (SER) systems.
1 code implementation • 13 Mar 2023 • Digbalay Bose, Rajat Hebbar, Krishna Somandepalli, Shrikanth Narayanan
The process of human affect understanding involves the ability to infer person-specific emotional states from various sources including images, speech, and language.
1 code implementation • 14 Feb 2023 • Rajat Hebbar, Digbalay Bose, Krishna Somandepalli, Veena Vijai, Shrikanth Narayanan
In this work, we present a dataset of audio events called Subtitle-Aligned Movie Sounds (SAM-S).
1 code implementation • 1 Dec 2022 • Rahul Sharma, Shrikanth Narayanan
Active speaker detection in videos addresses the problem of associating a source face, visible in the video frames, with the underlying speech in the audio modality.
Ranked #1 on Audio-Visual Active Speaker Detection on VPCD
no code implementations • 7 Nov 2022 • Rimita Lahiri, Md Nasir, Catherine Lord, So Hyun Kim, Shrikanth Narayanan
Vocal entrainment is a social adaptation mechanism in human interaction, knowledge of which can offer useful insights into an individual's cognitive-behavioral characteristics.
1 code implementation • 31 Oct 2022 • Georgios Chochlakis, Gireesh Mahajan, Sabyasachee Baruah, Keith Burghardt, Kristina Lerman, Shrikanth Narayanan
In this work, we study how we can build a single model that can transition between these different configurations by leveraging multilingual models and Demux, a transformer-based model whose input includes the emotions of interest, enabling us to dynamically change the emotions predicted by the model.
1 code implementation • 28 Oct 2022 • Kleanthis Avramidis, Tiantian Feng, Digbalay Bose, Shrikanth Narayanan
Detecting unsafe driving states, such as stress, drowsiness, and fatigue, is an important component of ensuring driving safety and an essential prerequisite for automatic intervention systems in vehicles.
1 code implementation • 28 Oct 2022 • Georgios Chochlakis, Gireesh Mahajan, Sabyasachee Baruah, Keith Burghardt, Kristina Lerman, Shrikanth Narayanan
First, we develop two modeling approaches to the problem in order to capture word associations of the emotion words themselves, by either including the emotions in the input, or by leveraging Masked Language Modeling (MLM).
no code implementations • 25 Oct 2022 • Zhuohao Chen, Nikolaos Flemotomos, Zac E. Imel, David C. Atkins, Shrikanth Narayanan
In psychotherapy interactions, the quality of a session is assessed by codifying the communicative behaviors of participants during the conversation through manual observation and annotation.
1 code implementation • 20 Oct 2022 • Digbalay Bose, Rajat Hebbar, Krishna Somandepalli, Haoyang Zhang, Yin Cui, Kree Cole-McLaughlin, Huisheng Wang, Shrikanth Narayanan
Longform media such as movies have complex narrative structures, with events spanning a rich variety of ambient visual scenes.
1 code implementation • 24 Sep 2022 • Rahul Sharma, Shrikanth Narayanan
We leverage speaker identity information from speech and faces, and formulate active speaker detection as a speech-face assignment task such that the active speaker's face and the underlying speech identify the same person (character).
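The speech-face assignment idea can be sketched as plain cosine-similarity matching between identity embeddings. The function below is a hypothetical illustration, not the paper's implementation; the upstream speaker-verification and face-recognition embedding extractors are assumed and not shown.

```python
import numpy as np

def assign_speech_to_faces(speech_emb, face_emb):
    """Match each speech segment to the face track whose identity
    embedding is most similar, via cosine similarity.

    speech_emb: (num_segments, d) speaker embeddings (hypothetical
                speaker-verification network upstream, not shown).
    face_emb:   (num_tracks, d) face embeddings (hypothetical
                face-recognition network upstream, not shown).
    Returns the best face index per speech segment and the similarity matrix.
    """
    s = speech_emb / np.linalg.norm(speech_emb, axis=1, keepdims=True)
    f = face_emb / np.linalg.norm(face_emb, axis=1, keepdims=True)
    sim = s @ f.T                      # cosine similarities
    return sim.argmax(axis=1), sim

# Toy example: two segments, two face tracks with swapped identities.
speech = np.array([[1.0, 0.0], [0.0, 1.0]])
faces = np.array([[0.0, 1.0], [1.0, 0.0]])
best_face, sim = assign_speech_to_faces(speech, faces)
```

In a full system the argmax would be replaced by a joint assignment over all segments and tracks, but the identity-matching principle is the same.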
1 code implementation • 12 Sep 2022 • Xiaoyi Qin, Ming Li, Hui Bu, Shrikanth Narayanan, Haizhou Li
In addition, a supplementary set for the FFSVC2020 dataset is released this year.
1 code implementation • 18 Aug 2022 • Georgios Chochlakis, Tejas Srinivasan, Jesse Thomason, Shrikanth Narayanan
VAuLT is an extension of the popular Vision-and-Language Transformer (ViLT), and improves performance on vision-and-language (VL) tasks that involve more complex text inputs than image captions while having minimal impact on training and inference efficiency.
1 code implementation • 10 Jul 2022 • Kleanthis Avramidis, Mohammad Rostami, Melinda Chang, Shrikanth Narayanan
Papilledema is an ophthalmic neurologic disorder in which increased intracranial pressure leads to swelling of the optic nerves.
no code implementations • 28 Apr 2022 • Victor Ardulov, Torrey A. Creed, David C. Atkins, Shrikanth Narayanan
In order to increase mental health equity among the most vulnerable and marginalized communities, it is important to increase access to high-quality therapists.
no code implementations • 1 Apr 2022 • Nikolaos Flemotomos, Shrikanth Narayanan
Speaker clustering is an essential step in conventional speaker diarization systems and is typically addressed as an audio-only speech processing task.
no code implementations • 30 Mar 2022 • Rahul Sharma, Shrikanth Narayanan
Speaker diarization is one of the critical components of computational media intelligence as it enables a character-level analysis of story portrayals and media content understanding.
no code implementations • 29 Mar 2022 • Nicholas Mehlman, Anirudh Sreeram, Raghuveer Peri, Shrikanth Narayanan
A variety of recent works have looked into defenses for deep neural networks against adversarial attacks, particularly within the image processing domain.
Automatic Speech Recognition (ASR) +1
no code implementations • 21 Mar 2022 • Rahul Sharma, Shrikanth Narayanan
We curate a background character dataset which provides annotations for background characters in a set of TV shows, and use it to evaluate the performance of the background character detection framework.
1 code implementation • 17 Mar 2022 • Raghuveer Peri, Krishna Somandepalli, Shrikanth Narayanan
In this paper, we systematically evaluate the biases present in speaker recognition systems with respect to gender across a range of system operating points.
1 code implementation • 15 Mar 2022 • Tiantian Feng, Shrikanth Narayanan
In this work, we propose a semi-supervised federated learning framework, Semi-FedSER, that utilizes both labeled and unlabeled data samples to address the challenge of limited labeled data samples in FL.
no code implementations • 13 Oct 2021 • Digbalay Bose, Krishna Somandepalli, Souvik Kundu, Rimita Lahiri, Jonathan Gratch, Shrikanth Narayanan
Computational modeling of the emotions evoked by art in humans is a challenging problem because of the subjective and nuanced nature of art and affective signals.
no code implementations • 11 Oct 2021 • Justin Olah, Sabyasachee Baruah, Digbalay Bose, Shrikanth Narayanan
Emotion recognition from text is a challenging task due to diverse emotion taxonomies, lack of reliable labeled data in different domains, and highly subjective annotation standards.
1 code implementation • 8 Oct 2021 • Sabyasachee Baruah, Krishna Somandepalli, Shrikanth Narayanan
We analyze the frequency and sentiment trends of different occupations, study the effect of media attributes like genre, country of production, and title type on these trends, and investigate if the incidence of professions in media subtitles correlates with their real-world employment statistics.
no code implementations • 3 Sep 2021 • Prashanth Gurunath Shivakumar, Somer Bishop, Catherine Lord, Shrikanth Narayanan
In this paper, we propose features specific to children and focus on the speaker's phone duration as an important biomarker of children's age.
no code implementations • 12 Jul 2021 • Anirudh Sreeram, Nicholas Mehlman, Raghuveer Peri, Dillon Knox, Shrikanth Narayanan
In this paper we investigate speech denoising as a defense against adversarial attacks on automatic speech recognition (ASR) systems.
no code implementations • 15 Jun 2021 • Zhuohao Chen, Nikolaos Flemotomos, Karan Singla, Torrey A. Creed, David C. Atkins, Shrikanth Narayanan
In particular, we model the global quality as a linear function of the local quality scores, which allows us to update the segment-level quality estimates based on the session-level quality prediction.
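As a hedged illustration of coupling local and global quality scores, the toy function below assumes the session-level score is a linear function of the mean segment-level score and redistributes the session-level residual back to the segments; the names and the update rule are illustrative, not the paper's method.

```python
import numpy as np

def fuse_quality(local_scores, session_pred, w=1.0, b=0.0, alpha=0.5):
    """Refine segment-level quality estimates using a session-level
    prediction, assuming (illustratively) that the global score is a
    linear function of the mean local score: q_global = w * mean(local) + b.
    The residual between the session prediction and the implied global
    score is redistributed to the segments with step size alpha."""
    local_scores = np.asarray(local_scores, dtype=float)
    implied_global = w * local_scores.mean() + b
    residual = session_pred - implied_global
    return local_scores + alpha * residual / max(w, 1e-8)

# Segments imply a global score of 4.0; the session model predicts 5.0,
# so each segment estimate is nudged upward.
refined = fuse_quality([3.0, 4.0, 5.0], session_pred=5.0, alpha=0.5)
```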
no code implementations • 5 Apr 2021 • Haoqi Li, Yelin Kim, Cheng-Hao Kuo, Shrikanth Narayanan
Key challenges in developing generalized automatic emotion recognition systems include scarcity of labeled data and lack of gold-standard references.
no code implementations • 1 Apr 2021 • Haoqi Li, Brian Baucom, Shrikanth Narayanan, Panayiotis Georgiou
In this paper, we exploit the stationary properties of human behavior within an interaction and present a representation learning method to capture behavioral information from speech in an unsupervised way.
no code implementations • 4 Mar 2021 • Nauman Dawalatabad, Jilt Sebastian, Jom Kuriakose, C. Chandra Sekhar, Shrikanth Narayanan, Hema A. Murthy
In this work, we address the problem of separating the percussive voices in the taniavartanam segments of Carnatic music.
no code implementations • 23 Feb 2021 • Nikolaos Flemotomos, Victor R. Martinez, Zhuohao Chen, Torrey A. Creed, David C. Atkins, Shrikanth Narayanan
In this work, we propose a BERT-based model for automatic behavioral scoring of a specific type of psychotherapy, called Cognitive Behavioral Therapy (CBT), where prior work is limited to frequency-based language features and/or short text excerpts which do not capture the unique elements involved in a spontaneous long conversational interaction.
no code implementations • 22 Feb 2021 • Nikolaos Flemotomos, Victor R. Martinez, Zhuohao Chen, Karan Singla, Victor Ardulov, Raghuveer Peri, Derek D. Caperton, James Gibson, Michael J. Tanana, Panayiotis Georgiou, Jake Van Epps, Sarah P. Lord, Tad Hirsch, Zac E. Imel, David C. Atkins, Shrikanth Narayanan
With the growing prevalence of psychological interventions, it is vital to have measures which rate the effectiveness of psychological care to assist in training, supervision, and quality assurance of services.
no code implementations • 19 Feb 2021 • Prashanth Gurunath Shivakumar, Shrikanth Narayanan
A key desideratum for inclusive and accessible speech recognition technology is ensuring its robust performance on children's speech.
1 code implementation • 3 Feb 2021 • Prashanth Gurunath Shivakumar, Panayiotis Georgiou, Shrikanth Narayanan
Confusion2vec, motivated from human speech production and perception, is a word vector representation which encodes ambiguities present in human spoken language in addition to semantic and syntactic information.
Automatic Speech Recognition (ASR) +5
no code implementations • 24 Jan 2021 • Tae Jin Park, Naoyuki Kanda, Dimitrios Dimitriadis, Kyu J. Han, Shinji Watanabe, Shrikanth Narayanan
Speaker diarization is a task to label audio or video recordings with classes that correspond to speaker identity, or in short, a task to identify "who spoke when".
no code implementations • 25 Aug 2020 • Krishna Somandepalli, Rajat Hebbar, Shrikanth Narayanan
Our work in this paper focuses on two key aspects of this problem: the lack of domain-specific training or benchmark datasets, and adapting face embeddings learned on web images to long-form content, specifically movies.
no code implementations • 19 Aug 2020 • Victor R. Martinez, Krishna Somandepalli, Karan Singla, Anil Ramakrishna, Yalda T. Uhls, Shrikanth Narayanan
To date, we are the first to show that language used in movie scripts is a strong indicator of violent content, and that there are systematic portrayals of certain demographics as victims and perpetrators in a large dataset.
1 code implementation • 18 Aug 2020 • Arindam Jati, Chin-Cheng Hsu, Monisankha Pal, Raghuveer Peri, Wael Abd-Almageed, Shrikanth Narayanan
Robust speaker recognition, including in the presence of malicious attacks, is becoming increasingly important and essential, especially due to the proliferation of smart speakers and personal agents that interact with an individual's voice commands to perform diverse, and even sensitive, tasks.
1 code implementation • 31 Jul 2020 • Manoj Kumar, Tae Jin Park, Somer Bishop, Shrikanth Narayanan
Our experiments illustrate the applicability of meta-learning as a generalized learning paradigm for training deep neural speaker embeddings.
Audio and Speech Processing • Sound
no code implementations • 27 Jul 2020 • Mari Ganesh Kumar, Shrikanth Narayanan, Mriganka Sur, Hema A. Murthy
These high dimensional statistics are then projected to a lower dimensional space where the biometric information is preserved.
no code implementations • ACL 2020 • Karan Singla, Zhuohao Chen, David Atkins, Shrikanth Narayanan
Spoken language understanding tasks usually rely on pipelines involving complex processing blocks such as voice activity detection, speaker diarization, and automatic speech recognition (ASR).
no code implementations • 20 May 2020 • Anil Ramakrishna, Shrikanth Narayanan
We then use this parameter at sentence level to estimate the norms.
no code implementations • 15 May 2020 • Zhuohao Chen, Nikolaos Flemotomos, Victor Ardulov, Torrey A. Creed, Zac E. Imel, David C. Atkins, Shrikanth Narayanan
We propose a novel method to augment the word-based features with the utterance level tags for subsequent CBT code estimation.
no code implementations • WS 2020 • Ming-Chang Chiu, Tiantian Feng, Xiang Ren, Shrikanth Narayanan
Toward that goal, in this work, we present a method to evaluate the quality of a screenplay based on linguistic cues.
no code implementations • 12 May 2020 • Krishna Somandepalli, Shrikanth Narayanan
A key objective in multi-view learning is to model the information common to multiple parallel views of a class of objects/events to improve downstream learning tasks.
no code implementations • 6 May 2020 • Anil Ramakrishna, Rahul Gupta, Shrikanth Narayanan
In this work we address this by proposing a generative model for multi-dimensional annotation fusion, which models the dimensions jointly leading to more accurate ground truth estimates.
no code implementations • 13 Apr 2020 • Tae Jin Park, Kyu J. Han, Jing Huang, Xiaodong He, Bo-Wen Zhou, Panayiotis Georgiou, Shrikanth Narayanan
This work presents a novel approach for speaker diarization to leverage lexical information provided by automatic speech recognition.
Automatic Speech Recognition (ASR) +4
no code implementations • 18 Mar 2020 • Karel Mundnich, Brandon M. Booth, Michelle L'Hommedieu, Tiantian Feng, Benjamin Girault, Justin L'Hommedieu, Mackenzie Wildman, Sophia Skaaden, Amrutha Nadarajan, Jennifer L. Villatte, Tiago H. Falk, Kristina Lerman, Emilio Ferrara, Shrikanth Narayanan
We designed the study to investigate the use of off-the-shelf wearable and environmental sensors to understand individual-specific constructs such as job performance, interpersonal interaction, and well-being of hospital workers over time in their natural day-to-day job settings.
no code implementations • 16 Mar 2020 • Zhuohao Chen, Karan Singla, David C. Atkins, Zac E. Imel, Shrikanth Narayanan
The DAN-LPE framework simultaneously trains a domain adversarial net and estimates label proportions from the confusion of the source domain and the predictions of the target domain.
no code implementations • 9 Mar 2020 • Rahul Sharma, Krishna Somandepalli, Shrikanth Narayanan
We present a weakly supervised system for localizing active speakers in movie content, avoiding the need for manual annotations of active speakers in visual frames, which are very expensive to acquire.
1 code implementation • 5 Mar 2020 • Tae Jin Park, Kyu J. Han, Manoj Kumar, Shrikanth Narayanan
In this study, we propose a new spectral clustering framework that can auto-tune the parameters of the clustering algorithm in the context of speaker diarization.
Ranked #1 on Speaker Diarization on CALLHOME (DER metric, ignoring overlaps)
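A minimal numpy sketch in the spirit of normalized-maximum-eigengap (NME) auto-tuning: for each candidate row-pruning parameter p, keep the p strongest affinities per row, symmetrize, and locate the largest eigengap of the graph Laplacian; the p with the best p-to-eigengap ratio is selected, and its eigengap position gives the speaker count. This is an illustrative simplification, not the paper's exact algorithm.

```python
import numpy as np

def estimate_num_speakers(affinity, p_grid=range(2, 10)):
    """Auto-tune the pruning parameter p of a spectral-clustering-style
    speaker counter by the ratio of p to the largest Laplacian eigengap."""
    n = affinity.shape[0]
    best = None
    for p in p_grid:
        if p >= n:
            break
        pruned = np.zeros_like(affinity)
        for i in range(n):
            keep = np.argsort(affinity[i])[-p:]      # p strongest neighbours
            pruned[i, keep] = affinity[i, keep]
        sym = 0.5 * (pruned + pruned.T)              # symmetrize
        lap = np.diag(sym.sum(axis=1)) - sym         # unnormalized Laplacian
        eigvals = np.sort(np.linalg.eigvalsh(lap))
        gaps = np.diff(eigvals[: n // 2 + 1])        # gaps among small eigvals
        k = int(np.argmax(gaps)) + 1                 # eigengap position
        ratio = p / (gaps.max() + 1e-10)
        if best is None or ratio < best[0]:
            best = (ratio, k)
    return best[1]

# Two well-separated 3-segment "speakers" in a toy affinity matrix.
A = np.full((6, 6), 0.1)
A[:3, :3] = 0.9
A[3:, 3:] = 0.9
np.fill_diagonal(A, 1.0)
n_spk = estimate_num_speakers(A)
```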
no code implementations • 21 Nov 2019 • Sandeep Nallan Chakravarthula, Brian Baucom, Shrikanth Narayanan, Panayiotis Georgiou
In this paper, we investigate this link and present an analysis framework that determines appropriate window lengths for the task of behavior estimation.
no code implementations • 16 Nov 2019 • Nazgol Tavabi, Homa Hosseinmardi, Jennifer L. Villatte, Andrés Abeliuk, Shrikanth Narayanan, Emilio Ferrara, Kristina Lerman
Continuous collection of physiological data from wearable sensors enables temporal characterization of individual behaviors.
no code implementations • 10 Nov 2019 • Arindam Jati, Amrutha Nadarajan, Karel Mundnich, Shrikanth Narayanan
In this paper, we address the task of characterizing acoustic scenes in a workplace setting from audio recordings collected with wearable microphones.
no code implementations • 4 Nov 2019 • Haoqi Li, Ming Tu, Jing Huang, Shrikanth Narayanan, Panayiotis Georgiou
In this paper, we propose a machine learning framework to obtain speech emotion representations by limiting the effect of speaker variability in the speech signals.
1 code implementation • 3 Nov 2019 • Raghuveer Peri, Monisankha Pal, Arindam Jati, Krishna Somandepalli, Shrikanth Narayanan
In this paper, we address the problem of speaker recognition in challenging acoustic conditions using a novel method to extract robust speaker-discriminative speech representations.
no code implementations • 25 Oct 2019 • Rimita Lahiri, Manoj Kumar, Somer Bishop, Shrikanth Narayanan
Diagnostic procedures for ASD (autism spectrum disorder) involve semi-naturalistic interactions between the child and a clinician.
no code implementations • 23 Oct 2019 • Prashanth Gurunath Shivakumar, Naveen Kumar, Panayiotis Georgiou, Shrikanth Narayanan
We introduce and analyze different recurrent neural network architectures for incremental and online processing of the ASR transcripts and compare them to existing offline systems.
Automatic Speech Recognition (ASR) +8
1 code implementation • 10 Sep 2019 • Shao-Yen Tseng, Panayiotis Georgiou, Shrikanth Narayanan
Word embeddings such as ELMo have recently been shown to model word semantics with greater efficacy through contextualized learning on large-scale language corpora, resulting in significant improvement in the state of the art across many natural language tasks.
no code implementations • 1 Sep 2019 • Vidhyasaharan Sethu, Emily Mower Provost, Julien Epps, Carlos Busso, NIcholas Cummins, Shrikanth Narayanan
A key reason for this is the lack of a common mathematical framework to describe all the relevant elements of emotion representations.
no code implementations • 31 Aug 2019 • Prashanth Gurunath Shivakumar, Shao-Yen Tseng, Panayiotis Georgiou, Shrikanth Narayanan
In this work we derive motivation from psycholinguistics and propose the addition of behavioral information into the context of language modeling.
1 code implementation • 16 Jul 2019 • Kunal Dhawan, Colin Vaz, Ruchir Travadi, Shrikanth Narayanan
We propose an algorithm to extract noise-robust acoustic features from noisy speech.
no code implementations • 1 May 2019 • Victor R. Martinez, Anil Ramakrishna, Ming-Chang Chiu, Karan Singla, Shrikanth Narayanan
In this work, we describe our submission for the 2019 Sentiment, Emotion and Cognitive state (SEC) pilot task of the LORELEI project.
no code implementations • 12 Apr 2019 • Md Nasir, Sandeep Nallan Chakravarthula, Brian Baucom, David C. Atkins, Panayiotis Georgiou, Shrikanth Narayanan
We find that our proposed measure is correlated with the therapist's empathy towards their patient in Motivational Interviewing and with affective behaviors in Couples Therapy.
1 code implementation • 3 Apr 2019 • Krishna Somandepalli, Naveen Kumar, Ruchir Travadi, Shrikanth Narayanan
We propose Deep Multiset Canonical Correlation Analysis (dMCCA) as an extension to representation learning using CCA when the underlying signal is observed across multiple (more than two) modalities.
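For intuition, the linear ancestor of dMCCA (multiset CCA in its MAXVAR form) can be written as a generalized eigenproblem; dMCCA replaces the linear projections below with deep networks. This sketch is standard linear MCCA, not the paper's model.

```python
import numpy as np

def linear_mcca(views, k=1, reg=1e-3):
    """Linear multiset CCA (MAXVAR form) as a generalized eigenproblem
    R w = lam * D w, with R the covariance of the concatenated views and
    D its block-diagonal (within-view) part. Solved by whitening with the
    Cholesky factor of D."""
    views = [v - v.mean(axis=0) for v in views]       # center each view
    X = np.concatenate(views, axis=1)
    n = X.shape[0]
    R = X.T @ X / (n - 1)
    D = np.zeros_like(R)
    start = 0
    for v in views:                                   # within-view blocks
        d = v.shape[1]
        D[start:start + d, start:start + d] = R[start:start + d, start:start + d]
        start += d
    D += reg * np.eye(D.shape[0])                     # keep D positive definite
    L = np.linalg.cholesky(D)
    Linv = np.linalg.inv(L)
    vals, vecs = np.linalg.eigh(Linv @ R @ Linv.T)    # symmetric eigenproblem
    order = np.argsort(vals)[::-1]
    return vals[order][:k], (Linv.T @ vecs)[:, order][:, :k]

# Three 2-D views sharing one strongly correlated coordinate: the top
# generalized eigenvalue approaches the number of views (here, 3).
rng = np.random.default_rng(0)
z = rng.standard_normal((500, 1))
views = [np.hstack([z + 0.1 * rng.standard_normal((500, 1)),
                    rng.standard_normal((500, 1))]) for _ in range(3)]
vals, W = linear_mcca(views, k=1)
```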
no code implementations • 2 Apr 2019 • Karel Mundnich, Brandon M. Booth, Benjamin Girault, Shrikanth Narayanan
In this work, we propose a novel annotation approach using triplet embeddings.
no code implementations • 26 Mar 2019 • Taruna Agrawal, Rahul Gupta, Shrikanth Narayanan
Convolutional Neural Networks (CNNs) have revolutionized performances in several machine learning tasks such as image classification, object tracking, and keyword spotting.
no code implementations • 29 Oct 2018 • James Gibson, David C. Atkins, Torrey Creed, Zac Imel, Panayiotis Georgiou, Shrikanth Narayanan
We propose a methodology for estimating human behaviors in psychotherapy sessions using multi-label and multi-task learning paradigms.
no code implementations • 31 Aug 2018 • Homa Hosseinmardi, Amir Ghasemian, Shrikanth Narayanan, Kristina Lerman, Emilio Ferrara
Today's densely instrumented world offers tremendous opportunities for continuous acquisition and analysis of multimodal sensor data providing temporal characterization of an individual's behaviors.
no code implementations • ACL 2018 • Karan Singla, Dogan Can, Shrikanth Narayanan
We present a novel multi-task modeling approach to learning multilingual distributed representations of text.
Cross-Lingual Document Classification • Document Classification +5
no code implementations • 8 Jun 2018 • Victor Ardulov, Manoj Kumar, Shanna Williams, Thomas Lyon, Shrikanth Narayanan
Child Forensic Interviewing (FI) presents a challenge for effective information retrieval and decision making.
no code implementations • 7 Jun 2018 • Rahul Gupta, Saurabh Sahu, Carol Espy-Wilson, Shrikanth Narayanan
Sentiment classification involves quantifying the affective reaction of a human to a document, media item or an event.
no code implementations • 23 Apr 2018 • Md Nasir, Brian Baucom, Shrikanth Narayanan, Panayiotis Georgiou
Entrainment is a known adaptation mechanism that causes interaction participants to adapt or synchronize their acoustic characteristics.
3 code implementations • SEMEVAL 2018 • Christos Baziotis, Nikos Athanasiou, Alexandra Chronopoulou, Athanasia Kolovou, Georgios Paraskevopoulos, Nikolaos Ellinas, Shrikanth Narayanan, Alexandros Potamianos
In this paper we present deep-learning models submitted to the SemEval-2018 Task 1 competition: "Affect in Tweets".
no code implementations • SEMEVAL 2017 • Athanasia Kolovou, Filippos Kokkinos, Aris Fergadis, Pinelopi Papalampidi, Elias Iosif, Nikolaos Malandrakis, Elisavet Palogiannidi, Haris Papageorgiou, Shrikanth Narayanan, Alexandros Potamianos
In this paper, we describe our submission to SemEval2017 Task 4: Sentiment Analysis in Twitter.
no code implementations • ACL 2017 • Anil Ramakrishna, Victor R. Martínez, Nikolaos Malandrakis, Karan Singla, Shrikanth Narayanan
We examine differences in portrayal of characters in movies using psycholinguistic and graph theoretic measures computed directly from screenplays.
no code implementations • 13 Dec 2016 • Rahul Gupta, Shrikanth Narayanan
In this work, we propose Expectation-Maximization (EM) based algorithms that rely on the judgments from multiple annotators and the object attributes for inferring the latent ground truth.
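A classic instance of such EM-based fusion is the Dawid-Skene model for binary judgments; the toy version below omits the object attributes the paper additionally conditions on, and is a sketch rather than the proposed algorithm.

```python
import numpy as np

def dawid_skene_binary(labels, n_iter=50):
    """Toy EM in the spirit of Dawid & Skene: alternate between estimating
    per-annotator sensitivity/specificity (M-step) and the posterior that
    each item's latent true label is 1 (E-step).

    labels: (num_annotators, num_items) array of 0/1 judgments."""
    labels = np.asarray(labels, dtype=float)
    q = labels.mean(axis=0)                           # initial soft truth
    for _ in range(n_iter):
        # M-step: annotator reliabilities given the soft truth q
        sens = (labels * q).sum(axis=1) / (q.sum() + 1e-9)
        spec = ((1 - labels) * (1 - q)).sum(axis=1) / ((1 - q).sum() + 1e-9)
        sens = np.clip(sens, 1e-3, 1 - 1e-3)
        spec = np.clip(spec, 1e-3, 1 - 1e-3)
        prior = q.mean()
        # E-step: posterior probability that each item's true label is 1
        log_pos = np.log(prior) + (labels * np.log(sens[:, None]) +
                  (1 - labels) * np.log(1 - sens[:, None])).sum(axis=0)
        log_neg = np.log(1 - prior) + ((1 - labels) * np.log(spec[:, None]) +
                  labels * np.log(1 - spec[:, None])).sum(axis=0)
        q = 1.0 / (1.0 + np.exp(log_neg - log_pos))
    return q

# Three reliable annotators and one adversarial one; EM recovers the truth.
truth = np.array([1, 1, 0, 0, 1, 0])
votes = np.vstack([truth, truth, truth, 1 - truth])
q = dawid_skene_binary(votes)
```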
no code implementations • SEMEVAL 2016 • Elisavet Palogiannidi, Athanasia Kolovou, Fenia Christopoulou, Filippos Kokkinos, Elias Iosif, Nikolaos Malandrakis, Haris Papageorgiou, Shrikanth Narayanan, Alexandros Potamianos
no code implementations • LREC 2012 • Priti Aggarwal, Ron Artstein, Jillian Gerten, Athanasios Katsamanis, Shrikanth Narayanan, Angela Nazarian, David Traum
In addition to speech recordings, the corpus contains the outputs of speech recognition performed at the time of utterance as well as the system interpretation of the utterances.