no code implementations • 31 Jul 2023 • Rimita Lahiri, Tiantian Feng, Rajat Hebbar, Catherine Lord, So Hyun Kim, Shrikanth Narayanan
We address the problem of detecting who spoke when in child-inclusive spoken interactions, i.e., automatic child-adult speaker classification.
no code implementations • 23 May 2023 • Anfeng Xu, Rajat Hebbar, Rimita Lahiri, Tiantian Feng, Lindsay Butler, Lue Shen, Helen Tager-Flusberg, Shrikanth Narayanan
This paper proposes applications of speech processing technologies in support of automated assessment of children's spoken language development, classifying between child and adult speech and between speech and nonverbal vocalization in NLS, with respective F1 macro scores of 82.6% and 67.8%, underscoring the potential for accurate and scalable tools for ASD research and clinical use.
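The F1 macro score reported above averages per-class F1 so that each class counts equally, regardless of class imbalance. A minimal sketch of that computation (the labels below are made up for illustration, not taken from the paper):

```python
# Illustrative macro-F1 computation for a child/adult speaker classifier.
# The label sequences are hypothetical examples, not data from the paper.
def macro_f1(y_true, y_pred, labels):
    f1s = []
    for c in labels:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    # Macro averaging: unweighted mean over classes, so the minority
    # class (often child speech) contributes as much as the majority.
    return sum(f1s) / len(f1s)

y_true = ["child", "adult", "adult", "child", "adult"]
y_pred = ["child", "adult", "child", "child", "adult"]
score = macro_f1(y_true, y_pred, ["child", "adult"])  # 0.8 on this toy data
```

This is equivalent to `sklearn.metrics.f1_score(..., average="macro")` for the two-class case.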
no code implementations • 7 Nov 2022 • Rimita Lahiri, Md Nasir, Catherine Lord, So Hyun Kim, Shrikanth Narayanan
Vocal entrainment is a social adaptation mechanism in human interaction, knowledge of which can offer useful insights into an individual's cognitive-behavioral characteristics.
no code implementations • 15 Oct 2021 • Rimita Lahiri, Kenichi Kumatani, Eric Sun, Yao Qian
Multilingual end-to-end (E2E) models have shown great potential for expanding language coverage in automatic speech recognition (ASR).
no code implementations • 13 Oct 2021 • Digbalay Bose, Krishna Somandepalli, Souvik Kundu, Rimita Lahiri, Jonathan Gratch, Shrikanth Narayanan
Computational modeling of the emotions evoked by art in humans is a challenging problem because of the subjective and nuanced nature of art and affective signals.
no code implementations • 25 Oct 2019 • Rimita Lahiri, Manoj Kumar, Somer Bishop, Shrikanth Narayanan
Diagnostic procedures for ASD (autism spectrum disorder) involve semi-naturalistic interactions between the child and a clinician.