Search Results for author: Vimal Manohar

Found 12 papers, 0 papers with code

Self-Supervised Representations for Singing Voice Conversion

no code implementations21 Mar 2023 Tejas Jayashankar, JiLong Wu, Leda Sari, David Kant, Vimal Manohar, Qing He

A singing voice conversion model converts a song in the voice of an arbitrary source singer to the voice of a target singer.

Disentanglement, Voice Conversion
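
The general disentanglement recipe behind such systems can be sketched as: extract speaker-independent content features with a self-supervised encoder, condition a decoder on a target-speaker embedding, and generate acoustic features. The snippet below is a minimal, hypothetical illustration of that idea; the module names, dimensions, and toy GRU decoder are assumptions for readability, not the paper's actual architecture.

```python
# Minimal sketch of an SSL-feature-based voice conversion forward pass.
# All module names and shapes are illustrative assumptions, not the
# architecture from the paper above.
import torch
import torch.nn as nn

class VoiceConverter(nn.Module):
    def __init__(self, ssl_dim=768, spk_dim=256, hidden=512, n_mels=80):
        super().__init__()
        # Project content features from a frozen self-supervised encoder
        # (e.g. HuBERT-style, 768-dim frames) into the decoder space.
        self.content_proj = nn.Linear(ssl_dim, hidden)
        # Target-speaker embedding conditions the decoder.
        self.spk_proj = nn.Linear(spk_dim, hidden)
        # Toy recurrent decoder standing in for a real acoustic model.
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_mels)

    def forward(self, content_feats, spk_embedding):
        # content_feats: (batch, frames, ssl_dim) from the source singer
        # spk_embedding: (batch, spk_dim) for the target singer
        h = self.content_proj(content_feats) \
            + self.spk_proj(spk_embedding).unsqueeze(1)
        h, _ = self.decoder(h)
        return self.out(h)  # (batch, frames, n_mels) mel-spectrogram

# Usage with random tensors standing in for real features:
model = VoiceConverter()
mels = model(torch.randn(2, 100, 768), torch.randn(2, 256))
print(mels.shape)  # torch.Size([2, 100, 80])
```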

Voice-preserving Zero-shot Multiple Accent Conversion

no code implementations23 Nov 2022 Mumin Jin, Prashant Serai, JiLong Wu, Andros Tjandra, Vimal Manohar, Qing He

Most people who have tried to learn a foreign language would have experienced difficulties understanding or speaking with a native speaker's accent.

Automatic Speech Recognition and Topic Identification for Almost-Zero-Resource Languages

no code implementations23 Feb 2018 Matthew Wiesner, Chunxi Liu, Lucas Ondel, Craig Harman, Vimal Manohar, Jan Trmal, Zhongqiang Huang, Najim Dehak, Sanjeev Khudanpur

Automatic speech recognition (ASR) systems often need to be developed for extremely low-resource languages to serve end-uses such as audio content categorization and search.

Automatic Speech Recognition (ASR) +2
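
One of the end-uses named above, topic-based content categorization over (possibly errorful) ASR transcripts, is commonly handled with bag-of-words features and a linear classifier. The sketch below shows that generic pipeline with scikit-learn; the toy transcripts and labels are placeholders, not data or the method from the paper.

```python
# Illustrative topic identification over ASR transcripts:
# TF-IDF bag-of-words features plus a linear classifier.
# Toy corpus and labels below are placeholders, not the paper's data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

transcripts = [
    "the match ended two one after extra time",
    "the central bank raised interest rates again",
    "the striker scored in the final minute",
    "inflation figures surprised the markets today",
]
topics = ["sports", "finance", "sports", "finance"]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(transcripts, topics)
print(clf.predict(["rates and markets were volatile"]))  # -> ['finance']
```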

Using of heterogeneous corpora for training of an ASR system

no code implementations1 Jun 2017 Jan Trmal, Gaurav Kumar, Vimal Manohar, Sanjeev Khudanpur, Matt Post, Paul McNamee

The paper summarizes the development of the LVCSR system built as a part of the Pashto speech-translation system at the SCALE (Summer Camp for Applied Language Exploration) 2015 workshop on "Speech-to-text-translation for low-resource languages".

Speech Recognition +2

Purely sequence-trained neural networks for ASR based on lattice-free MMI

no code implementations INTERSPEECH 2016 Daniel Povey, Vijayaditya Peddinti, Daniel Galvez, Pegah Ghahremani, Vimal Manohar, Xingyu Na, Yiming Wang, Sanjeev Khudanpur

Models trained with LF-MMI provide a relative word error rate reduction of ∼11.5% over those trained with the cross-entropy objective function, and ∼8% over those trained with cross-entropy and sMBR objective functions.

Language Modelling, Speech Recognition
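
For clarity, a relative word error rate reduction like the ∼11.5% quoted above is computed as (baseline WER − new WER) / baseline WER. The numbers in the sketch below are made up for illustration, not results from the paper.

```python
# How a relative WER reduction such as the ~11.5% above is computed.
# The example WER values are illustrative only.
def relative_wer_reduction(baseline_wer: float, new_wer: float) -> float:
    """Fractional improvement of new_wer relative to baseline_wer."""
    return (baseline_wer - new_wer) / baseline_wer

# e.g. a cross-entropy baseline at 10.0% WER vs. an LF-MMI model at 8.85%:
print(f"{relative_wer_reduction(10.0, 8.85):.1%}")  # 11.5%
```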
