no code implementations • EMNLP (WNUT) 2020 • Luca Molteni, Mittul Singh, Juho Leinonen, Katri Leino, Mikko Kurimo, Emanuele Della Valle
In this article, we compare two crowdsourcing sources on a dialogue paraphrasing task revolving around a chatbot service.
no code implementations • 28 Oct 2022 • Tamás Grósz, Mittul Singh, Sudarsana Reddy Kadiri, Hemant Kathania, Mikko Kurimo
The current state-of-the-art methods proposed for these tasks are ensembles based on deep neural networks like ResNets in conjunction with feature engineering.
1 code implementation • 29 Aug 2020 • Hemant Kathania, Mittul Singh, Tamás Grósz, Mikko Kurimo
First, we apply prosody-based data augmentation to supplement the audio data.
Automatic Speech Recognition (ASR) +3
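The paper does not detail its augmentation pipeline here, but prosody-based augmentation commonly includes speed perturbation of the waveform. A minimal sketch, assuming simple linear resampling with NumPy (the function name and factors are illustrative, not the authors' implementation):

```python
import numpy as np

def speed_perturb(samples: np.ndarray, factor: float) -> np.ndarray:
    """Resample a waveform by `factor` (e.g. 0.9 or 1.1), changing both
    tempo and pitch -- a common prosody-style augmentation."""
    n_out = int(len(samples) / factor)
    # Linear interpolation onto a stretched/compressed time axis.
    idx = np.linspace(0, len(samples) - 1, n_out)
    return np.interp(idx, np.arange(len(samples)), samples)

# Toy 1-second, 16 kHz sine tone as a stand-in for real speech.
wave = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000)
slow = speed_perturb(wave, 0.9)   # ~11% longer utterance
fast = speed_perturb(wave, 1.1)   # ~9% shorter utterance
```

Each perturbed copy is typically added to the training set alongside the original audio.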
1 code implementation • 19 Aug 2020 • Katri Leino, Juho Leinonen, Mittul Singh, Sami Virpioja, Mikko Kurimo
Using this corpus, we also construct a retrieval-based evaluation task for Finnish chatbot development.
no code implementations • 6 Aug 2020 • Tamás Grósz, Mittul Singh, Sudarsana Reddy Kadiri, Hemant Kathania, Mikko Kurimo
On ComParE 2020 tasks, we investigate applying an ensemble of E2E models for robust performance and developing task-specific modifications for each task.
no code implementations • LREC 2020 • Mittul Singh, Peter Smit, Sami Virpioja, Mikko Kurimo
We, however, show that for character-based NNLMs, only pretraining with a related language improves the ASR performance, and using an unrelated language may deteriorate it.
Automatic Speech Recognition (ASR) +3
1 code implementation • 28 May 2020 • Mittul Singh, Sami Virpioja, Peter Smit, Mikko Kurimo
On these tasks, interpolating the baseline RNNLM approximation and a conventional LM outperforms the conventional LM in terms of the Maximum Term Weighted Value for single-character subwords.
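The interpolation referred to here is, in the standard formulation, a per-token linear mixture of two language-model probabilities. A minimal sketch with toy unigram distributions (the function and weight `lam` are illustrative assumptions, not the paper's exact setup):

```python
import math

def interp_logprob(tokens, p_rnn, p_ngram, lam=0.5):
    """Log-probability of a token sequence under a linear interpolation of
    two (toy, unigram) language models: lam * p_rnn + (1 - lam) * p_ngram."""
    total = 0.0
    for t in tokens:
        # Mix in probability space, accumulate in log space.
        total += math.log(lam * p_rnn[t] + (1.0 - lam) * p_ngram[t])
    return total
```

In practice the two models would be an RNNLM approximation and a conventional n-gram LM, and `lam` would be tuned on held-out data.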
no code implementations • NAACL 2019 • Debjit Paul, Mittul Singh, Michael A. Hedderich, Dietrich Klakow
In our experiments on Chunking and NER, this approach performs more robustly than the baselines.
no code implementations • EMNLP 2016 • Youssef Oualil, Mittul Singh, Clayton Greenberg, Dietrich Klakow
The goal of language modeling techniques is to capture the statistical and structural properties of natural languages from training corpora.
no code implementations • 23 Mar 2017 • Youssef Oualil, Clayton Greenberg, Mittul Singh, Dietrich Klakow
Feedforward Neural Network (FNN)-based language models estimate the probability of the next word based on the history of the last N words, whereas Recurrent Neural Networks (RNN) perform the same task based only on the last word and some context information that cycles in the network.
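The contrast described above is between a fixed-size input (the last N word embeddings, for an FNN) and a recurrent hidden state updated one word at a time (for an RNN). A minimal sketch with random toy weights, assuming tanh hidden layers and a softmax output (dimensions and names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
V, D, H, N = 10, 8, 8, 3          # vocab size, embed dim, hidden dim, FNN history
E = rng.normal(size=(V, D))       # shared word-embedding table
W_f = rng.normal(size=(H, N * D)); U_f = rng.normal(size=(V, H))   # FNN weights
W_x = rng.normal(size=(H, D)); W_h = rng.normal(size=(H, H))       # RNN weights
U_r = rng.normal(size=(V, H))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def fnn_step(history):
    """FNN LM: next-word distribution from the last N word embeddings."""
    x = np.concatenate([E[w] for w in history[-N:]])   # fixed-size input
    return softmax(U_f @ np.tanh(W_f @ x))

def rnn_step(word, h_prev):
    """RNN LM: next-word distribution from the last word plus recurrent context."""
    h = np.tanh(W_x @ E[word] + W_h @ h_prev)
    return softmax(U_r @ h), h
```

The FNN must truncate context to N words; the RNN carries arbitrarily long context through `h`.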
no code implementations • COLING 2016 • Mittul Singh, Clayton Greenberg, Youssef Oualil, Dietrich Klakow
We augmented pre-trained word embeddings with these novel embeddings and evaluated on a rare word similarity task, obtaining up to a threefold improvement in correlation over the original set of embeddings.
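A common way to augment pre-trained embeddings is to concatenate an auxiliary vector onto each word's original vector, zero-padding words the auxiliary set does not cover. A minimal sketch under that assumption (the helper name is hypothetical; the paper does not specify this exact combination scheme here):

```python
import numpy as np

def augment(pretrained: dict, novel: dict, novel_dim: int) -> dict:
    """Concatenate an auxiliary embedding onto each pretrained vector.
    Words missing from the novel set get a zero pad so dimensions match."""
    out = {}
    for word, vec in pretrained.items():
        extra = novel.get(word, np.zeros(novel_dim))
        out[word] = np.concatenate([vec, extra])
    return out
```

The augmented vectors can then be scored on a word-similarity benchmark with, e.g., cosine similarity.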
no code implementations • 6 Jan 2014 • Benjamin Roth, Tassilo Barth, Michael Wiegand, Mittul Singh, Dietrich Klakow
In the TAC KBP 2013 English Slotfilling evaluation, the submitted main run of the LSV RelationFactory system achieved the top-ranked F1-score of 37.3%.