Search Results for author: David Nahamoo

Found 5 papers, 0 papers with code

CNNBiF: CNN-based Bigram Features for Named Entity Recognition

no code implementations Findings (EMNLP) 2021 Chul Sung, Vaibhava Goel, Etienne Marcheret, Steven Rennie, David Nahamoo

More importantly, our fine-tuned CoNLL2003 model displays significant gains in generalization to out-of-domain datasets: on the OntoNotes subset we achieve an F1 of 72.67, which is 0.49 points absolute better than the baseline, and on the WNUT16 set an F1 of 68.22, a gain of 0.48 points.

Named Entity Recognition

Unsupervised Adaptation of Question Answering Systems via Generative Self-training

no code implementations EMNLP 2020 Steven Rennie, Etienne Marcheret, Neil Mallinar, David Nahamoo, Vaibhava Goel

Nevertheless, additional pre-training closer to the end-task, such as training on synthetic QA pairs, has been shown to improve performance.

Question Answering

Quantized-Dialog Language Model for Goal-Oriented Conversational Systems

no code implementations 26 Dec 2018 R. Chulaka Gunasekara, David Nahamoo, Lazaros C. Polymenakos, Jatin Ganhotra, Kshitij P. Fadnis

The key idea is to quantize the dialog space into clusters and create a language model across the clusters, thus allowing for an accurate choice of the next utterance in the conversation.

Dialog Learning Goal-Oriented Dialog
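The cluster-then-model idea in the snippet above can be sketched roughly as follows. This is a toy illustration, not the paper's implementation: the quantizer here groups utterances by their first word purely for demonstration (the paper clusters the dialog space with a learned quantization), and the bigram counts over clusters stand in for the cross-cluster language model.

```python
from collections import Counter, defaultdict

# Toy "quantizer": map an utterance to a cluster ID. Grouping by the
# first word is an illustrative assumption; a real system would use
# embedding-based clustering of the dialog space.
def quantize(utterance: str) -> str:
    return utterance.lower().split()[0]

def train_cluster_bigram(dialogs):
    """Count cluster-to-cluster transitions across consecutive turns,
    and remember which utterances fall in each cluster."""
    transitions = defaultdict(Counter)
    cluster_members = defaultdict(list)
    for dialog in dialogs:
        clusters = [quantize(u) for u in dialog]
        for u, c in zip(dialog, clusters):
            cluster_members[c].append(u)
        for prev, nxt in zip(clusters, clusters[1:]):
            transitions[prev][nxt] += 1
    return transitions, cluster_members

def next_utterance(last_utterance, transitions, cluster_members):
    """Pick the most likely next cluster under the bigram model,
    then return a representative utterance from that cluster."""
    c = quantize(last_utterance)
    if not transitions[c]:
        return None
    best_cluster = transitions[c].most_common(1)[0][0]
    return Counter(cluster_members[best_cluster]).most_common(1)[0][0]

dialogs = [
    ["hello there", "what can I do for you", "book a table", "sure, for how many"],
    ["hello friend", "what brings you here", "book a flight", "sure, where to"],
]
transitions, members = train_cluster_bigram(dialogs)
print(next_utterance("hello", transitions, members))
```

Quantizing to clusters shrinks the space the language model must cover, so even sparse goal-oriented dialog data yields usable transition statistics for choosing the next utterance.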

Direct Acoustics-to-Word Models for English Conversational Speech Recognition

no code implementations 22 Mar 2017 Kartik Audhkhasi, Bhuvana Ramabhadran, George Saon, Michael Picheny, David Nahamoo

Our CTC word model achieves a word error rate of 13.0%/18.8% on the Hub5-2000 Switchboard/CallHome test sets without any LM or decoder, compared with 9.6%/16.0% for phone-based CTC with a 4-gram LM.

Automatic Speech Recognition English Conversational Speech Recognition
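The word error rate figures quoted above are word-level Levenshtein edit distance (substitutions, insertions, deletions) divided by reference length. A minimal sketch of the metric itself, not of the paper's scoring pipeline:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # deleting i reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # inserting j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / len(ref)

# One deleted word out of six reference words -> WER of 1/6.
print(wer("the cat sat on the mat", "the cat sat on mat"))
```

Note that WER can exceed 100% when the hypothesis contains many insertions, which is why scoring tools report the error components separately as well.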
