1 code implementation • ACL 2021 • Karthik Ganesan, Pakhi Bamdev, Jaivarsan B, Amresh Venugopal, Abhinav Tushar
In our work, we test our hypothesis by using concatenated N-best ASR alternatives as the input to transformer encoder models, namely BERT and XLM-RoBERTa, and achieve performance on par with the prior state-of-the-art model on the DSTC2 dataset.
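The input construction described above can be sketched in a few lines: the top-N ASR hypotheses are joined into a single string before being fed to the encoder. This is a minimal illustration, not the paper's exact recipe; the separator token and the choice of N=3 are assumptions.

```python
# Sketch: concatenate N-best ASR alternatives into one encoder input string.
# The " [SEP] " separator and n=3 are illustrative assumptions; the actual
# tokenization would be handled by the BERT / XLM-RoBERTa tokenizer.

def concat_nbest(hypotheses, sep=" [SEP] ", n=3):
    """Join the top-n ASR hypotheses into a single input string."""
    return sep.join(hypotheses[:n])

nbest = [
    "book a table for two",   # top ASR hypothesis
    "book a table for too",   # alternative 1
    "look a table for two",   # alternative 2
]
print(concat_nbest(nbest))
# book a table for two [SEP] book a table for too [SEP] look a table for two
```

Feeding all alternatives at once lets the encoder's self-attention weigh evidence across hypotheses rather than committing to a single, possibly erroneous, transcript.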
no code implementations • 5 Nov 2015 • Abhinav Tushar, Abhinav Dahiya
The development of plot or story in a novel is reflected in its content and the words used.
no code implementations • 3 May 2015 • Abhinav Tushar
This paper proposes an architecture for deep neural networks in which hidden-layer branches learn targets lower in the hierarchy than the final-layer targets.
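The branching idea can be sketched as a forward pass in which an auxiliary head reads an intermediate layer and predicts a coarser (lower-hierarchy) target, while the main path continues to the final-layer target. All layer sizes, the softmax heads, and the 3-superclass / 10-class split below are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

# Sketch of a network with a hidden-layer branch: the auxiliary head
# predicts a coarse target from the shared hidden layer, the main head
# predicts the final fine-grained target. Weights are random; this only
# demonstrates the architecture's data flow, not a trained model.

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Shared trunk: input (8 features) -> hidden (16 units)
W1, b1 = rng.normal(size=(8, 16)), np.zeros(16)
# Auxiliary branch off the hidden layer: coarse target (3 superclasses, assumed)
Wa, ba = rng.normal(size=(16, 3)), np.zeros(3)
# Main path: hidden -> hidden2 -> final target (10 classes, assumed)
W2, b2 = rng.normal(size=(16, 16)), np.zeros(16)
Wf, bf = rng.normal(size=(16, 10)), np.zeros(10)

def forward(x):
    h1 = relu(x @ W1 + b1)
    aux = softmax(h1 @ Wa + ba)      # branch output: lower-hierarchy target
    h2 = relu(h1 @ W2 + b2)
    final = softmax(h2 @ Wf + bf)    # final-layer target
    return aux, final

aux, final = forward(rng.normal(size=(4, 8)))
print(aux.shape, final.shape)  # (4, 3) (4, 10)
```

During training, a loss on the auxiliary output would supervise the intermediate representation with the coarser labels while the usual loss supervises the final output.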