no code implementations • ACL 2020 • Vinay Pramish, Dipti Misra Sharma
After training a Neural Machine Translation (NMT) baseline system, we observe that the outputs of intermediate training iterations achieve an oracle score up to 1.01 BLEU points higher than the final iteration of the trained system. We propose a ranking mechanism that relies solely on the decoder's ability to generate distinct tokens, without using any language model or additional data.
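The distinctness-based ranking idea can be sketched as follows; this is a minimal illustration assuming a simple type/token ratio stands in for the paper's decoder-based distinctness measure, which is not specified in this excerpt:

```python
def distinct_token_score(tokens):
    """Fraction of distinct tokens in a candidate translation.
    Assumption: type/token ratio as a stand-in for the paper's
    decoder-based distinctness measure."""
    return len(set(tokens)) / len(tokens) if tokens else 0.0

def rank_candidates(candidates):
    """Rank candidate outputs (token lists) by distinctness, highest first.
    No language model or external data is consulted."""
    return sorted(candidates, key=distinct_token_score, reverse=True)

ranked = rank_candidates([["the", "the", "cat"], ["the", "cat", "sat"]])
```

The candidate with no repeated tokens ranks first, since its type/token ratio is 1.0.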
no code implementations • SEMEVAL 2019 • Adithya Avvaru, Anupam Pandey
The strengths of XGBoost, a scalable gradient tree boosting algorithm, and Skip-Thought Vectors, a distributed sentence encoder, have not yet been explored by the cQA research community.
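The combination described, gradient tree boosting over sentence embeddings, can be sketched as below. This is an assumption-laden illustration: scikit-learn's `GradientBoostingClassifier` stands in for XGBoost (same family of gradient tree boosting), and random vectors stand in for Skip-Thought sentence embeddings, which in practice come from a trained encoder:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# Toy stand-ins for Skip-Thought sentence embeddings (assumption:
# in practice these would be high-dimensional vectors from a trained
# encoder, one per question-answer pair).
rng = np.random.default_rng(0)
good = rng.normal(loc=1.0, size=(50, 8))    # embeddings of relevant answers
bad = rng.normal(loc=-1.0, size=(50, 8))    # embeddings of irrelevant answers
X = np.vstack([good, bad])
y = np.array([1] * 50 + [0] * 50)

# GradientBoostingClassifier stands in for XGBoost here; swap in
# xgboost.XGBClassifier if that library is installed.
clf = GradientBoostingClassifier(n_estimators=50).fit(X, y)
acc = clf.score(X, y)
```

With such well-separated toy classes, the boosted trees fit the training set almost perfectly; the point is only the pipeline shape: embed sentences, then classify with boosted trees.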
no code implementations • 19 Jan 2019 • M. R. Akram, C. P. Singhabahu, M. S. M. Saad, Deleepa P., Anupiya Nugaliyadde, Yashas Mallawarachchi
The paper presents an approach to building a question answering system that can process the information in a large dataset and allows the user to gain knowledge from it by asking questions in natural language.
no code implementations • WS 2018 • Abdul Khan, Subhadarshi Panda, Jia Xu, Lampros Flokas
Furthermore, we applied ensemble learning to models saved at intermediate training epochs and achieved an improvement of 4.02 BLEU points over the baseline.
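Ensembling checkpoints from intermediate epochs can be sketched as averaging their next-token distributions; probability averaging is an assumption here, as this excerpt does not specify the exact combination scheme:

```python
import numpy as np

def ensemble_predict(checkpoint_probs):
    """Average next-token distributions from models saved at
    intermediate training epochs, then pick the argmax.
    checkpoint_probs: list of (vocab_size,) probability arrays,
    one per checkpoint. (Assumption: simple probability averaging.)"""
    avg = np.mean(np.stack(checkpoint_probs), axis=0)
    return int(np.argmax(avg)), avg

# Three checkpoints disagree individually, but the averaged
# distribution favours token 1.
p1 = np.array([0.40, 0.35, 0.25])
p2 = np.array([0.30, 0.45, 0.25])
p3 = np.array([0.30, 0.44, 0.26])
tok, avg = ensemble_predict([p1, p2, p3])
```

In an NMT decoder this averaging would run at every decoding step, typically inside beam search rather than greedy argmax.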
1 code implementation • ACL 2018 • Akshay Chaturvedi, Onkar Pandit, Utpal Garain
The task of Question Answering is at the very core of machine comprehension.
no code implementations • ACL 2018 • Gaurav Pandey, Danish Contractor, Vineet Kumar, Sachindra Joshi
In this paper we present the Exemplar Encoder-Decoder network (EED), a novel conversation model that learns to utilize *similar* examples from training data to generate responses.
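The retrieval step this model relies on can be sketched as below. The similarity measure (token-set Jaccard) is an assumption for illustration; only the retrieval is shown, whereas in EED the retrieved exemplar responses would go on to condition the decoder:

```python
def jaccard(a, b):
    """Token-set Jaccard similarity between two utterances
    (an illustrative stand-in for the model's similarity measure)."""
    sa, sb = set(a.split()), set(b.split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def retrieve_exemplars(context, training_pairs, k=2):
    """Return the k training (context, response) pairs whose contexts
    are most similar to the input context."""
    return sorted(training_pairs,
                  key=lambda p: jaccard(context, p[0]),
                  reverse=True)[:k]

pairs = [("how are you", "fine thanks"),
         ("what time is it", "it is noon"),
         ("how are you doing", "pretty well")]
best = retrieve_exemplars("how are you today", pairs, k=1)
```

For the query "how are you today", the closest training context is "how are you", so its response "fine thanks" is retrieved as the exemplar.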
no code implementations • WS 2018 • Julia Parish-Morris, Evangelos Sariyanidi, Casey Zampella, G. Keith Bartley, Emily Ferguson, Ashley A. Pallathra, Leila Bateman, Samantha Plate, Meredith Cola, Juhi Pandey, Edward S. Brodkin, Robert T. Schultz, Birkan Tunç
Computational studies of language in ASD provide support for the existence of an underlying dimension of restriction that emerges during a conversation.
no code implementations • LREC 2018 • Ayushi Pandey, Brij Mohan Lal Srivastava, Rohit Kumar, Bhanu Teja Nellore, Kasi Sai Teja, Suryakanth V. Gangashetty
Automatic Speech Recognition (ASR)
no code implementations • IJCNLP 2017 • Prakhar Pandey, Vikram Pudi, Manish Shrivastava
Word embeddings learned from a text corpus can be improved by injecting knowledge from external resources, while also specializing them for similarity or relatedness.
no code implementations • ACL 2017 • Abhisek Chakrabarty, Onkar Arun Pandit, Utpal Garain
We find that, except for Bengali, the proposed method outperforms Lemming and Morfette on all the other languages.
no code implementations • EACL 2017 • Harshit Pande
We present a novel, unsupervised, and distance measure agnostic method for search space reduction in spell correction using neural character embeddings.
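The idea of pruning the dictionary with cheap character-level representations before any expensive distance computation can be sketched as follows. Character-bigram count vectors stand in for the paper's learned neural character embeddings, which is an assumption for illustration:

```python
from collections import Counter
import math

def char_vector(word, n=2):
    """Bag-of-character-n-grams vector: a cheap stand-in for the
    learned neural character embeddings used in the paper."""
    padded = f"#{word}#"
    return Counter(padded[i:i + n] for i in range(len(padded) - n + 1))

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def reduce_search_space(query, dictionary, keep=2):
    """Keep only the `keep` dictionary words closest to the misspelling
    in character space; any (distance-measure-agnostic) expensive
    comparison then runs on this reduced candidate set only."""
    q = char_vector(query)
    return sorted(dictionary,
                  key=lambda w: cosine(q, char_vector(w)),
                  reverse=True)[:keep]

cands = reduce_search_space("helo", ["hello", "help", "world", "banana"], keep=2)
```

For the misspelling "helo", only "hello" and "help" survive the pruning, so a full edit-distance pass never touches "world" or "banana". This matches the paper's framing: the reduction step is independent of whichever distance measure runs afterwards.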
no code implementations • WS 2016 • Pitambar Behera, Neha Mourya, Vandana Pandey
Furthermore, as far as the methodology is concerned, we have adhered to Dorr's Lexical Conceptual Structure for the resolution of divergences.