no code implementations • 5 Jan 2022 • Avi Chawla
Reading Comprehension (RC) is the task of answering a question based on a given passage or set of passages.
no code implementations • 30 Nov 2021 • Avi Chawla, Nidhi Mulay, Vikas Bishnoi, Gaurav Dhama, Anil Kumar Singh
The advent of contextual models such as BERT, ELMo, and Flair has significantly improved representation learning for words. Despite this progress, the NLP community has lacked a systematic comparative study of the contextualization power of such architectures.
1 code implementation • 23 Sep 2019 • Gregor Wiedemann, Steffen Remus, Avi Chawla, Chris Biemann
Since vectors of the same word type can vary depending on the respective context, they implicitly provide a model for word sense disambiguation (WSD).
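The idea above — that context-dependent vectors implicitly encode word senses — is often operationalized as nearest-neighbor classification over contextual embeddings. The sketch below illustrates this with toy vectors and hypothetical sense labels; in practice the vectors would come from a model such as BERT, ELMo, or Flair, and the exact procedure here is an illustration, not the paper's implementation.

```python
import numpy as np

# Hypothetical contextual vectors for occurrences of the word "bank"
# in sense-annotated training sentences. In a real system each vector
# would be produced by a contextual encoder (e.g. BERT) for that token.
train_vectors = np.array([
    [0.9, 0.1, 0.0],   # "deposit money at the bank"      -> sense: finance
    [0.8, 0.2, 0.1],   # "the bank approved the loan"     -> sense: finance
    [0.1, 0.9, 0.2],   # "sat on the river bank"          -> sense: river
    [0.0, 0.8, 0.3],   # "the bank of the stream flooded" -> sense: river
])
train_senses = ["finance", "finance", "river", "river"]

def disambiguate(query_vec, vectors, senses):
    """Assign the sense of the nearest annotated context (cosine similarity)."""
    q = query_vec / np.linalg.norm(query_vec)
    v = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
    return senses[int(np.argmax(v @ q))]

# A new occurrence of "bank" whose (hypothetical) contextual vector
# lies close to the financial training examples.
query = np.array([0.85, 0.15, 0.05])
print(disambiguate(query, train_vectors, train_senses))  # finance
```

Because the same word type gets different vectors in different contexts, no explicit sense inventory is needed at encoding time; the annotated neighbors supply the sense labels.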
1 code implementation • WS 2018 • Shreyansh Singh, Avi Chawla, Ayush Sharma, Anil Kumar Singh
This paper describes our submission system for the Shallow Track of Surface Realization Shared Task 2018 (SRST'18).