no code implementations • 31 Dec 2023 • Shreyas Verma, Manoj Parmar, Palash Choudhary, Sanchita Porwal
Answering questions using pre-trained language models (LMs) and knowledge graphs (KGs) presents challenges in identifying relevant knowledge and performing joint reasoning. We compared task-fine-tuned LMs with the previously published QAGNN method on the question-answering (QA) objective, and further measured the impact of additional factual context on QAGNN's performance.
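As a rough, hypothetical sketch of this setup (not the paper's code): an LM scores each answer choice, optionally with verbalized KG facts prepended as additional factual context. The model choice, prompt construction, and example facts below are illustrative assumptions.

```python
# Hypothetical sketch: an LM scores each answer choice; retrieved facts
# (e.g. verbalized KG triples) can be prepended as extra factual context.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
# Single-logit head used as a choice scorer; in practice it would be fine-tuned.
model = AutoModelForSequenceClassification.from_pretrained("roberta-base", num_labels=1)
model.eval()

def score_choice(question: str, choice: str, facts=None) -> float:
    context = (" ".join(facts) + " ") if facts else ""
    enc = tokenizer(context + question, choice, return_tensors="pt", truncation=True)
    with torch.no_grad():
        return model(**enc).logits.squeeze().item()

question = "What do people typically use to cut paper?"
choices = ["scissors", "a hammer", "a spoon"]
kg_facts = ["scissors are used for cutting paper"]  # illustrative fact, not from the paper

scores = [score_choice(question, c, facts=kg_facts) for c in choices]
prediction = choices[max(range(len(choices)), key=scores.__getitem__)]
```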
no code implementations • 25 Dec 2023 • Shreyas Verma, Kien Tran, Yusuf Ali, Guangyu Min
Epistemic neural networks (ENNs) are small networks attached to large, frozen models to improve the base model's joint predictive distributions and uncertainty estimates.
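A minimal, hypothetical sketch of this idea in PyTorch (not the paper's implementation): a small "epinet" consumes the frozen base model's features together with a sampled epistemic index and produces a logit correction; sampling several indices yields a joint predictive distribution whose spread serves as an uncertainty estimate. All dimensions, names, and the stand-in tensors below are assumptions.

```python
import torch
import torch.nn as nn

class Epinet(nn.Module):
    """Sketch of an ENN-style head: a small MLP over [features, epistemic index]."""
    def __init__(self, feature_dim: int, index_dim: int, num_classes: int, hidden: int = 64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feature_dim + index_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, num_classes),
        )

    def forward(self, features: torch.Tensor, z: torch.Tensor) -> torch.Tensor:
        return self.mlp(torch.cat([features, z], dim=-1))

# Usage sketch: the base model is large and frozen; only the epinet is trained.
feature_dim, num_classes, index_dim = 768, 3, 8
base_logits = torch.randn(4, num_classes)     # stand-in for frozen model logits
base_features = torch.randn(4, feature_dim)   # stand-in for frozen model features
epinet = Epinet(feature_dim, index_dim, num_classes)

samples = []
for _ in range(10):                           # sample several epistemic indices
    z = torch.randn(4, index_dim)
    samples.append(base_logits + epinet(base_features.detach(), z))
probs = torch.stack(samples).softmax(dim=-1)  # [num_samples, batch, classes]
uncertainty = probs.var(dim=0).mean(dim=-1)   # disagreement across indices
```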
1 code implementation • 27 Oct 2023 • Himanshu Gupta, Kevin Scaria, Ujjwala Anantheswaran, Shreyas Verma, Mihir Parmar, Saurabh Arjun Sawant, Chitta Baral, Swaroop Mishra
Finally, when pre-finetuned on our synthetic SuperGLUE dataset, T5-3B yields impressive results on the OpenLLM leaderboard, surpassing the model trained on the Self-Instruct dataset by 4.14 percentage points.
1 code implementation • 16 Sep 2021 • Himanshu Gupta, Shreyas Verma, Santosh Mashetty, Swaroop Mishra
In this paper, we introduce CONTEXT-NER, a task that aims to generate the relevant context for entities in a sentence, where the context is a phrase describing the entity but not necessarily present in the sentence.
Ranked #1 on CONTEXT-NER on the EDGAR10-Q dataset
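As a purely illustrative sketch of the task format (not the released code): a seq2seq model is given a sentence and an entity mention and asked to generate a short phrase describing that entity. The prompt template, model choice, and example sentence are assumptions, not taken from the paper.

```python
# Hypothetical sketch of the CONTEXT-NER setup: given a sentence and an entity
# mention, a seq2seq model generates a short descriptive phrase for the entity.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("t5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-base")

sentence = "Revenue was $4.2 million for the quarter ended March 31, 2021."
entity = "$4.2 million"
prompt = f"generate context: sentence: {sentence} entity: {entity}"

inputs = tokenizer(prompt, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=16)
context_phrase = tokenizer.decode(output_ids[0], skip_special_tokens=True)
# Desired style of output (illustrative): "revenue for the quarter"
```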