no code implementations • 30 Mar 2024 • Parag Pravin Dakle, Alolika Gon, Sihan Zha, Liang Wang, SaiKrishna Rallabandi, Preethi Raghavan
For the impact type classification task, our XLM-RoBERTa model, fine-tuned with a custom strategy, ranked first for the English language.
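As context for the entry above, here is a minimal sketch of fine-tuning XLM-RoBERTa for a sequence classification task with Hugging Face Transformers. The file names, number of labels, and hyperparameters are illustrative placeholders, not the paper's custom fine-tuning strategy.

```python
# Generic XLM-RoBERTa fine-tuning sketch (not the paper's exact recipe).
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)
from datasets import load_dataset

MODEL_NAME = "xlm-roberta-base"
NUM_LABELS = 4  # assumed number of impact-type classes

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL_NAME, num_labels=NUM_LABELS)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=256)

# "train.csv" / "dev.csv" are placeholders for files with "text" and "label" columns.
dataset = load_dataset("csv", data_files={"train": "train.csv",
                                          "validation": "dev.csv"})
dataset = dataset.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="xlmr-impact-type",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    num_train_epochs=3,
)

trainer = Trainer(model=model, args=args,
                  train_dataset=dataset["train"],
                  eval_dataset=dataset["validation"],
                  tokenizer=tokenizer)
trainer.train()
```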
1 code implementation • 27 Feb 2024 • Parker Glenn, Parag Pravin Dakle, Liang Wang, Preethi Raghavan
Many existing end-to-end systems for hybrid question answering tasks boil down to a "prompt-and-pray" paradigm, in which the user has limited control over, and little insight into, the intermediate reasoning steps used to reach the final result.
no code implementations • 2 Dec 2023 • Syed-Amad Hussain, Parag Pravin Dakle, SaiKrishna Rallabandi, Preethi Raghavan
This study delves into the capabilities and limitations of Large Language Models (LLMs) in the challenging domain of conditional question-answering.
no code implementations • 15 Sep 2023 • Haochen Liu, Sai Krishna Rallabandi, Yijing Wu, Parag Pravin Dakle, Preethi Raghavan
Self-training has recently emerged as an economical and efficient technique for developing sentiment analysis models by leveraging a small amount of labeled data and a large amount of unlabeled data.
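To make the self-training setup above concrete, here is a minimal, generic pseudo-labeling loop for sentiment classification. The TF-IDF plus logistic regression classifier and the confidence threshold are stand-ins for illustration; the paper's actual models and selection strategy may differ.

```python
# Generic self-training (pseudo-labeling) loop: train on labeled data,
# pseudo-label the unlabeled pool, keep high-confidence predictions,
# and retrain on the enlarged set.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def self_train(labeled_texts, labels, unlabeled_texts,
               rounds=3, confidence=0.9):
    texts, y = list(labeled_texts), list(labels)
    pool = list(unlabeled_texts)
    for _ in range(rounds):
        vec = TfidfVectorizer().fit(texts + pool)
        clf = LogisticRegression(max_iter=1000).fit(vec.transform(texts), y)
        if not pool:
            break
        probs = clf.predict_proba(vec.transform(pool))
        keep = probs.max(axis=1) >= confidence  # high-confidence pseudo-labels
        new_texts = [t for t, k in zip(pool, keep) if k]
        new_labels = probs.argmax(axis=1)[keep]
        # Move confidently pseudo-labeled examples into the training set.
        texts += new_texts
        y += [clf.classes_[i] for i in new_labels]
        pool = [t for t, k in zip(pool, keep) if not k]
    return clf, vec
```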
1 code implementation • 31 May 2023 • Parker Glenn, Parag Pravin Dakle, Preethi Raghavan
Converting natural language to SQL queries poses several semantic and syntactic challenges.
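As a purely illustrative sketch of the syntactic side of this problem (not the paper's method), one common sanity check in text-to-SQL pipelines is to try executing a candidate query against the target database and discard candidates that fail.

```python
# Illustrative only: filter candidate SQL queries by attempting execution.
import sqlite3

def is_executable(sql: str, db_path: str) -> bool:
    """Return True if the query parses and runs against the database."""
    try:
        with sqlite3.connect(db_path) as conn:
            conn.execute(sql).fetchmany(1)
        return True
    except sqlite3.Error:
        return False

# Rerank model outputs and keep the first query that actually executes
# ("company.db" and the queries are hypothetical placeholders).
candidates = ["SELECT name FROM employees WHERE salary > 50000",
              "SELECT name FROM employee WHERE salary > 50000"]
valid = next((q for q in candidates if is_executable(q, "company.db")), None)
```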
1 code implementation • 26 Apr 2023 • Yijing Wu, SaiKrishna Rallabandi, Ravisutha Srinivasamurthy, Parag Pravin Dakle, Alolika Gon, Preethi Raghavan
Spoken question answering (SQA) systems are critical for digital assistants and other real-world use cases, but evaluating their performance is a challenge due to the importance of human-spoken questions.
no code implementations • 27 Nov 2022 • Parag Pravin Dakle, SaiKrishna Rallabandi, Preethi Raghavan
We view the landscape of large language models (LLMs) through the lens of the recently released BLOOM model to understand the performance of BLOOM and other decoder-only LLMs compared to BERT-style encoder-only models.
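A minimal sketch of the kind of decoder-only evaluation such a comparison involves: zero-shot prompting a small BLOOM checkpoint for a classification-style task, in contrast to fine-tuning an encoder-only model such as BERT. The checkpoint, prompt, and label set below are assumptions for illustration, not the paper's exact setup.

```python
# Zero-shot prompting of a small BLOOM checkpoint (illustrative setup).
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m")
model = AutoModelForCausalLM.from_pretrained("bigscience/bloom-560m")

prompt = ("Review: The service was slow and the food was cold.\n"
          "Sentiment (positive or negative):")
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=3, do_sample=False)
# Decode only the newly generated tokens that follow the prompt.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:],
                       skip_special_tokens=True))
```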
1 code implementation • COLING 2020 • Parag Pravin Dakle, Dan I. Moldovan
We present the first large-scale corpus for entity resolution in email conversations (CEREC).
1 code implementation • LREC 2020 • Takshak Desai, Parag Pravin Dakle, Dan Moldovan
This paper describes an accurate framework for carrying out multi-lingual discourse segmentation with BERT (Devlin et al., 2019).
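For illustration, a minimal sketch of framing discourse segmentation as token classification with a multilingual BERT encoder, where each token is labeled as starting a segment or not. The two-label scheme and checkpoint are assumptions, not necessarily the framework's exact configuration, and the classification head below is untrained, so its predictions are random until fine-tuned on segmentation data.

```python
# Discourse segmentation as token classification (illustrative sketch).
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-multilingual-cased", num_labels=2)  # 0 = inside, 1 = segment boundary

sentence = "Although it was raining, we went for a walk."
inputs = tokenizer(sentence, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits              # shape: (1, seq_len, 2)
boundaries = logits.argmax(dim=-1)[0].tolist()   # predicted boundary labels
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
print(list(zip(tokens, boundaries)))
```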
no code implementations • LREC 2020 • Parag Pravin Dakle, Takshak Desai, Dan Moldovan
This paper investigates the problem of entity resolution for email conversations and presents a seed annotated corpus of email threads labeled with entity coreference chains.