1 code implementation • 27 Jun 2024 • Ritam Dutt, Zhen Wu, Kelly Shi, Divyanshu Sheth, Prakhar Gupta, Carolyn Penstein Rose
We present a generalizable classification approach that leverages Large Language Models (LLMs) to facilitate the detection of implicitly encoded social meaning in conversations.
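A minimal sketch of how such an LLM-backed classifier could be structured, assuming a zero-shot prompting setup; the client, model name, label set, and prompt wording are illustrative assumptions, not the paper's actual design:

```python
# Hypothetical sketch: zero-shot classification of an implicit social
# meaning (e.g., politeness) in a conversation turn via an LLM.
# The model name and label set below are illustrative assumptions.
from openai import OpenAI

client = OpenAI()
LABELS = ["polite", "impolite", "neutral"]  # assumed label set

def classify_turn(context: str, turn: str) -> str:
    prompt = (
        "Given the conversation context and the target turn, label the "
        f"turn as one of {LABELS}.\n"
        f"Context: {context}\nTurn: {turn}\nLabel:"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    answer = response.choices[0].message.content.strip().lower()
    return answer if answer in LABELS else "neutral"  # fall back on parse failure
```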
no code implementations • 19 Feb 2024 • Kundan Krishna, Sanjana Ramprasad, Prakhar Gupta, Byron C. Wallace, Zachary C. Lipton, Jeffrey P. Bigham
We present GenAudit -- a tool intended to assist fact-checking LLM responses for document-grounded tasks.
1 code implementation • 23 May 2023 • Kundan Krishna, Prakhar Gupta, Sanjana Ramprasad, Byron C. Wallace, Jeffrey P. Bigham, Zachary C. Lipton
While the NLP community has produced numerous summarization benchmarks, none provide the rich annotations required to simultaneously address many important problems related to control and reliability.
3 code implementations • NeurIPS 2023 • Aman Madaan, Niket Tandon, Prakhar Gupta, Skyler Hallinan, Luyu Gao, Sarah Wiegreffe, Uri Alon, Nouha Dziri, Shrimai Prabhumoye, Yiming Yang, Shashank Gupta, Bodhisattwa Prasad Majumder, Katherine Hermann, Sean Welleck, Amir Yazdanbakhsh, Peter Clark
Motivated by how humans refine their written text, we introduce Self-Refine, an approach for improving initial outputs from LLMs through iterative feedback and refinement.
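The core loop is simple to express; here is a minimal sketch, assuming a generic `llm(prompt)` completion function, with an illustrative stop phrase and iteration cap:

```python
# Minimal sketch of the Self-Refine loop: the same model drafts an output,
# critiques it, and revises it using its own feedback. `llm` is an assumed
# text-completion function; the stop phrase and cap are illustrative.
def self_refine(llm, task_prompt: str, max_iters: int = 3) -> str:
    output = llm(task_prompt)
    for _ in range(max_iters):
        feedback = llm(
            f"Task: {task_prompt}\nOutput: {output}\n"
            "Give concrete feedback on how to improve this output, "
            "or say 'LOOKS GOOD' if no changes are needed."
        )
        if "LOOKS GOOD" in feedback:
            break
        output = llm(
            f"Task: {task_prompt}\nOutput: {output}\n"
            f"Feedback: {feedback}\nRevise the output using the feedback:"
        )
    return output
```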
no code implementations • 2 Feb 2023 • Nicholas Meade, Spandana Gella, Devamanyu Hazarika, Prakhar Gupta, Di Jin, Siva Reddy, Yang Liu, Dilek Hakkani-Tür
For instance, using automatic evaluation, we find that our best fine-tuned baseline generates safe responses to unsafe dialogue contexts from DiaSafety only 4.04% more often than our approach.
no code implementations • 27 Jan 2023 • Jessica Huynh, Cathy Jiao, Prakhar Gupta, Shikib Mehri, Payal Bajaj, Vishrav Chaudhary, Maxine Eskenazi
The paper shows that the choice of datasets used for training a model affects both how well it performs on a task and how the prompt should be structured.
1 code implementation • 20 Dec 2022 • Prakhar Gupta, Yang Liu, Di Jin, Behnam Hedayatnia, Spandana Gella, Sijia Liu, Patrick Lange, Julia Hirschberg, Dilek Hakkani-Tur
These guidelines specify the contexts in which they apply and what should be included in the response, allowing the models to generate responses that are more closely aligned with the developer's expectations and intent.
1 code implementation • 25 May 2022 • Prakhar Gupta, Cathy Jiao, Yi-Ting Yeh, Shikib Mehri, Maxine Eskenazi, Jeffrey P. Bigham
We introduce InstructDial, an instruction tuning framework for dialogue, which consists of a repository of 48 diverse dialogue tasks in a unified text-to-text format created from 59 openly available dialogue datasets.
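An illustrative sketch of casting one dialogue example into a unified text-to-text instruction format, in the spirit of the framework; the field names and instruction wording are assumptions, not InstructDial's actual schema:

```python
# Assumed sketch: convert a dialogue example into an (instruction, input)
# source string and a target string for text-to-text training.
def to_instruction_format(instruction: str, dialogue: list[str], target: str):
    context = "\n".join(f"Speaker {i % 2 + 1}: {u}" for i, u in enumerate(dialogue))
    source = f"Instruction: {instruction}\nInput: {context}\nOutput:"
    return {"source": source, "target": target}

example = to_instruction_format(
    instruction="Generate the next response in the conversation.",
    dialogue=["Hi, how are you?", "Doing well, thanks! You?"],
    target="Pretty good. Any plans for the weekend?",
)
```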
no code implementations • Findings (NAACL) 2022 • Prakhar Gupta, Harsh Jhamtani, Jeffrey P. Bigham
Target-guided response generation enables dialogue systems to smoothly transition a conversation from a dialogue context toward a target sentence.
2 code implementations • ACL 2022 • Prakhar Gupta, Chien-Sheng Wu, Wenhao Liu, Caiming Xiong
Fact-checking is an essential tool to mitigate the spread of misinformation and disinformation.
1 code implementation • Findings (ACL) 2021 • Prakhar Gupta, Yulia Tsvetkov, Jeffrey P. Bigham
Experiments on classification, ranking and evaluation tasks across multiple datasets demonstrate that our approaches outperform strong baselines in providing informative negative examples for training dialogue systems.
1 code implementation • ACL 2021 • Prakhar Gupta, Martin Jaggi
The advent of contextual word embeddings -- representations of words which incorporate semantic and syntactic information from their context -- has led to tremendous improvements on a wide variety of NLP tasks.
1 code implementation • ACL 2021 • Zhuoyuan Mao, Prakhar Gupta, Pei Wang, Chenhui Chu, Martin Jaggi, Sadao Kurohashi
Large-scale models for learning fixed-dimensional cross-lingual sentence representations like LASER (Artetxe and Schwenk, 2019b) lead to significant improvement in performance on downstream tasks.
1 code implementation • NAACL 2021 • Prakhar Gupta, Jeffrey P. Bigham, Yulia Tsvetkov, Amy Pavel
Dialogue systems pretrained with large language models generate locally coherent responses, but lack the fine-grained control over responses necessary to achieve specific goals.
no code implementations • 2 Mar 2020 • Gaurav Verma, Vishwa Vinay, Sahil Bansal, Shashank Oberoi, Makkunda Sharma, Prakhar Gupta
Interactive search sessions often contain multiple queries, where the user submits a reformulated version of the previous query in response to the original results.
2 code implementations • 28 Dec 2019 • Ali Sabet, Prakhar Gupta, Jean-Baptiste Cordonnier, Robert West, Martin Jaggi
Recent advances in cross-lingual word embeddings have primarily relied on mapping-based methods, which project pretrained word embeddings from different languages into a shared space through a linear transformation.
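The standard linear-transformation step in such mapping-based methods is the orthogonal Procrustes solution; a minimal sketch, assuming `X` and `Y` hold row-aligned source- and target-language vectors from a seed bilingual dictionary:

```python
# Sketch of the mapping-based approach: learn an orthogonal linear map W
# that projects source-language embeddings into the target space, using
# a seed dictionary (the orthogonal Procrustes solution).
import numpy as np

def procrustes_map(X: np.ndarray, Y: np.ndarray) -> np.ndarray:
    """Solve W = argmin ||X @ W - Y||_F over orthogonal W."""
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt

# Project all source embeddings into the shared (target) space:
# shared_src = src_embeddings @ procrustes_map(X, Y)
```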
2 code implementations • WS 2019 • Prakhar Gupta, Shikib Mehri, Tiancheng Zhao, Amy Pavel, Maxine Eskenazi, Jeffrey P. Bigham
The aim of this paper is to mitigate the shortcomings of automatic evaluation of open-domain dialog systems through multi-reference evaluation.
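One common way to use multiple references is to score a response against each reference and keep the best match, so that any one valid response is rewarded; a sketch using sentence-level BLEU purely for illustration (the metric choice is an assumption, not necessarily the paper's):

```python
# Sketch: max-over-references scoring for open-domain dialog evaluation.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def multi_ref_score(response: str, references: list[str]) -> float:
    smooth = SmoothingFunction().method1
    hyp = response.split()
    return max(
        sentence_bleu([ref.split()], hyp, smoothing_function=smooth)
        for ref in references
    )
```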
2 code implementations • WS 2019 • Prakhar Gupta, Vinayshekhar Bannihatti Kumar, Mukul Bhutani, Alan W. Black
In this paper, we propose models which generate more diverse and interesting outputs by 1) training models to focus attention on important keyphrases of the story, and 2) promoting generation of non-generic words.
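One illustrative way to "promote non-generic words" is to down-weight high-frequency tokens at decoding time; this is an assumed mechanism for illustration only, not the paper's exact method:

```python
# Assumed sketch: penalize the logits of frequent (generic) tokens before
# sampling, so rarer, more informative tokens gain probability mass.
import numpy as np

def penalize_generic(logits: np.ndarray, token_freqs: np.ndarray,
                     alpha: float = 0.5) -> np.ndarray:
    # Subtract a penalty proportional to log corpus frequency.
    return logits - alpha * np.log1p(token_freqs)
```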
1 code implementation • NAACL 2019 • Prakhar Gupta, Matteo Pagliardini, Martin Jaggi
Pre-trained word vectors are ubiquitous in Natural Language Processing applications.
no code implementations • 1 Nov 2018 • Prakhar Gupta, Gaurush Hiranandani, Harvineet Singh, Branislav Kveton, Zheng Wen, Iftikhar Ahamath Burhanuddin
We assume that the user examines the list of recommended items until attracted by one, clicks it, and does not examine the rest of the items.
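This examination assumption is the cascade click model, which is easy to simulate directly; the attraction probabilities below are illustrative inputs:

```python
# Sketch of the cascade click model: the user scans the ranked list
# top-down, clicks the first attractive item, and examines nothing after it.
import random

def simulate_cascade(attraction_probs: list[float]) -> int | None:
    """Return the index of the clicked item, or None if nothing is clicked."""
    for position, p in enumerate(attraction_probs):
        if random.random() < p:  # user is attracted and clicks
            return position      # examination stops here
    return None                  # user examined every item, clicked none
```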
2 code implementations • LREC 2018 • Edouard Grave, Piotr Bojanowski, Prakhar Gupta, Armand Joulin, Tomas Mikolov
Distributed word representations, or word vectors, have recently been applied to many tasks in natural language processing, leading to state-of-the-art performance.
no code implementations • 10 Nov 2017 • Prakhar Gupta, Shubh Gupta, Ajaykrishnan Jayagopal, Sourav Pal, Ritwik Sinha
However, given the differences in what constitutes a mobile interface and in the usage context of these devices, we postulate that saliency prediction for mobile interface images requires a fresh approach.
5 code implementations • NAACL 2018 • Matteo Pagliardini, Prakhar Gupta, Martin Jaggi
The recent tremendous success of unsupervised word embeddings in a multitude of applications raises the obvious question of whether similar methods could be derived to improve embeddings (i.e., semantic representations) of word sequences as well.
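The underlying idea is to compose word-level vectors into a sentence-level representation; a minimal sketch using simple averaging (Sent2Vec itself learns word and n-gram vectors so that such a composition is a good semantic representation; the lookup table here is an assumed input):

```python
# Sketch: a sentence embedding as the average of its words' embeddings.
import numpy as np

def sentence_embedding(sentence: str, word_vecs: dict[str, np.ndarray],
                       dim: int = 300) -> np.ndarray:
    tokens = sentence.lower().split()
    vecs = [word_vecs[t] for t in tokens if t in word_vecs]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)
```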