no code implementations • 7 Mar 2024 • Ojas Gramopadhye, Saeel Sandeep Nachane, Prateek Chanda, Ganesh Ramakrishnan, Kshitij Sharad Jadhav, Yatin Nandwani, Dinesh Raghu, Sachindra Joshi
In this paper, we propose a modified, subjective version of the MedQA-USMLE dataset to mimic real-life clinical scenarios.
no code implementations • 4 Feb 2024 • Gaurav Pandey, Yatin Nandwani, Tahira Naseem, Mayank Mishra, Guangxuan Xu, Dinesh Raghu, Sachindra Joshi, Asim Munawar, Ramón Fernandez Astudillo
Following the success of Proximal Policy Optimization (PPO) for Reinforcement Learning from Human Feedback (RLHF), new techniques such as Sequence Likelihood Calibration (SLiC) and Direct Preference Optimization (DPO) have been proposed that are offline in nature and use rewards in an indirect manner.
1 code implementation • 20 May 2023 • Yatin Nandwani, Vineet Kumar, Dinesh Raghu, Sachindra Joshi, Luis A. Lastras
PMI quantifies the extent to which the document influences the generated response: a higher PMI indicates a more faithful response.
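The quantity described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: it assumes you already have sequence log-probabilities for the response with and without the document in context (the function name and toy numbers are hypothetical).

```python
import math

def pmi(log_p_response_given_doc: float, log_p_response: float) -> float:
    """Pointwise mutual information between a document and a response,
    computed from sequence log-probabilities:
        PMI = log P(response | document) - log P(response)
    A higher PMI means the document contributed more to the response."""
    return log_p_response_given_doc - log_p_response

# Toy example: conditioning on the document raises the response
# probability from 0.01 to 0.2, so the PMI is positive (faithful);
# if the probability is unchanged, the PMI is zero (ungrounded).
grounded = pmi(math.log(0.2), math.log(0.01))
ungrounded = pmi(math.log(0.01), math.log(0.01))
assert grounded > ungrounded
```

In practice the two log-probabilities would come from the same language model scored once with and once without the document prepended to the context.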
1 code implementation • 17 Oct 2022 • Yatin Nandwani, Rishabh Ranjan, Mausam, Parag Singla
Experiments on several perceptual and symbolic problems that require learning the constraints of an ILP show that our approach performs better and scales much better than purely neural baselines and other state-of-the-art models that require solver-based training.
1 code implementation • 24 Feb 2022 • Kevin Leyton-Brown, Mausam, Yatin Nandwani, Hedayat Zarkoob, Chris Cameron, Neil Newman, Dinesh Raghu
Peer-reviewed conferences, the main publication venues in CS, rely critically on matching each paper with highly qualified reviewers.
no code implementations • ICLR 2022 • Yatin Nandwani, Vidit Jain, Mausam, Parag Singla
One drawback of the proposed architectures, which are often based on Graph Neural Networks (GNNs), is that they cannot generalize across the size of the output space from which variables are assigned values, e.g., the set of colors in a Graph Coloring Problem (GCP) or the board size in Sudoku.
no code implementations • 18 Apr 2021 • Keshav Kolluru, Mayank Singh Chauhan, Yatin Nandwani, Parag Singla, Mausam
Pre-trained language models (LMs) like BERT have been shown to store factual knowledge about the world.
no code implementations • ICLR 2021 • Yatin Nandwani, Deepanshu Jindal, Mausam, Parag Singla
Our framework uses a selection module, whose goal is to dynamically determine, for every input, the solution that is most effective for training the network parameters in any given learning iteration.
1 code implementation • AKBC 2020 • Yatin Nandwani, Ankesh Gupta, Aman Agrawal, Mayank Singh Chauhan, Parag Singla, Mausam
State-of-the-art models for Knowledge Base Completion (KBC) are based on tensor factorization (TF), e.g., DistMult and ComplEx.
1 code implementation • NeurIPS 2019 • Yatin Nandwani, Abhishek Pathak, Mausam, Parag Singla
In this paper, we present a constrained optimization formulation for training a deep network with a given set of hard constraints on output labels.