no code implementations • 17 May 2018 • Preethi Raghavan, Siddharth Patwardhan, Jennifer J. Liang, Murthy V. Devarakonda
Over the course of 11 months, 11 medical students followed our annotation methodology, resulting in a question answering dataset of 5696 questions over 71 patient records, of which 1747 questions have corresponding answers generated by the medical students.
no code implementations • NAACL 2016 • Oren Melamud, David McClosky, Siddharth Patwardhan, Mohit Bansal
We provide the first extensive evaluation of how using different types of context to learn skip-gram word embeddings affects performance on a wide range of intrinsic and extrinsic NLP tasks.
no code implementations • ACL 2016 • Chaitanya Shivade, Preethi Raghavan, Siddharth Patwardhan
We seek to address the lack of labeled data (and high cost of annotation) for textual entailment in some domains.
1 code implementation • 16 Jun 2019 • Siddharth Patwardhan
We propose a novel measure to quantify dismantlement of a fragmented network.
Physics and Society • Social and Information Networks
no code implementations • 14 Jan 2022 • Huiting Liu, Avinesh P. V. S., Siddharth Patwardhan, Peter Grasch, Sachin Agarwal
For this study, we propose a methodology for the assessment of model stability (which we refer to as jitter) under various experimental conditions.
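The abstract does not spell out how jitter is computed; one plausible reading is the average pairwise disagreement rate between predictions from repeated training runs. The sketch below follows that assumption (the paper's exact definition, and the name `jitter`, may differ from what the authors use):

```python
from itertools import combinations

def jitter(runs):
    """Average pairwise disagreement rate between prediction lists
    produced by repeated training runs of the same model.
    NOTE: this is an assumed formulation, not the paper's definition."""
    pairs = list(combinations(runs, 2))
    total = 0.0
    for a, b in pairs:
        # Fraction of examples on which the two runs disagree.
        total += sum(x != y for x, y in zip(a, b)) / len(a)
    return total / len(pairs)

# Three runs of a toy classifier over the same 4 examples:
runs = [
    ["pos", "neg", "pos", "neg"],
    ["pos", "pos", "pos", "neg"],
    ["pos", "neg", "neg", "neg"],
]
print(jitter(runs))  # 0.333...: on average, each pair of runs disagrees on 1/3 of examples
```

A jitter of 0 would mean the model's predictions are identical across runs; values near 1 indicate severe instability.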
no code implementations • 9 Feb 2022 • Jiawen Zhang, Abhijit Mishra, Avinesh P. V. S, Siddharth Patwardhan, Sachin Agarwal
In this work, we propose a potentially data-efficient approach that reuses existing systems for (a) image analysis, (b) question rewriting, and (c) text-based question answering to answer such visual questions.
no code implementations • 4 Dec 2022 • Benjamin Muller, Deepanshu Gupta, Siddharth Patwardhan, Jean-Philippe Fauconnier, David Vandyke, Sachin Agarwal
For a given language, we are able to predict zero-shot performance, which increases on a logarithmic scale with the number of few-shot target-language data points.
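A logarithmic scaling relationship of this kind can be captured by fitting performance as linear in log(n). The sketch below is illustrative only (the paper's actual predictor and features are not described in the excerpt); it fits `score = a*log(n) + b` by ordinary least squares:

```python
import math

def fit_log(ns, scores):
    """Least-squares fit of scores = a*log(n) + b.
    Illustrative sketch; the paper's predictor may differ."""
    xs = [math.log(n) for n in ns]
    mx = sum(xs) / len(xs)
    my = sum(scores) / len(scores)
    a = sum((x - mx) * (y - my) for x, y in zip(xs, scores)) \
        / sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return a, b

# Synthetic scores generated from a=5, b=20 to show the fit recovers them:
ns = [10, 100, 1000, 10000]
scores = [5 * math.log(n) + 20 for n in ns]
a, b = fit_log(ns, scores)
# Extrapolate to a larger few-shot budget:
pred = a * math.log(100000) + b
```

On real data the fit would of course be noisy, but the linear-in-log form makes extrapolation to larger few-shot budgets straightforward.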
no code implementations • 16 Dec 2022 • Siddharth Patwardhan, Utso Majumder, Aditya Das Sarma, Mayukha Pal, Divyanshi Dwivedi, Prasanta K. Panigrahi
The percolation threshold is an important measure to determine the inherent rigidity of large networks.
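As a concrete illustration of a percolation threshold (not the method of the paper, whose approach is not described in the excerpt): in an Erdős–Rényi graph G(n, p), a giant connected component emerges around p_c ≈ 1/n. A small union-find simulation shows the transition:

```python
import random

def giant_fraction(n, p, seed=0):
    """Fraction of nodes in the largest connected component of an
    Erdos-Renyi graph G(n, p), built with union-find.
    Toy illustration of percolation, unrelated to the paper's method."""
    rng = random.Random(seed)
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    # Include each possible edge independently with probability p.
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                ri, rj = find(i), find(j)
                if ri != rj:
                    parent[ri] = rj

    sizes = {}
    for i in range(n):
        r = find(i)
        sizes[r] = sizes.get(r, 0) + 1
    return max(sizes.values()) / n

n = 500
below = giant_fraction(n, 0.5 / n)  # below threshold: components stay tiny
above = giant_fraction(n, 3.0 / n)  # above threshold: a giant component forms
```

Below the threshold the largest component covers only a vanishing fraction of nodes; above it, most of the network connects, which is why the threshold serves as a measure of a network's inherent rigidity.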
no code implementations • 31 Oct 2023 • Gabrielle Cohn, Rishika Agarwal, Deepanshu Gupta, Siddharth Patwardhan
We introduce EELBERT, an approach for compression of transformer-based models (e.g., BERT), with minimal impact on the accuracy of downstream tasks.