Search Results for author: Sarthak Jain

Found 20 papers, 11 papers with code

From Instructions to Constraints: Language Model Alignment with Automatic Constraint Verification

no code implementations • 10 Mar 2024 • Fei Wang, Chao Shang, Sarthak Jain, Shuai Wang, Qiang Ning, Bonan Min, Vittorio Castelli, Yassine Benajiba, Dan Roth

We investigate common constraints in NLP tasks, categorize them into three classes based on the types of their arguments, and propose a unified framework, ACT (Aligning to ConsTraints), to automatically produce supervision signals for user alignment with constraints.

Tasks: Abstractive Text Summarization, Entity Typing, +2
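
As a toy illustration of constraint verification producing supervision signals, the snippet below checks a hypothetical label-space constraint and scores a model response. The constraint, helper name, and scoring rule are invented for illustration and are not the ACT framework itself.

```python
# Toy constraint verifier (hypothetical constraint; not the ACT framework itself).
ALLOWED_TYPES = {"person", "organization", "location"}  # assumed label-space constraint

def verify_constraint(response: str) -> float:
    """Return 1.0 if the predicted entity type satisfies the constraint, else 0.0."""
    return 1.0 if response.strip().lower() in ALLOWED_TYPES else 0.0

# Such automatic verification scores can then serve as supervision signals for alignment.
print(verify_constraint("Organization"))  # 1.0
print(verify_constraint("color"))         # 0.0
```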

Game-theoretic Counterfactual Explanation for Graph Neural Networks

no code implementations • 8 Feb 2024 • Chirag Chhablani, Sarthak Jain, Akshay Channesh, Ian A. Kash, Sourav Medya

Our results reveal that computing Banzhaf values requires lower sample complexity for identifying counterfactual explanations than other popular methods, such as computing Shapley values.

Tasks: Counterfactual, Counterfactual Explanation, +2
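
A minimal Monte Carlo sketch of Banzhaf-value estimation for edge importance in a graph explanation setting; `model_score` is a hypothetical callable that evaluates the GNN on the subgraph induced by an edge subset, and this generic estimator is not the authors' implementation.

```python
import random

def banzhaf_values(edges, model_score, num_samples=1000):
    """Estimate each edge's Banzhaf value: its average marginal contribution to
    the model score over uniformly random subsets of the remaining edges."""
    values = {e: 0.0 for e in edges}
    for e in edges:
        for _ in range(num_samples):
            # Each other edge enters the coalition independently with probability 1/2.
            coalition = [f for f in edges if f != e and random.random() < 0.5]
            values[e] += (model_score(coalition + [e]) - model_score(coalition)) / num_samples
    return values
```

The abstract's comparison concerns how many such samples are needed: the claim is that Banzhaf-value estimates stabilize with fewer samples than Shapley-value estimates.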

A deep learning pipeline for cross-sectional and longitudinal multiview data integration

1 code implementation • 2 Dec 2023 • Sarthak Jain, Sandra E. Safo

Biomedical research now commonly integrates diverse data types or views from the same individuals to better understand the pathobiology of complex diseases, but the challenge lies in meaningfully integrating these diverse views.

Tasks: Data Integration, Variable Selection

How Many and Which Training Points Would Need to be Removed to Flip this Prediction?

1 code implementation • 4 Feb 2023 • Jinghan Yang, Sarthak Jain, Byron C. Wallace

We consider the problem of identifying a minimal subset of training data $\mathcal{S}_t$ such that if the instances comprising $\mathcal{S}_t$ had been removed prior to training, the categorization of a given test point $x_t$ would have been different.

Tasks: Text Classification
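
One natural heuristic for this search, sketched below, ranks training points by an estimated effect on the test prediction and removes them greedily until the label flips. Here `influence_on_test` and `predict_without` are hypothetical helpers, and this greedy procedure is an approximation rather than the paper's exact algorithm.

```python
def minimal_flip_set(train_ids, x_t, influence_on_test, predict_without, max_k=50):
    """Greedily remove the training points estimated to most support the current
    prediction for x_t until that prediction flips; give up after max_k removals."""
    original = predict_without(set(), x_t)           # prediction with the full training set
    ranked = sorted(train_ids, key=lambda i: influence_on_test(i, x_t), reverse=True)
    removed = set()
    for i in ranked[:max_k]:
        removed.add(i)
        if predict_without(removed, x_t) != original:
            return removed                           # a flipping subset (not necessarily minimal)
    return None                                      # no flip found within the budget
```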

Influence Functions for Sequence Tagging Models

1 code implementation • 25 Oct 2022 • Sarthak Jain, Varun Manjunatha, Byron C. Wallace, Ani Nenkova

We show the practical utility of segment influence by using the method to identify systematic annotation errors in two named entity recognition corpora.

Tasks: Named Entity Recognition, +3
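
A hedged sketch of the classic influence-function form restricted to a segment: the influence of a training example on a test segment is approximated as -g_test^T H^{-1} g_train, where the test gradient is taken only over the segment's tokens. The precomputed gradients and inverse Hessian are assumed inputs; this is not the paper's exact recipe.

```python
import numpy as np

def segment_influence(grad_test_segment, grad_train_example, hessian_inv):
    """grad_*: (p,) parameter gradients of the loss; hessian_inv: (p, p) inverse
    (or a practical approximation) of the training-loss Hessian."""
    return -grad_test_segment @ hessian_inv @ grad_train_example
```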

Modular Self-Supervision for Document-Level Relation Extraction

no code implementations • EMNLP 2021 • Sheng Zhang, Cliff Wong, Naoto Usuyama, Sarthak Jain, Tristan Naumann, Hoifung Poon

Extracting relations across large text spans has been relatively underexplored in NLP, but it is particularly important for high-value domains such as biomedicine, where obtaining high recall of the latest findings is crucial for practical applications.

Tasks: Document-level Relation Extraction, Reading Comprehension, +1

Combining Feature and Instance Attribution to Detect Artifacts

no code implementations • Findings (ACL) 2022 • Pouya Pezeshkpour, Sarthak Jain, Sameer Singh, Byron C. Wallace

In this paper we evaluate the use of different attribution methods for aiding the identification of training data artifacts.

Does BERT Pretrained on Clinical Notes Reveal Sensitive Data?

4 code implementations • NAACL 2021 • Eric Lehman, Sarthak Jain, Karl Pichotta, Yoav Goldberg, Byron C. Wallace

The cost of training such models (and the necessity of data access to do so), coupled with their utility, motivates parameter sharing, i.e., the release of pretrained models such as ClinicalBERT.

An Empirical Comparison of Instance Attribution Methods for NLP

1 code implementation • NAACL 2021 • Pouya Pezeshkpour, Sarthak Jain, Byron C. Wallace, Sameer Singh

Instance attribution methods constitute one means of accomplishing these goals by retrieving training instances that (may have) led to a particular prediction.

Tasks: Retrieval
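
One simple member of this family retrieves the training examples whose representations are most similar to the test example's, as sketched below. The choice of representation (final-layer embeddings, gradients, etc.) is left abstract, and the function name is illustrative rather than from the paper.

```python
import numpy as np

def top_k_similar(train_reprs, test_repr, k=5):
    """train_reprs: (N, d) training-example representations; test_repr: (d,).
    Returns indices of the k training examples most similar to the test example."""
    sims = train_reprs @ test_repr / (
        np.linalg.norm(train_reprs, axis=1) * np.linalg.norm(test_repr) + 1e-8)
    return np.argsort(-sims)[:k]
```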

SciREX: A Challenge Dataset for Document-Level Information Extraction

1 code implementation • ACL 2020 • Sarthak Jain, Madeleine van Zuylen, Hannaneh Hajishirzi, Iz Beltagy

It is challenging to create a large-scale information extraction (IE) dataset at the document level since it requires an understanding of the whole document to annotate entities and their document-level relationships that usually span beyond sentences or even sections.

Tasks: Sentence

Learning to Faithfully Rationalize by Construction

2 code implementations • ACL 2020 • Sarthak Jain, Sarah Wiegreffe, Yuval Pinter, Byron C. Wallace

In NLP this often entails extracting snippets of an input text 'responsible for' the corresponding model output; when such a snippet comprises tokens that indeed informed the model's prediction, it is a faithful explanation.

Tasks: Feature Importance, Text Classification, +1
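
A minimal sketch of the extract-then-predict idea behind faithfulness by construction: a scorer selects a snippet, and a separate classifier sees only that snippet, so the rationale necessarily informed the prediction. `score_tokens` and `classify` are hypothetical components, not the paper's exact pipeline.

```python
def predict_with_rationale(tokens, score_tokens, classify, budget=0.2):
    """Extract the highest-scoring tokens as a rationale, then classify using only
    that snippet, so the returned explanation is faithful by construction."""
    scores = score_tokens(tokens)                         # one importance score per token
    k = max(1, int(budget * len(tokens)))                 # rationale length budget
    keep = sorted(range(len(tokens)), key=lambda i: -scores[i])[:k]
    rationale = [tokens[i] for i in sorted(keep)]         # snippet shown to the classifier
    return classify(rationale), rationale                 # prediction depends only on the rationale
```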

ERASER: A Benchmark to Evaluate Rationalized NLP Models

2 code implementations • ACL 2020 • Jay DeYoung, Sarthak Jain, Nazneen Fatema Rajani, Eric Lehman, Caiming Xiong, Richard Socher, Byron C. Wallace

We propose several metrics that aim to capture how well the rationales provided by models align with human rationales, and also how faithful these rationales are (i.e., the degree to which provided rationales influenced the corresponding predictions).
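
Two such faithfulness-style quantities can be sketched as follows: a sufficiency score (how much the prediction degrades when the model sees only the rationale) and a comprehensiveness score (how much it degrades when the rationale is removed). `predict_prob` is a hypothetical interface returning p(label | tokens); the benchmark's exact metric definitions may differ in detail.

```python
def sufficiency(tokens, rationale_idx, label, predict_prob):
    """Drop in predicted probability when the model sees only the rationale tokens."""
    rationale = [t for i, t in enumerate(tokens) if i in rationale_idx]
    return predict_prob(tokens, label) - predict_prob(rationale, label)

def comprehensiveness(tokens, rationale_idx, label, predict_prob):
    """Drop in predicted probability when the rationale tokens are removed."""
    remainder = [t for i, t in enumerate(tokens) if i not in rationale_idx]
    return predict_prob(tokens, label) - predict_prob(remainder, label)
```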

SUPP.AI: Finding Evidence for Supplement-Drug Interactions

1 code implementation • ACL 2020 • Lucy Lu Wang, Oyvind Tafjord, Arman Cohan, Sarthak Jain, Sam Skjonsberg, Carissa Schoenick, Nick Botner, Waleed Ammar

We fine-tune the contextualized word representations of the RoBERTa language model using labeled DDI data, and apply the fine-tuned model to identify supplement interactions.

Tasks: General Classification, Language Modelling
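
A generic Hugging Face fine-tuning sketch in the spirit of fine-tuning RoBERTa on labeled interaction sentences; the checkpoint, the two-example placeholder dataset, and the hyperparameters are assumptions for illustration, not the authors' configuration.

```python
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained("roberta-base", num_labels=2)

# Tiny placeholder dataset; the real pipeline uses labeled DDI sentences.
data = Dataset.from_dict({
    "text": ["Drug A increases the serum concentration of Drug B.",
             "The patient reported mild headache after the visit."],
    "label": [1, 0],
})
data = data.map(lambda batch: tokenizer(batch["text"], truncation=True,
                                        padding="max_length", max_length=128),
                batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="interaction-roberta", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=data,
)
trainer.train()
# The fine-tuned classifier is then applied to candidate supplement-drug sentences.
```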

Learning to Identify Patients at Risk of Uncontrolled Hypertension Using Electronic Health Records Data

no code implementations • 28 Jun 2019 • Ramin Mohammadi, Sarthak Jain, Stephen Agboola, Ramya Palacholla, Sagar Kamarthi, Byron C. Wallace

We develop machine learning models (logistic regression and recurrent neural networks) to stratify patients with respect to the risk of exhibiting uncontrolled hypertension within the coming three-month period.

Tasks: Management, Regression
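
A minimal sketch of the logistic-regression variant on synthetic placeholder features (the study's EHR features and cohort are not reproduced here): fit the model, then rank patients by predicted risk.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                  # placeholder EHR-derived features per patient
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)

clf = LogisticRegression().fit(X, y)
risk = clf.predict_proba(X)[:, 1]              # predicted probability of uncontrolled hypertension
high_risk = np.argsort(-risk)[:20]             # highest-risk patients flagged for follow-up
```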

An Analysis of Attention over Clinical Notes for Predictive Tasks

no code implementations • WS 2019 • Sarthak Jain, Ramin Mohammadi, Byron C. Wallace

In this work we perform experiments to explore this question using two EMR corpora and four different predictive tasks, and find that: (i) inclusion of attention mechanisms is critical for neural encoder modules that operate over notes fields to yield competitive performance, but (ii) unfortunately, while these mechanisms boost predictive performance, it is decidedly less clear whether they provide meaningful support for those predictions.
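
A minimal sketch of the kind of attention pooling such encoder modules apply over note tokens, with illustrative shapes; inspecting the returned weights is exactly the step whose explanatory value the paper questions.

```python
import numpy as np

def attention_pool(H, w):
    """H: (T, d) token representations from a note encoder; w: (d,) attention vector.
    Returns the attention-weighted note representation and the attention weights."""
    scores = H @ w                              # unnormalized per-token scores
    alphas = np.exp(scores - scores.max())
    alphas = alphas / alphas.sum()              # softmax over tokens
    return alphas @ H, alphas                   # weights are what one might inspect as "explanations"
```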

Learning Disentangled Representations of Texts with Application to Biomedical Abstracts

1 code implementation • EMNLP 2018 • Sarthak Jain, Edward Banner, Jan-Willem van de Meent, Iain J. Marshall, Byron C. Wallace

We propose a method for learning disentangled representations of texts that code for distinct and complementary aspects, with the aim of affording efficient model transfer and interpretability.

Tasks: Retrieval
