Search Results for author: Parminder Bhatia

Found 38 papers, 5 papers with code

soc2seq: Social Embedding meets Conversation Model

1 code implementation • 17 Feb 2017 • Parminder Bhatia, Marsal Gavalda, Arash Einolghozati

While liking or upvoting a post on a mobile app is easy to do, replying with a written note is much more difficult, due both to the cognitive load of coming up with a meaningful response and to the mechanics of entering the text.

Multi-lingual Evaluation of Code Generation Models

2 code implementations • 26 Oct 2022 • Ben Athiwaratkun, Sanjay Krishna Gouda, Zijian Wang, Xiaopeng Li, Yuchen Tian, Ming Tan, Wasi Uddin Ahmad, Shiqi Wang, Qing Sun, Mingyue Shang, Sujan Kumar Gonugondla, Hantian Ding, Varun Kumar, Nathan Fulton, Arash Farahani, Siddhartha Jain, Robert Giaquinto, Haifeng Qian, Murali Krishna Ramanathan, Ramesh Nallapati, Baishakhi Ray, Parminder Bhatia, Sudipta Sengupta, Dan Roth, Bing Xiang

Using these benchmarks, we are able to assess the performance of code generation models in a multi-lingual fashion, discovering the generalization ability of language models to out-of-domain languages, the advantages of multi-lingual models over mono-lingual ones, the ability of few-shot prompting to teach the model new languages, and zero-shot translation abilities even in mono-lingual settings.
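
Benchmarks of this kind are typically scored with execution-based metrics. Below is a minimal sketch of the standard unbiased pass@k estimator commonly reported by such evaluations; it is not taken from the paper's own harness, and `n`, `c`, and `k` follow the usual convention of total samples, passing samples, and sampling budget.

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k: probability that at least one of k samples,
    drawn without replacement from n generations of which c pass the
    unit tests, is correct."""
    if n - c < k:  # every size-k draw must contain a passing sample
        return 1.0
    # 1 - C(n-c, k) / C(n, k), computed as a numerically stable product
    return float(1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1)))

# Example: 100 generations per problem, 7 passing, sampling budget k=10
print(pass_at_k(n=100, c=7, k=10))  # ~0.53
```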

Code Completion • Code Translation +1

DQ-BART: Efficient Sequence-to-Sequence Model via Joint Distillation and Quantization

2 code implementations • ACL 2022 • Zheng Li, Zijian Wang, Ming Tan, Ramesh Nallapati, Parminder Bhatia, Andrew Arnold, Bing Xiang, Dan Roth

Empirical analyses show that, despite the challenging nature of generative tasks, we were able to achieve a 16.5x model footprint compression ratio with little performance drop relative to the full-precision counterparts on multiple summarization and QA datasets.
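
As a rough illustration of the two ingredients, the sketch below pairs a straight-through fake-quantizer with a standard distillation loss in PyTorch. This is a generic sketch under common assumptions, not DQ-BART's exact recipe; `alpha` and the temperature `T` are illustrative hyperparameters.

```python
import torch
import torch.nn.functional as F

def fake_quantize(w: torch.Tensor, bits: int = 8) -> torch.Tensor:
    # Symmetric uniform fake-quantization with a straight-through
    # estimator: the forward pass sees quantized weights, the backward
    # pass lets gradients flow through unchanged.
    qmax = 2 ** (bits - 1) - 1
    scale = w.detach().abs().max().clamp(min=1e-8) / qmax
    w_q = torch.round(w / scale).clamp(-qmax - 1, qmax) * scale
    return w + (w_q - w).detach()

def distill_loss(student_logits, teacher_logits, labels,
                 alpha: float = 0.5, T: float = 2.0):
    # Blend hard-label cross-entropy with a temperature-softened KL
    # term that pulls the student toward the teacher's distribution.
    ce = F.cross_entropy(student_logits.view(-1, student_logits.size(-1)),
                         labels.view(-1))
    kl = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                  F.softmax(teacher_logits / T, dim=-1),
                  reduction="batchmean") * T * T
    return alpha * ce + (1.0 - alpha) * kl
```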

Knowledge Distillation • Model Compression +2

ReCode: Robustness Evaluation of Code Generation Models

2 code implementations • 20 Dec 2022 • Shiqi Wang, Zheng Li, Haifeng Qian, Chenghao Yang, Zijian Wang, Mingyue Shang, Varun Kumar, Samson Tan, Baishakhi Ray, Parminder Bhatia, Ramesh Nallapati, Murali Krishna Ramanathan, Dan Roth, Bing Xiang

Most existing work on robustness in text or code tasks has focused on classification, while robustness in generation tasks is an uncharted area; to date there is no comprehensive benchmark for robustness in code generation.
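
ReCode's released transformations span docstrings, function names, syntax, and formatting; the toy sketch below only illustrates the general pattern with a single character-swap perturbation and a robustness check. `model_generate` and `tests` are hypothetical callables, not part of the benchmark's API.

```python
import random

def swap_typo(text: str, rng: random.Random) -> str:
    # One adjacent-character swap inside a word: a small,
    # meaning-preserving perturbation of the natural-language prompt.
    idx = [i for i in range(len(text) - 1)
           if text[i].isalpha() and text[i + 1].isalpha()]
    if not idx:
        return text
    i = rng.choice(idx)
    return text[:i] + text[i + 1] + text[i] + text[i + 2:]

def robust_pass(model_generate, prompt: str, tests, n_perturb: int = 5) -> bool:
    # A problem counts as robustly solved only if the generation passes
    # the unit tests for the original prompt and every perturbed variant.
    rng = random.Random(0)
    variants = [prompt] + [swap_typo(prompt, rng) for _ in range(n_perturb)]
    return all(tests(model_generate(p)) for p in variants)
```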

Code Generation

An Empirical Investigation Towards Efficient Multi-Domain Language Model Pre-training

1 code implementation • EMNLP 2020 • Kristjan Arumae, Qing Sun, Parminder Bhatia

However, in order to achieve state-of-the-art performance on out-of-domain tasks such as clinical named entity recognition and relation extraction, additional in-domain pre-training is required.

Clustering • Language Modelling +4

Dynamic Transfer Learning for Named Entity Recognition

no code implementations • 13 Dec 2018 • Parminder Bhatia, Kristjan Arumae, Busra Celikkaya

We complement a standard hierarchical NER model with a general transfer learning framework consisting of parameter sharing between the source and target tasks, and achieve scores significantly above the baseline architecture.
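
A minimal sketch of the parameter-sharing idea, assuming a single shared encoder with one tag head per task; the paper's actual model is hierarchical (character- and word-level) and explores finer-grained sharing schemes, so the names and shapes here are illustrative.

```python
import torch.nn as nn

class SharedEncoderNER(nn.Module):
    # One encoder shared between the source and target NER tasks,
    # with a separate tag-classification head per task.
    def __init__(self, vocab_size, hidden, src_tags, tgt_tags):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.encoder = nn.LSTM(hidden, hidden, batch_first=True,
                               bidirectional=True)
        self.heads = nn.ModuleDict({
            "source": nn.Linear(2 * hidden, src_tags),
            "target": nn.Linear(2 * hidden, tgt_tags),
        })

    def forward(self, token_ids, task: str):
        h, _ = self.encoder(self.embed(token_ids))
        return self.heads[task](h)  # per-token tag logits
```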

Model Optimization • named-entity-recognition +3

Joint Entity Extraction and Assertion Detection for Clinical Text

no code implementations • ACL 2019 • Parminder Bhatia, Busra Celikkaya, Mohammed Khalilia

Most of the existing systems treat this task as a pipeline of two separate tasks, i.e., named entity recognition (NER) and rule-based negation detection.

Entity Extraction using GAN • named-entity-recognition +4

Relation Extraction using Explicit Context Conditioning

no code implementations • NAACL 2019 • Gaurav Singh, Parminder Bhatia

Most current RE models learn context-aware representations of the target entities, which are then used to establish the relation between them.

Relation • Relation Extraction +1

Comprehend Medical: a Named Entity Recognition and Relationship Extraction Web Service

no code implementations • 15 Oct 2019 • Parminder Bhatia, Busra Celikkaya, Mohammed Khalilia, Selvan Senthivel

Comprehend Medical is a stateless, Health Insurance Portability and Accountability Act (HIPAA)-eligible Named Entity Recognition (NER) and Relationship Extraction (RE) service launched under Amazon Web Services (AWS), trained using state-of-the-art deep learning models.

Anatomy • named-entity-recognition +3

LATTE: Latent Type Modeling for Biomedical Entity Linking

no code implementations • 21 Nov 2019 • Ming Zhu, Busra Celikkaya, Parminder Bhatia, Chandan K. Reddy

This is of significant importance in the biomedical domain, where it could be used to semantically annotate a large volume of clinical records and biomedical literature, linking mentions to standardized concepts described in an ontology such as the Unified Medical Language System (UMLS).

Entity Disambiguation • Entity Linking +1

CALM: Continuous Adaptive Learning for Language Modeling

no code implementations • 8 Apr 2020 • Kristjan Arumae, Parminder Bhatia

Training large language representation models has become a standard in the natural language processing community.

Continual Learning • Language Modelling

Towards User Friendly Medication Mapping Using Entity-Boosted Two-Tower Neural Network

no code implementations • 17 Jun 2020 • Shaoqing Yuan, Parminder Bhatia, Busra Celikkaya, Haiyang Liu, Kyunghwan Choi

Medication name inference is the task of mapping user-friendly medication names from free-form text to a concept in a normalized medication list.
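
A minimal sketch of the two-tower matching idea, assuming one tower encodes the free-form user string, the other encodes candidate names from the normalized list, and retrieval is nearest-neighbor in the shared embedding space; the paper's entity-boosting and training details are omitted.

```python
import torch.nn as nn
import torch.nn.functional as F

class TwoTowerMatcher(nn.Module):
    # One tower encodes the free-form user string, the other encodes
    # names from the normalized medication list; matching is cosine
    # similarity in the shared embedding space.
    def __init__(self, vocab_size: int, dim: int = 128):
        super().__init__()
        self.query_tower = nn.EmbeddingBag(vocab_size, dim)  # mean-pooled
        self.cand_tower = nn.EmbeddingBag(vocab_size, dim)

    def score(self, query_ids, cand_ids):
        q = F.normalize(self.query_tower(query_ids), dim=-1)
        c = F.normalize(self.cand_tower(cand_ids), dim=-1)
        return q @ c.T  # (num_queries, num_candidates) similarities

# Inference: model.score(q_ids, cand_ids).argmax(dim=-1) picks the
# best-matching normalized concept for each query.
```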

Clustering • Descriptive +1

Towards Clinical Encounter Summarization: Learning to Compose Discharge Summaries from Prior Notes

no code implementations • 27 Apr 2021 • Han-Chin Shing, Chaitanya Shivade, Nima Pourdamghani, Feng Nan, Philip Resnik, Douglas Oard, Parminder Bhatia

The records of a clinical encounter can be extensive and complex, thus placing a premium on tools that can extract and summarize relevant information.

Hallucination • Informativeness +2

Neural Entity Recognition with Gazetteer based Fusion

no code implementations • Findings (ACL) 2021 • Qing Sun, Parminder Bhatia

Our gazetteer based fusion model is data efficient, achieving +1.7 micro-F1 gains on the i2b2 dataset using 20% of the training data, and bringing +4.7 micro-F1 gains on novel entity mentions never seen during training.
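
A minimal sketch of one way to fuse gazetteer signals, assuming per-token binary dictionary-match indicators concatenated to contextual features before classification; the paper's fusion mechanism is richer (a trained gazetteer sub-network), so treat the names and shapes here as illustrative.

```python
import torch
import torch.nn as nn

class GazetteerFusionHead(nn.Module):
    # Concatenate per-token binary gazetteer-match indicators (one per
    # entity type) to contextual token features before tag classification.
    def __init__(self, hidden: int, n_types: int, n_tags: int):
        super().__init__()
        self.classifier = nn.Linear(hidden + n_types, n_tags)

    def forward(self, token_feats, gazetteer_hits):
        # token_feats:    (batch, seq, hidden) from any contextual encoder
        # gazetteer_hits: (batch, seq, n_types) 0/1 dictionary matches
        fused = torch.cat([token_feats, gazetteer_hits.float()], dim=-1)
        return self.classifier(fused)  # per-token tag logits
```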

named-entity-recognition • Named Entity Recognition +1

Improving Early Sepsis Prediction with Multi Modal Learning

no code implementations • 23 Jul 2021 • Fred Qin, Vivek Madan, Ujjwal Ratan, Zohar Karnin, Vishaal Kapoor, Parminder Bhatia, Taha Kass-Hout

Clinical text provides essential information for estimating the severity of sepsis, in addition to structured clinical data.

Knowledge Enhanced Pretrained Language Models: A Comprehensive Survey

no code implementations • 16 Oct 2021 • Xiaokai Wei, Shen Wang, Dejiao Zhang, Parminder Bhatia, Andrew Arnold

This new paradigm has revolutionized the entire field of natural language processing and set new state-of-the-art performance on a wide variety of NLP tasks.

Kronecker Factorization for Preventing Catastrophic Forgetting in Large-scale Medical Entity Linking

no code implementations • 11 Nov 2021 • Denis Jered McInerney, Luyang Kong, Kristjan Arumae, Byron Wallace, Parminder Bhatia

Elastic Weight Consolidation is a recently proposed method to address this issue, but scaling this approach to the modern large models used in practice requires making strong independence assumptions about model parameters, limiting its effectiveness.
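
For reference, the diagonal EWC penalty below is exactly where that independence assumption enters: each parameter is regularized separately, weighted by its own Fisher term. The paper replaces this diagonal structure with Kronecker-factored blocks; the sketch shows only the standard diagonal form, with `fisher_diag` and `old_params` as assumed precomputed dictionaries.

```python
import torch

def ewc_penalty(model: torch.nn.Module, fisher_diag: dict,
                old_params: dict, lam: float = 1.0):
    # Diagonal EWC: a quadratic pull toward the previous task's weights,
    # scaled per-parameter by its (diagonal) Fisher information estimate.
    loss = 0.0
    for name, p in model.named_parameters():
        loss = loss + (fisher_diag[name] * (p - old_params[name]) ** 2).sum()
    return 0.5 * lam * loss
```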

Entity Linking • Multi-Task Learning

Debiasing Neural Retrieval via In-batch Balancing Regularization

no code implementations • NAACL (GeBNLP) 2022 • Yuantong Li, Xiaokai Wei, Zijian Wang, Shen Wang, Parminder Bhatia, Xiaofei Ma, Andrew Arnold

People frequently interact with information retrieval (IR) systems; however, IR models exhibit biases and discrimination towards various demographics.

Fairness • Passage Retrieval +1

CoCoMIC: Code Completion By Jointly Modeling In-file and Cross-file Context

no code implementations • 20 Dec 2022 • Yangruibo Ding, Zijian Wang, Wasi Uddin Ahmad, Murali Krishna Ramanathan, Ramesh Nallapati, Parminder Bhatia, Dan Roth, Bing Xiang

While pre-trained language models (LMs) for code have achieved great success in code completion, they generate code conditioned only on the contents within the file, i.e., in-file context, ignoring the rich semantics in other files of the same project, i.e., cross-file context, a critical source of information that is especially useful in modern modular software development.
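
CoCoMIC models the retrieved cross-file context jointly inside the model; the sketch below shows only the simpler surface-level idea of prepending retrieved cross-file snippets to the in-file prefix. All names (`in_file_prefix`, `cross_file_snippets`, the comment format) are hypothetical, not the paper's interface.

```python
def build_prompt(in_file_prefix: str, cross_file_snippets: list[str],
                 budget_chars: int = 4000) -> str:
    # Prepend retrieved cross-file context (e.g., signatures of imported
    # functions and classes from sibling files) as comments, then trim
    # from the left so the model's context window is not exceeded.
    header = "".join(f"# cross-file: {s}\n" for s in cross_file_snippets)
    return (header + in_file_prefix)[-budget_chars:]
```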

Code Completion

SonoSAMTrack -- Segment and Track Anything on Ultrasound Images

no code implementations • 25 Oct 2023 • Hariharan Ravishankar, Rohan Patil, Vikram Melapudi, Harsh Suthar, Stephan Anzengruber, Parminder Bhatia, Taha Kass-Hout, Pavan Annangi

In this paper, we present SonoSAMTrack, which combines SonoSAM, a promptable foundation model for segmenting objects of interest on ultrasound images, with a state-of-the-art contour tracking model to propagate segmentations on 2D+t and 3D ultrasound datasets.

Knowledge Distillation

One-shot Localization and Segmentation of Medical Images with Foundation Models

no code implementations • 28 Oct 2023 • Deepa Anand, Gurunath Reddy M, Vanika Singhal, Dattesh D. Shanbhag, Shriram KS, Uday Patil, Chitresh Bhushan, Kavitha Manickam, Dawei Gui, Rakesh Mullick, Avinash Gopal, Parminder Bhatia, Taha Kass-Hout

Recent Vision Transformer (ViT) and Stable Diffusion (SD) models, with their ability to capture rich semantic features of an image, have been used for image correspondence tasks on natural images.

Segmentation • Semantic Segmentation

TriSum: Learning Summarization Ability from Large Language Models with Structured Rationale

no code implementations • 15 Mar 2024 • Pengcheng Jiang, Cao Xiao, Zifeng Wang, Parminder Bhatia, Jimeng Sun, Jiawei Han

To overcome this, we introduce TriSum, a framework for distilling LLMs' text summarization abilities into a compact, local model.

Text Summarization
