1 code implementation • 31 Jul 2024 • Elan Markowitz, Anil Ramakrishna, Jwala Dhamala, Ninareh Mehrabi, Charith Peris, Rahul Gupta, Kai-Wei Chang, Aram Galstyan
Knowledge graphs (KGs) complement Large Language Models (LLMs) by providing reliable, structured, domain-specific, and up-to-date external knowledge.
no code implementations • 19 Dec 2023 • Anaelia Ovalle, Ninareh Mehrabi, Palash Goyal, Jwala Dhamala, Kai-Wei Chang, Richard Zemel, Aram Galstyan, Yuval Pinter, Rahul Gupta
Our paper is the first to link LLM misgendering to tokenization and deficient neopronoun grammar, indicating that LLMs that are unable to correctly treat neopronouns as pronouns are more prone to misgendering.
no code implementations • 16 Nov 2023 • Ninareh Mehrabi, Palash Goyal, Anil Ramakrishna, Jwala Dhamala, Shalini Ghosh, Richard Zemel, Kai-Wei Chang, Aram Galstyan, Rahul Gupta
With the recent surge of language models in different applications, ensuring the safety and robustness of these models has gained significant importance.
no code implementations • 17 May 2023 • Anaelia Ovalle, Palash Goyal, Jwala Dhamala, Zachary Jaggers, Kai-Wei Chang, Aram Galstyan, Richard Zemel, Rahul Gupta
Transgender and non-binary (TGNB) individuals disproportionately experience discrimination and exclusion from daily life.
no code implementations • 15 Dec 2022 • Caleb Ziems, William Held, Jingfeng Yang, Jwala Dhamala, Rahul Gupta, Diyi Yang
First, we use this system to stress test question answering, machine translation, and semantic parsing.
no code implementations • 17 Nov 2022 • Ninareh Mehrabi, Palash Goyal, Apurv Verma, Jwala Dhamala, Varun Kumar, Qian Hu, Kai-Wei Chang, Richard Zemel, Aram Galstyan, Rahul Gupta
Natural language often contains ambiguities that can lead to misinterpretation and miscommunication.
no code implementations • 7 Oct 2022 • Jwala Dhamala, Varun Kumar, Rahul Gupta, Kai-Wei Chang, Aram Galstyan
We present a systematic analysis of the impact of decoding algorithms on LM fairness, and analyze the trade-off between fairness, diversity, and quality.
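For context, a minimal sketch of the kind of decoding sweep such an analysis involves; the model, prompts, and decoding settings below are illustrative assumptions, not the paper's evaluation pipeline.

```python
# Hypothetical sketch: compare generations under different decoding settings.
# Model, prompt groups, and settings are illustrative, not the paper's setup.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompts = {
    "group_a": "The nurse said that",
    "group_b": "The engineer said that",
}
decoding_settings = [
    {"do_sample": False, "num_beams": 4},                   # beam search
    {"do_sample": True, "top_k": 50, "temperature": 1.0},   # top-k sampling
    {"do_sample": True, "top_p": 0.9, "temperature": 0.7},  # nucleus sampling
]

for name, prompt in prompts.items():
    inputs = tokenizer(prompt, return_tensors="pt")
    for settings in decoding_settings:
        output = model.generate(
            **inputs, max_new_tokens=30,
            pad_token_id=tokenizer.eos_token_id, **settings,
        )
        print(name, settings, "->", tokenizer.decode(output[0], skip_special_tokens=True))
```

Continuations produced for each prompt group under each setting would then be scored for fairness, diversity, and quality.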
no code implementations • ACL 2022 • Yang Trista Cao, Yada Pruksachatkun, Kai-Wei Chang, Rahul Gupta, Varun Kumar, Jwala Dhamala, Aram Galstyan
Multiple metrics have been introduced to measure fairness in various natural language processing tasks.
no code implementations • Findings (ACL) 2022 • Umang Gupta, Jwala Dhamala, Varun Kumar, Apurv Verma, Yada Pruksachatkun, Satyapriya Krishna, Rahul Gupta, Kai-Wei Chang, Greg Ver Steeg, Aram Galstyan
Language models excel at generating coherent text, and model compression techniques such as knowledge distillation have enabled their use in resource-constrained settings.
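For background, a minimal sketch of a standard knowledge-distillation loss (temperature-softened KL between teacher and student logits mixed with cross-entropy on the hard labels); the temperature and weighting are illustrative and not tied to this paper's compression setup.

```python
# Hypothetical sketch of a standard knowledge-distillation loss.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    soft_student = F.log_softmax(student_logits / T, dim=-1)
    soft_teacher = F.softmax(teacher_logits / T, dim=-1)
    kd = F.kl_div(soft_student, soft_teacher, reduction="batchmean") * (T * T)
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1.0 - alpha) * ce

# Random tensors stand in for teacher and student outputs on one batch.
student_logits = torch.randn(8, 100)
teacher_logits = torch.randn(8, 100)
labels = torch.randint(0, 100, (8,))
print(distillation_loss(student_logits, teacher_logits, labels))
```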
no code implementations • ACL 2022 • Satyapriya Krishna, Rahul Gupta, Apurv Verma, Jwala Dhamala, Yada Pruksachatkun, Kai-Wei Chang
With the rapid growth in language processing applications, fairness has emerged as an important consideration in data-driven solutions.
no code implementations • 13 Oct 2021 • Md Shakil Zaman, Jwala Dhamala, Pradeep Bajracharya, John L. Sapp, B. Milan Horacek, Katherine C. Wu, Natalia A. Trayanova, Linwei Wang
In this paper, we present a Bayesian active learning method to directly approximate the posterior probability density function (pdf) of cardiac model parameters, in which we intelligently select training points to query the simulation model in order to learn the posterior pdf from a small number of samples.
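A minimal sketch of the general idea under simplified assumptions: a Gaussian-process surrogate of the log-posterior is refined by querying the candidate point where the surrogate is most uncertain. The toy one-parameter "simulation" and uncertainty-sampling acquisition are illustrative stand-ins, not the paper's method.

```python
# Hypothetical sketch: actively learn a surrogate of an expensive log-posterior
# by repeatedly querying the point where the surrogate is most uncertain.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def expensive_log_posterior(theta):
    # Stand-in for running the cardiac simulation and scoring the data fit.
    return -0.5 * ((theta - 0.3) / 0.1) ** 2

rng = np.random.default_rng(0)
candidates = np.linspace(0.0, 1.0, 200).reshape(-1, 1)

# Start from a few random evaluations of the simulator.
X = rng.uniform(0.0, 1.0, size=(3, 1))
y = np.array([expensive_log_posterior(x[0]) for x in X])

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), normalize_y=True)
for _ in range(10):
    gp.fit(X, y)
    _, std = gp.predict(candidates, return_std=True)
    next_x = candidates[np.argmax(std)]      # query the most uncertain point
    X = np.vstack([X, next_x])
    y = np.append(y, expensive_log_posterior(next_x[0]))

mean, _ = gp.predict(candidates, return_std=True)
print("surrogate peak at theta ≈", candidates[np.argmax(mean)][0])
```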
no code implementations • Findings (ACL) 2021 • Yada Pruksachatkun, Satyapriya Krishna, Jwala Dhamala, Rahul Gupta, Kai-Wei Chang
Existing bias mitigation methods to reduce disparities in model outcomes across cohorts have focused on data augmentation, debiasing model embeddings, or adding fairness-based optimization objectives during training.
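For context, a minimal sketch of one such data-augmentation strategy, counterfactual swapping of gendered terms; the word list and swap rule are illustrative only, not the procedure used in this line of work.

```python
# Hypothetical sketch of counterfactual data augmentation:
# duplicate each training sentence with gendered terms swapped.
SWAPS = {"he": "she", "she": "he", "him": "her", "her": "him",
         "his": "hers", "hers": "his", "man": "woman", "woman": "man"}

def counterfactual(sentence):
    # A real implementation would be context-sensitive; this is word-level only.
    return " ".join(SWAPS.get(tok.lower(), tok) for tok in sentence.split())

data = ["he is a doctor", "she plays the violin"]
augmented = data + [counterfactual(s) for s in data]
print(augmented)
```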
1 code implementation • 27 Jan 2021 • Jwala Dhamala, Tony Sun, Varun Kumar, Satyapriya Krishna, Yada Pruksachatkun, Kai-Wei Chang, Rahul Gupta
To systematically study and benchmark social biases in open-ended language generation, we introduce the Bias in Open-Ended Language Generation Dataset (BOLD), a large-scale dataset that consists of 23,679 English text generation prompts for bias benchmarking across five domains: profession, gender, race, religion, and political ideology.
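A minimal sketch of how one might iterate over BOLD prompts, assuming the Hugging Face copy of the dataset at AlexaAI/bold and its published fields (domain, category, prompts); see https://github.com/amazon-science/bold for the canonical release.

```python
# Hypothetical sketch: iterate over BOLD prompts grouped by domain.
# Assumes the Hugging Face copy "AlexaAI/bold" and its fields
# "domain", "category", and "prompts"; the canonical release is on GitHub.
from collections import Counter
from datasets import load_dataset

bold = load_dataset("AlexaAI/bold", split="train")

domain_counts = Counter(example["domain"] for example in bold)
print(domain_counts)  # five domains: profession, gender, race, religion, political ideology

# Each prompt would be fed to a language model and its continuation scored.
for example in bold.select(range(3)):
    for prompt in example["prompts"]:
        print(example["domain"], example["category"], "->", prompt)
```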
no code implementations • EMNLP (insights) 2020 • Ansel MacLaughlin, Jwala Dhamala, Anoop Kumar, Sriram Venkatapathy, Ragav Venkatesan, Rahul Gupta
Neural Architecture Search (NAS) methods, which automatically learn entire neural model architectures or individual neural cell architectures, have recently achieved competitive or state-of-the-art (SOTA) performance on a variety of natural language processing and computer vision tasks, including language modeling, natural language inference, and image classification.
no code implementations • 18 Jul 2020 • Xiajun Jiang, Sandesh Ghimire, Jwala Dhamala, Zhiyuan Li, Prashnna Kumar Gyawali, Linwei Wang
However, many reconstruction problems involve imaging physics that are dependent on the underlying non-Euclidean geometry.
no code implementations • 2 Jun 2020 • Jwala Dhamala, John L. Sapp, B. Milan Horácek, Linwei Wang
However, because samples are drawn from an approximation of the exact posterior probability density function (pdf) of the parameters, this efficiency is gained at the expense of sampling accuracy.
no code implementations • 15 May 2020 • Jwala Dhamala, Sandesh Ghimire, John L. Sapp, B. Milan Horácek, Linwei Wang
The estimation of patient-specific tissue properties in the form of model parameters is important for personalized physiological models.
1 code implementation • 1 Jul 2019 • Jwala Dhamala, Sandesh Ghimire, John L. Sapp, B. Milan Horacek, Linwei Wang
In this paper, we present a novel graph convolutional VAE to allow generative modeling of non-Euclidean data, and utilize it to embed Bayesian optimization of large graphs into a small latent space.
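A minimal sketch of the overall pattern under simplified assumptions: a (graph-convolutional) VAE is trained offline, Bayesian optimization runs in its low-dimensional latent space, and the best latent point is decoded back to the full graph. The untrained stand-in decoder, toy objective, and UCB acquisition below are illustrative, not the paper's components.

```python
# Hypothetical sketch: Bayesian optimization in the latent space of a VAE
# whose decoder maps a few latent coordinates to node-level values on a graph.
import numpy as np
import torch
import torch.nn as nn
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

LATENT_DIM, NUM_NODES = 2, 500

decoder = nn.Sequential(          # stand-in for a trained graph-conv decoder
    nn.Linear(LATENT_DIM, 64), nn.ReLU(), nn.Linear(64, NUM_NODES)
)

def objective(z):
    """Expensive objective evaluated on the decoded, node-level parameters."""
    with torch.no_grad():
        params = decoder(torch.tensor(z, dtype=torch.float32))
    return float(-(params.mean() - 0.5) ** 2)   # toy score on the decoded field

rng = np.random.default_rng(0)
Z = rng.uniform(-2, 2, size=(5, LATENT_DIM))     # initial design in latent space
y = np.array([objective(z) for z in Z])
gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)

for _ in range(15):                              # BO loop over latent coordinates
    gp.fit(Z, y)
    cand = rng.uniform(-2, 2, size=(256, LATENT_DIM))
    mu, std = gp.predict(cand, return_std=True)
    z_next = cand[np.argmax(mu + 1.0 * std)]     # UCB acquisition
    Z = np.vstack([Z, z_next])
    y = np.append(y, objective(z_next))

best_z = Z[np.argmax(y)]
print("best latent point:", best_z, "score:", y.max())
```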
no code implementations • 12 May 2019 • Sandesh Ghimire, Jwala Dhamala, Prashnna Kumar Gyawali, John L. Sapp, B. Milan Horacek, Linwei Wang
We introduce a novel model-constrained inference framework that replaces conventional physiological models with a deep generative model trained to generate TMP sequences from low-dimensional generative factors.
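A minimal sketch of the inference pattern under simplified assumptions: estimate low-dimensional generative factors z so that a known forward operator applied to the decoded signal matches the measurements. The untrained stand-in generator and random forward operator are illustrative only, not the paper's trained TMP model.

```python
# Hypothetical sketch of inference constrained by a generative model:
# find factors z such that H @ G(z) matches body-surface measurements y.
import torch
import torch.nn as nn

torch.manual_seed(0)
LATENT_DIM, SIGNAL_DIM, MEAS_DIM = 8, 200, 50

G = nn.Sequential(nn.Linear(LATENT_DIM, 128), nn.Tanh(), nn.Linear(128, SIGNAL_DIM))
H = torch.randn(MEAS_DIM, SIGNAL_DIM) / SIGNAL_DIM ** 0.5    # known forward operator
y = torch.randn(MEAS_DIM)                                    # observed measurements

z = torch.zeros(LATENT_DIM, requires_grad=True)
optimizer = torch.optim.Adam([z], lr=0.05)

for step in range(500):
    optimizer.zero_grad()
    residual = H @ G(z) - y                                  # data fit in measurement space
    loss = residual.pow(2).mean() + 1e-3 * z.pow(2).mean()   # simple prior on z
    loss.backward()
    optimizer.step()

print("final loss:", float(loss), "estimated factors:", z.detach())
```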
1 code implementation • 5 Mar 2019 • Sandesh Ghimire, Prashnna Kumar Gyawali, Jwala Dhamala, John L. Sapp, Milan Horacek, Linwei Wang
Deep learning networks have shown state-of-the-art performance in many image reconstruction problems.
no code implementations • 14 Nov 2018 • Jwala Dhamala, Emmanuel Azuh, Abdullah Al-Dujaili, Jonathan Rubin, Una-May O'Reilly
Timely prediction of clinically critical events in the Intensive Care Unit (ICU) is important for improving care and survival rates.