3 code implementations • 2 Mar 2020 • Chirag Nagpal, Xinyu Rachel Li, Artur Dubrawski
We describe a new approach to estimating relative risks in time-to-event prediction problems with censored data in a fully parametric manner.
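To illustrate what "fully parametric" buys in this setting: once each group's time-to-event distribution has a closed form, relative risks at any horizon follow directly from the survival functions. A minimal sketch using a Weibull survival model with hypothetical parameters (not the paper's actual model or estimates):

```python
import math

def weibull_survival(t, scale, shape):
    # S(t) = exp(-(t/scale)^shape): probability the event has not occurred by time t.
    return math.exp(-((t / scale) ** shape))

def relative_risk(t, params_a, params_b):
    # Event risk by horizon t is F(t) = 1 - S(t); with fully parametric
    # survival functions, the risk ratio is available in closed form.
    risk_a = 1.0 - weibull_survival(t, *params_a)
    risk_b = 1.0 - weibull_survival(t, *params_b)
    return risk_a / risk_b

# Hypothetical groups: A has a smaller scale (earlier events) than B.
rr = relative_risk(5.0, (4.0, 1.5), (8.0, 1.5))  # > 1: A is at higher risk by t = 5
```

The point of the sketch is that no baseline-hazard estimation is needed: the risk ratio at any horizon is a deterministic function of the fitted parameters.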
4 code implementations • 16 Jan 2021 • Chirag Nagpal, Steve Yadlowsky, Negar Rostamzadeh, Katherine Heller
Survival analysis is a challenging variation of regression modeling because of the presence of censoring, where the outcome measurement is only partially known, due to, for example, loss to follow-up.

2 code implementations • 22 Feb 2022 • Chirag Nagpal, Mononito Goswami, Keith Dufendach, Artur Dubrawski
Estimation of treatment efficacy of real-world clinical interventions involves working with continuous outcomes such as time-to-death, re-hospitalization, or a composite event that may be subject to censoring.
2 code implementations • 15 Apr 2022 • Chirag Nagpal, Willa Potosnak, Artur Dubrawski
Applications of machine learning in healthcare often require working with time-to-event prediction tasks, including prognostication of an adverse event, re-hospitalization, or death.
1 code implementation • 14 May 2019 • Chirag Nagpal, Rohan Sangave, Amit Chahar, Parth Shah, Artur Dubrawski, Bhiksha Raj
Semi-parametric survival analysis methods like the Cox Proportional Hazards (CPH) regression (Cox, 1972) are a popular approach for survival analysis.
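The semi-parametric character of CPH comes from its partial likelihood, which compares each observed event against the risk set of subjects still under observation, without ever modeling the baseline hazard. A minimal pure-Python sketch on toy data (illustrative only, not the paper's implementation):

```python
import math

def neg_log_partial_likelihood(beta, times, events, x):
    # Cox (1972): each event contributes its relative hazard against the
    # risk set {j : t_j >= t_i}; censored subjects enter only via risk sets.
    nll = 0.0
    for i in range(len(times)):
        if not events[i]:
            continue  # censored observation: no event term of its own
        risk_set = [math.exp(beta * x[j])
                    for j in range(len(times)) if times[j] >= times[i]]
        nll -= beta * x[i] - math.log(sum(risk_set))
    return nll

# Toy data where larger covariate values coincide with earlier events.
times  = [2.0, 3.0, 5.0, 7.0]
events = [1, 1, 0, 1]          # 0 = censored
x      = [1.0, 0.8, 0.1, 0.0]
# A positive hazard coefficient fits this pattern better than beta = 0.
```

In practice one would minimize this objective numerically (e.g. Newton's method), which is what standard CPH fitters do under the hood.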
no code implementations • 23 Jun 2017 • Abhilasha Ravichander, Shruti Rijhwani, Rajat Kulshreshtha, Chirag Nagpal, Tadas Baltrušaitis, Louis-Philippe Morency
In this work, we focus on improving learning for such hierarchical models and demonstrate our method on the task of speaker trait prediction.
no code implementations • 8 May 2019 • Chirag Nagpal, Dennis Wei, Bhanukiran Vinzamuri, Monica Shekhar, Sara E. Berger, Subhro Das, Kush R. Varshney
The dearth of prescribing guidelines for physicians is one key driver of the current opioid epidemic in the United States.
no code implementations • 14 Apr 2020 • Chirag Nagpal, Robert E. Tillman, Prashant Reddy, Manuela Veloso
We consider the problem of aggregating predictions or measurements from a set of human forecasters, models, sensors or other instruments which may be subject to bias or miscalibration and random heteroscedastic noise.
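A classical baseline for this aggregation setting, useful as a point of reference (and not the model proposed in the paper), is to debias each source and then combine with inverse-variance weights, so that noisier, more heteroscedastic sources contribute less. A minimal sketch with hypothetical forecaster values:

```python
def aggregate(estimates, biases, variances):
    # Subtract each source's estimated bias, then precision-weight:
    # weight_i = 1 / variance_i (inverse-variance / fixed-effects combination).
    corrected = [e - b for e, b in zip(estimates, biases)]
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    return sum(w * c for w, c in zip(weights, corrected)) / total

# Three hypothetical forecasters of the same quantity (true value ~10),
# with known biases and noise levels:
est = aggregate([11.0, 9.5, 13.0], [1.0, -0.5, 3.0], [1.0, 0.5, 4.0])
```

The harder problem the paper targets is that the biases and variances are not known in advance and must themselves be inferred from the data.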
no code implementations • 24 Feb 2023 • Chirag Nagpal, Vedant Sanil, Artur Dubrawski
In this paper we propose a statistical approach to recovering sparse phenogroups (or subtypes) that demonstrate differential treatment effects as compared to the study population.
no code implementations • 14 Dec 2023 • Jacob Eisenstein, Chirag Nagpal, Alekh Agarwal, Ahmad Beirami, Alex D'Amour, DJ Dvijotham, Adam Fisch, Katherine Heller, Stephen Pfohl, Deepak Ramachandran, Peter Shaw, Jonathan Berant
However, even pretrain reward ensembles do not eliminate reward hacking: we show several qualitative reward hacking phenomena that are not mitigated by ensembling because all reward models in the ensemble exhibit similar error patterns.
no code implementations • 3 Jan 2024 • Ahmad Beirami, Alekh Agarwal, Jonathan Berant, Alexander D'Amour, Jacob Eisenstein, Chirag Nagpal, Ananda Theertha Suresh
A commonly used analytical expression in the literature claims that the KL divergence between the best-of-$n$ policy and the base policy is equal to $\log(n) - (n-1)/n$. We disprove the validity of this claim, and show that it is an upper bound on the actual KL divergence.
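The gap between the formula and the true divergence can be checked exactly on a small discrete example: for outcomes with distinct rewards, the best-of-$n$ policy puts probability $F(x)^n - F(x^-)^n$ on each outcome, where $F$ is the base CDF in reward order. A sketch with a toy two-outcome base policy:

```python
import math

def best_of_n_policy(p, rewards, n):
    # Best-of-n draws n i.i.d. samples from p and keeps the max-reward one.
    # P(outcome i wins) = F(i)^n - F(i^-)^n with F the CDF in reward order.
    order = sorted(range(len(p)), key=lambda i: rewards[i])
    pi = [0.0] * len(p)
    cum = 0.0
    for i in order:
        prev = cum
        cum += p[i]
        pi[i] = cum ** n - prev ** n
    return pi

def kl(q, p):
    return sum(qi * math.log(qi / pi) for qi, pi in zip(q, p) if qi > 0)

p = [0.5, 0.5]            # uniform base policy over two responses
rewards = [0.0, 1.0]
n = 2
exact = kl(best_of_n_policy(p, rewards, n), p)  # ~0.131
formula = math.log(n) - (n - 1) / n             # ~0.193
# exact < formula: the commonly cited expression overstates the KL divergence.
```

Here the true KL is strictly below $\log(n) - (n-1)/n$, consistent with the claim that the expression is only an upper bound.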
no code implementations • 1 Feb 2024 • ZiHao Wang, Chirag Nagpal, Jonathan Berant, Jacob Eisenstein, Alex D'Amour, Sanmi Koyejo, Victor Veitch
A common approach for aligning language models to human preferences is to first learn a reward model from preference data, and then use this reward model to update the language model.
no code implementations • 20 Feb 2024 • Kristian Lum, Jacy Reese Anthis, Chirag Nagpal, Alexander D'Amour
In this work, we study the correspondence between such decontextualized "trick tests" and evaluations that are more grounded in Realistic Use and Tangible Effects (i.e., RUTEd evaluations).
no code implementations • 5 Mar 2024 • Mercy Asiedu, Awa Dieng, Iskandar Haykel, Negar Rostamzadeh, Stephen Pfohl, Chirag Nagpal, Maria Nagawa, Abigail Oppong, Sanmi Koyejo, Katherine Heller
Whereas experts generally expressed a shared view about the relevance of colonial history for the development and implementation of AI technologies in Africa, the majority of the general population participants surveyed did not think there was a direct link between AI and colonialism.
no code implementations • 18 Mar 2024 • Stephen R. Pfohl, Heather Cole-Lewis, Rory Sayres, Darlene Neal, Mercy Asiedu, Awa Dieng, Nenad Tomasev, Qazi Mamunur Rashid, Shekoofeh Azizi, Negar Rostamzadeh, Liam G. McCoy, Leo Anthony Celi, Yun Liu, Mike Schaekermann, Alanna Walton, Alicia Parrish, Chirag Nagpal, Preeti Singh, Akeiylah Dewitt, Philip Mansfield, Sushant Prakash, Katherine Heller, Alan Karthikesalingam, Christopher Semturs, Joelle Barral, Greg Corrado, Yossi Matias, Jamila Smith-Loud, Ivor Horn, Karan Singhal
Large language models (LLMs) hold immense promise to serve complex health information needs but also have the potential to introduce harm and exacerbate health disparities.