no code implementations • 8 May 2022 • Hammaad Adam, Ming Ying Yang, Kenrick Cato, Ioana Baldini, Charles Senteio, Leo Anthony Celi, Jiaming Zeng, Moninder Singh, Marzyeh Ghassemi
In this study, we investigate the level of implicit race information available to ML models and human experts and the implications of model-detectable differences in clinical notes.
no code implementations • 9 Mar 2022 • Karan Bhanot, Ioana Baldini, Dennis Wei, Jiaming Zeng, Kristin P. Bennett
In this paper, we evaluate the fairness of models generated on two healthcare datasets for gender and race biases.
no code implementations • 29 Nov 2018 • Jiaming Zeng, Adam Lesnikowski, Jose M. Alvarez
One of the main challenges of deep learning tools is their inability to capture model uncertainty.
no code implementations • 26 Mar 2015 • Jiaming Zeng, Berk Ustun, Cynthia Rudin
We investigate a long-debated question: how to create predictive models of recidivism that are sufficiently accurate, transparent, and interpretable for use in decision-making.