no code implementations • 11 Feb 2025 • Dongkyu Cho, Taesup Moon, Rumi Chunara, Kyunghyun Cho, Sungmin Cha
Continual learning (CL) research typically assumes highly constrained exemplar memory resources.
no code implementations • 15 Mar 2024 • Miao Zhang, Rumi Chunara
Performance disparities of image recognition across different demographic populations are known to exist in deep learning-based models, but previous work has largely addressed such fairness problems by assuming knowledge of sensitive attribute labels.
no code implementations • 13 Mar 2024 • Vishwali Mhasawade, Rumi Chunara
We study this issue of missing mediators, motivated by challenges in public health, wherein mediators can be missing not at random.
no code implementations • 14 Feb 2024 • Salman Rahman, Lavender Yao Jiang, Saadia Gabriel, Yindalon Aphinyanaphongs, Eric Karl Oermann, Rumi Chunara
Overall, this study provides new insights for enhancing the deployment of large language models in the societally important domain of healthcare, and improving their performance for broader populations.
no code implementations • 8 Feb 2024 • Miao Zhang, Salman Rahman, Vishwali Mhasawade, Rumi Chunara
Relevant to such uses, important examples of bias in the use of AI are evident when decision-making based on data fails to account for the robustness of the data, or when predictions are based on spurious correlations.
no code implementations • 25 Jan 2024 • Vishwali Mhasawade, Salman Rahman, Zoe Haskell-Craig, Rumi Chunara
Previous work has highlighted that existing post-hoc explanation methods exhibit disparities in explanation fidelity (across 'race' and 'gender' as sensitive attributes). While a large body of work focuses on mitigating these issues at the explanation-metric level, the role of the data generating process and the black-box model in relation to explanation disparities remains largely unexplored.
1 code implementation • 7 Dec 2023 • Harvineet Singh, Fan Xia, Mi-Ok Kim, Romain Pirracchio, Rumi Chunara, Jean Feng
In fairness audits, a standard objective is to detect whether a given algorithm performs substantially differently between subgroups.
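A minimal illustrative sketch of this kind of audit (not the paper's procedure; function and parameter names are hypothetical): estimate the performance gap between a subgroup and its complement, with a bootstrap interval to flag a substantial difference.

```python
# Hypothetical sketch: subgroup accuracy gap with a 95% bootstrap CI.
import numpy as np

def subgroup_gap(y_true, y_pred, group_mask, n_boot=2000, seed=0):
    """Accuracy gap (subgroup minus complement) with a bootstrap interval."""
    rng = np.random.default_rng(seed)
    correct = (y_true == y_pred).astype(float)
    gaps = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(correct), len(correct))
        c, g = correct[idx], group_mask[idx]
        if g.any() and (~g).any():            # need both groups in the resample
            gaps.append(c[g].mean() - c[~g].mean())
    lo, hi = np.percentile(gaps, [2.5, 97.5])
    point = correct[group_mask].mean() - correct[~group_mask].mean()
    return point, (lo, hi)

# Example: flag the subgroup if the interval excludes zero.
# point, ci = subgroup_gap(y, y_hat, group == "A")
```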
1 code implementation • 16 Nov 2022 • Miao Zhang, Rumi Chunara
We propose fair dense representation with contrastive learning (FairDCL) as a method for de-biasing the multi-level latent space of convolutional neural network models.
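As a rough illustration of de-biasing multi-level intermediate features, here is a simple group mean-discrepancy penalty applied per network stage; this is a swapped-in stand-in for illustration only, not FairDCL's contrastive objective, and the function name is hypothetical.

```python
# Illustrative de-biasing regularizer on multi-level CNN features (not FairDCL).
import torch
import torch.nn.functional as F

def multilevel_debias_penalty(feature_maps, group):
    """feature_maps: list of (B, C, H, W) tensors from several network stages.
    group: (B,) tensor of binary sensitive-attribute labels (both present in batch).
    Small when group-conditional feature means match at every level."""
    penalty = 0.0
    for fmap in feature_maps:
        z = F.normalize(fmap.mean(dim=(2, 3)), dim=1)   # pooled, unit-norm embedding
        mu_a = z[group == 0].mean(dim=0)
        mu_b = z[group == 1].mean(dim=0)
        penalty = penalty + (mu_a - mu_b).pow(2).sum()   # discrepancy at this level
    return penalty

# Used as: loss = task_loss + lambda_fair * multilevel_debias_penalty(feats, g)
```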
no code implementations • 9 Apr 2022 • Miao Zhang, Harvineet Singh, Lazarus Chok, Rumi Chunara
This work highlights the need to conduct fairness analyses for satellite imagery segmentation models and motivates the development of methods for fair transfer learning that do not introduce disparities between places, particularly between urban and rural locations.
no code implementations • 15 Nov 2020 • Umang Bhatt, Javier Antorán, Yunfeng Zhang, Q. Vera Liao, Prasanna Sattigeri, Riccardo Fogliato, Gabrielle Gauthier Melançon, Ranganath Krishnan, Jason Stanley, Omesh Tickoo, Lama Nachman, Rumi Chunara, Madhulika Srikumar, Adrian Weller, Alice Xiang
Explainability attempts to provide reasons for a machine learning model's behavior to stakeholders.
no code implementations • 14 Oct 2020 • Vishwali Mhasawade, Rumi Chunara
While work in algorithmic fairness to date has primarily focused on addressing discrimination due to individually linked attributes, social science research elucidates how some properties we link to individuals can be conceptualized as having causes at macro (e.g., structural) levels, and it may be important to be fair to attributes at multiple levels.
no code implementations • 21 Jul 2020 • Vishwali Mhasawade, Yuan Zhao, Rumi Chunara
Research in population and public health focuses on the mechanisms between different cultural, social, and environmental factors and their effect on the health of not just individuals, but communities as a whole.
no code implementations • 2 Nov 2019 • Harvineet Singh, Rina Singh, Vishwali Mhasawade, Rumi Chunara
We study the problem of learning fair prediction models for unseen test sets distributed differently from the training set.
1 code implementation • 24 Aug 2019 • Vishwali Mhasawade, Nabeel Abdur Rehman, Rumi Chunara
Based on sources of stability in the model, we posit that, for human-sourced data and health prediction tasks, we can combine environment and population information in a novel population-aware hierarchical Bayesian domain adaptation framework that harnesses multiple invariant components through population attributes when needed.
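A rough sketch of the partial-pooling intuition behind a population-aware hierarchical model (an illustration of hierarchical shrinkage under assumed variances, not the paper's full Bayesian domain adaptation framework; names are hypothetical): population-level estimates are pulled toward a global estimate in proportion to how much data each population contributes.

```python
# Hypothetical sketch: per-population estimates partially pooled toward a global mean.
import numpy as np

def shrunken_population_means(y, population, tau2=1.0):
    """y: outcomes; population: integer population id per sample.
    tau2: assumed between-population variance (a modeling choice).
    Returns per-population means shrunk toward the global mean."""
    global_mean = y.mean()
    sigma2 = y.var()  # within-population noise, assumed shared for simplicity
    estimates = {}
    for p in np.unique(population):
        y_p = y[population == p]
        n_p = len(y_p)
        # Posterior mean under a normal-normal model: weight grows with n_p.
        w = (n_p / sigma2) / (n_p / sigma2 + 1.0 / tau2)
        estimates[p] = w * y_p.mean() + (1 - w) * global_mean
    return estimates
```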
no code implementations • 24 Aug 2019 • Mohammad Akbari, Rumi Chunara
Person-generated data sources, such as actively contributed surveys as well as passively mined data from social media, offer an opportunity to capture such context; however, the self-reported nature and sparsity of such data mean that they are noisier and less specific than physiological measures such as blood glucose values themselves.
no code implementations • 3 Apr 2019 • Nabeel Abdur Rehman, Umar Saif, Rumi Chunara
We then incorporate landscape features from satellite image data from Pakistan, labelled using the CNN, in a well-known Susceptible-Infectious-Recovered (SIR) epidemic model, alongside dengue case data from 2012-2016 in Pakistan.
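A minimal sketch of the general idea (hypothetical function and parameter names; the paper's calibrated model and data pipeline are more involved): a discrete-time SIR model whose transmission rate is modulated by a satellite-derived landscape score.

```python
# Hypothetical sketch: SIR dynamics with landscape-modulated transmission.
import numpy as np

def sir_with_landscape(S0, I0, R0, beta0, gamma, landscape_score, alpha, days):
    """Simple discrete-time SIR. landscape_score in [0, 1] scales transmission:
    beta = beta0 * (1 + alpha * landscape_score)."""
    N = S0 + I0 + R0
    beta = beta0 * (1 + alpha * landscape_score)
    S, I, R = [S0], [I0], [R0]
    for _ in range(days):
        new_inf = beta * S[-1] * I[-1] / N   # new infections this step
        new_rec = gamma * I[-1]              # new recoveries this step
        S.append(S[-1] - new_inf)
        I.append(I[-1] + new_inf - new_rec)
        R.append(R[-1] + new_rec)
    return np.array(S), np.array(I), np.array(R)

# Example: a higher landscape score raises effective transmission.
# S, I, R = sir_with_landscape(1e6, 10, 0, beta0=0.3, gamma=0.1,
#                              landscape_score=0.7, alpha=0.5, days=120)
```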
no code implementations • 3 Dec 2018 • Mohammad Akbari, Kunal Relia, Anas Elghafari, Rumi Chunara
Online communities provide a unique way for individuals to access information from those in similar circumstances, which can be critical for health conditions that require daily and personalized management.
no code implementations • 21 Nov 2018 • Vishwali Mhasawade, Nabeel Abdur Rehman, Rumi Chunara
Population attributes are essential in health for understanding who the data represent and for precision medicine efforts.
no code implementations • 22 Jun 2018 • Nabeel Abdur Rehman, Maxwell Matthaios Aliapoulios, Disha Umarwani, Rumi Chunara
Acute respiratory infections have epidemic and pandemic potential and thus are being studied worldwide, albeit in many different contexts and study formats.