While work in algorithmic fairness to date has primarily focused on addressing discrimination due to individually linked attributes, social science research elucidates how some properties we link to individuals can be conceptualized as having causes at the macro (e.g., structural) level, and it may be important to be fair with respect to attributes at multiple levels.
Research in population and public health focuses on the mechanisms linking cultural, social, and environmental factors to the health of not just individuals but communities as a whole.
We study the problem of learning fair prediction models for unseen test sets distributed differently from the training set.
Based on sources of stability in the model, we posit that for human-sourced data and health prediction tasks we can combine environment and population information in a novel population-aware hierarchical Bayesian domain adaptation framework, which harnesses multiple invariant components through population attributes when needed.
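The core intuition of combining population-level and shared (invariant) components can be illustrated with a minimal partial-pooling sketch. This is a hypothetical toy example, not the paper's actual model: it uses a simple empirical-Bayes hierarchical mean estimate, where each population's estimate is shrunk toward a shared global component, so sparsely observed populations fall back on the invariant information.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: outcomes grouped by population ("environment"),
# modeled hierarchically: theta_p ~ Normal(mu, tau^2), y ~ Normal(theta_p, sigma^2).
n_pops, n_per = 4, 30
mu_true, tau, sigma = 1.0, 0.5, 1.0
theta_true = rng.normal(mu_true, tau, n_pops)
data = [rng.normal(t, sigma, n_per) for t in theta_true]

# Empirical-Bayes partial pooling: shrink each population mean toward the
# grand mean, weighted by between- vs. within-population variance.
means = np.array([d.mean() for d in data])
grand = means.mean()
between = max(means.var(ddof=1) - sigma**2 / n_per, 1e-6)  # crude tau^2 estimate
w = between / (between + sigma**2 / n_per)                 # shrinkage weight in (0, 1)
theta_hat = grand + w * (means - grand)

# Each population-specific estimate lies between its raw mean and the shared
# component -- the intuition behind borrowing strength across populations.
print(np.round(theta_hat, 3))
```

The shrinkage weight `w` plays the role of deciding how much population-specific information to trust versus the shared component; in a full hierarchical Bayesian treatment this trade-off is inferred jointly with the other parameters rather than plugged in.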
Population attributes are essential in health for understanding whom the data represent and for precision medicine efforts.