Search Results for author: Kamrun Naher Keya

Found 6 papers, 2 papers with code

An Intersectional Definition of Fairness

2 code implementations • 22 Jul 2018 • James Foulds, Rashidul Islam, Kamrun Naher Keya, Shimei Pan

We propose definitions of fairness in machine learning and artificial intelligence systems that are informed by the framework of intersectionality, a critical lens arising from the Humanities literature which analyzes how interlocking systems of power and oppression affect individuals along overlapping dimensions including gender, race, sexual orientation, class, and disability.

BIG-bench Machine Learning • Fairness
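The fairness criterion this paper proposes bounds the ratio of outcome probabilities across every pair of intersecting protected groups (e.g., gender × race), with a parameter ε controlling the allowed gap. As a minimal sketch of that idea, not the authors' released implementation (the function name, smoothing constant, and toy data below are assumptions), one can estimate an empirical ε from binary predictions and intersectional group labels:

```python
import numpy as np

def empirical_epsilon_df(y_pred, groups, concentration=1.0):
    """Estimate an empirical epsilon for differential fairness: the largest
    gap in log P(y_pred = y | group) over all pairs of intersectional groups.
    Dirichlet-style smoothing via `concentration` avoids log(0) in tiny groups."""
    y_pred = np.asarray(y_pred)
    groups = np.asarray(groups)
    # Smoothed per-group rates of the positive outcome.
    rates = np.array([
        (y_pred[groups == g].sum() + concentration)
        / ((groups == g).sum() + 2 * concentration)
        for g in np.unique(groups)
    ])
    log_pos = np.log(rates)      # log P(y_pred = 1 | group)
    log_neg = np.log1p(-rates)   # log P(y_pred = 0 | group)
    return max(log_pos.max() - log_pos.min(), log_neg.max() - log_neg.min())

# Toy check: group labels encode intersections such as gender x race.
y_hat = np.array([1, 0, 1, 1, 0, 0, 1, 0])
g = np.array(["A", "A", "A", "B", "B", "B", "B", "A"])
print(empirical_epsilon_df(y_hat, g))  # 0 would mean perfect parity
```

Smaller ε means the classifier's behavior is closer to parity across all intersectional groups simultaneously, rather than only along one protected attribute at a time.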

Fair Representation Learning for Heterogeneous Information Networks

1 code implementation • 18 Apr 2021 • Ziqian Zeng, Rashidul Islam, Kamrun Naher Keya, James Foulds, Yangqiu Song, Shimei Pan

Recently, much attention has been paid to the societal impact of AI, especially concerns regarding its fairness.

Fairness • Representation Learning

Neural Embedding Allocation: Distributed Representations of Topic Models

no code implementations • 10 Sep 2019 • Kamrun Naher Keya, Yannis Papanikolaou, James R. Foulds

Word embedding models such as skip-gram learn vector representations of words' semantic relationships, and document embedding models learn similar representations for documents.

Document Embedding • Topic Models
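For context on the skip-gram family the abstract references (this illustrates the baseline word embedding model, not the paper's Neural Embedding Allocation method; the toy corpus and hyperparameters are assumptions), a minimal example with gensim:

```python
from gensim.models import Word2Vec

# Toy corpus: each document is a list of tokens.
corpus = [
    ["fair", "machine", "learning", "models"],
    ["topic", "models", "learn", "latent", "themes"],
    ["word", "embeddings", "capture", "semantic", "relationships"],
]

# sg=1 selects the skip-gram objective mentioned in the abstract.
model = Word2Vec(corpus, vector_size=32, window=2, min_count=1, sg=1, epochs=50)

# Nearest neighbors in the learned vector space.
print(model.wv.most_similar("models", topn=3))
```

Skip-gram trains each word's vector to predict its context words, so words appearing in similar contexts end up with nearby vectors; the paper builds on this style of distributed representation to reinterpret topic models.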

Neural Fair Collaborative Filtering

no code implementations • 2 Sep 2020 • Rashidul Islam, Kamrun Naher Keya, Ziqian Zeng, Shimei Pan, James Foulds

A growing proportion of human interactions are digitized on social media platforms and subjected to algorithmic decision-making, and it has become increasingly important to ensure fair treatment from these algorithms.

Collaborative Filtering • Decision Making • +1

Equitable Allocation of Healthcare Resources with Fair Cox Models

no code implementations • 14 Oct 2020 • Kamrun Naher Keya, Rashidul Islam, Shimei Pan, Ian Stockwell, James R. Foulds

Healthcare programs such as Medicaid provide crucial services to vulnerable populations, but due to limited resources, many of the individuals who need these services the most languish on waiting lists.

Fairness

User Acceptance of Gender Stereotypes in Automated Career Recommendations

no code implementations • 13 Jun 2021 • Clarice Wang, Kathryn Wang, Andrew Bian, Rashidul Islam, Kamrun Naher Keya, James Foulds, Shimei Pan

In other words, our results demonstrate that we cannot fully address the gender bias issue in AI recommendations without addressing the gender bias in humans.

BIG-bench Machine Learning
