Search Results for author: Yuriy Brun

Found 7 papers, 3 papers with code

Enforcing Delayed-Impact Fairness Guarantees

no code implementations 24 Aug 2022 Aline Weber, Blossom Metevier, Yuriy Brun, Philip S. Thomas, Bruno Castro da Silva

Recent research has shown that seemingly fair machine learning models, when used to inform decisions that have an impact on people's lives or well-being (e.g., applications involving education, employment, and lending), can inadvertently increase social inequality in the long term.

Fairness

Fairness Guarantees under Demographic Shift

no code implementations ICLR 2022 Stephen Giguere, Blossom Metevier, Yuriy Brun, Philip S. Thomas, Scott Niekum, Bruno Castro da Silva

Recent studies have demonstrated that using machine learning for social applications can lead to injustice in the form of racist, sexist, and otherwise unfair and discriminatory outcomes.

Fairness

Blindspots in Python and Java APIs Result in Vulnerable Code

no code implementations 10 Mar 2021 Yuriy Brun, Tian Lin, Jessie Elise Somerville, Elisha Myers, Natalie C. Ebner

We find that using APIs with blindspots statistically significantly reduces the developers' ability to correctly reason about the APIs in both languages, but that the effect is more pronounced for Python.
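The kind of API blindspot at issue can be illustrated with a common Python example (a hypothetical illustration, not necessarily one of the paper's study tasks): building a shell command by string interpolation invites command injection, while quoting untrusted arguments does not.

```python
import shlex

def build_command_unsafe(pattern, path):
    # Blindspot: f-string interpolation into a shell command string.
    # A pattern like "x; rm -rf ~" smuggles in a second command.
    return f"grep {pattern} {path}"

def build_command_safe(pattern, path):
    # Quote untrusted arguments before they ever reach a shell.
    return f"grep {shlex.quote(pattern)} {shlex.quote(path)}"

print(build_command_unsafe("x; rm -rf ~", "log.txt"))
print(build_command_safe("x; rm -rf ~", "log.txt"))
```

The two functions look nearly identical, which is exactly what makes such blindspots easy to miss in code review.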

Software Engineering, Cryptography and Security

Fairkit, Fairkit, on the Wall, Who's the Fairest of Them All? Supporting Data Scientists in Training Fair Models

1 code implementation 17 Dec 2020 Brittany Johnson, Jesse Bartola, Rico Angell, Katherine Keith, Sam Witty, Stephen J. Giguere, Yuriy Brun

To address bias in machine learning, data scientists need tools that help them understand the trade-offs between model quality and fairness in their specific data domains.
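The quality/fairness trade-off such a tool surfaces can be sketched in a few lines (a hypothetical evaluation loop, not fairkit-learn's actual API): score each candidate model's predictions on accuracy and on a simple group-fairness metric such as the demographic parity gap, then compare.

```python
def accuracy(preds, labels):
    # Fraction of predictions that match the true labels.
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

def demographic_parity_gap(preds, groups):
    # Absolute difference in positive-prediction rates between groups 0 and 1.
    def rate(g):
        members = [p for p, grp in zip(preds, groups) if grp == g]
        return sum(members) / len(members)
    return abs(rate(0) - rate(1))

# Hypothetical predictions from two candidate models on the same data.
labels  = [1, 0, 1, 0, 1, 0]
groups  = [0, 0, 0, 1, 1, 1]
model_a = [1, 0, 1, 0, 0, 0]  # less accurate, larger parity gap
model_b = [1, 0, 1, 0, 1, 0]  # more accurate, smaller parity gap

for name, preds in [("A", model_a), ("B", model_b)]:
    print(name, accuracy(preds, labels), demographic_parity_gap(preds, groups))
```

A data scientist would pick among such candidates based on which point on the accuracy/fairness frontier suits the domain.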

BIG-bench Machine Learning, Fairness

Fairness Testing: Testing Software for Discrimination

no code implementations 11 Sep 2017 Sainyam Galhotra, Yuriy Brun, Alexandra Meliou

This paper defines software fairness and discrimination and develops a testing-based method for measuring if and how much software discriminates, focusing on causality in discriminatory behavior.
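The causal-testing idea can be sketched as follows (a minimal illustration, not the paper's actual tool): hold every other input fixed, vary only the sensitive attribute, and count how often the software's decision changes.

```python
def causal_discrimination_rate(decide, inputs, sensitive_values):
    # Fraction of inputs whose decision changes when only the
    # sensitive attribute is altered, all other fields held fixed.
    flipped = 0
    for x in inputs:
        outcomes = {decide({**x, "sensitive": v}) for v in sensitive_values}
        flipped += len(outcomes) > 1
    return flipped / len(inputs)

# Hypothetical decision function that (wrongly) keys on the attribute.
decide = lambda x: x["score"] > 50 and x["sensitive"] == "A"

inputs = [{"score": s} for s in (30, 60, 90)]
print(causal_discrimination_rate(decide, inputs, ["A", "B"]))
```

Here the decision flips for two of the three inputs, so the measured causal discrimination rate is 2/3; a fair decision function would score 0.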

Fairness

Effectiveness of Anonymization in Double-Blind Review

1 code implementation 5 Sep 2017 Claire Le Goues, Yuriy Brun, Sven Apel, Emery Berger, Sarfraz Khurshid, Yannis Smaragdakis

Double-blind review relies on the authors' ability and willingness to effectively anonymize their submissions.

Digital Libraries, General Literature, Software Engineering
