Search Results for author: Reuben Binns

Found 9 papers, 2 papers with code

Unlawful Proxy Discrimination: A Framework for Challenging Inherently Discriminatory Algorithms

no code implementations • 22 Apr 2024 Hilde Weerts, Aislinn Kelly-Lyth, Reuben Binns, Jeremias Adams-Prassl

In this paper, we focus on the most likely candidate for direct discrimination in the algorithmic context, termed inherent direct discrimination, where a proxy is inextricably linked to a protected characteristic.

Decision Making

Exploring Design and Governance Challenges in the Development of Privacy-Preserving Computation

no code implementations • 20 Jan 2021 Nitin Agrawal, Reuben Binns, Max Van Kleek, Kim Laine, Nigel Shadbolt

Homomorphic encryption, secure multi-party computation, and differential privacy are part of an emerging class of Privacy Enhancing Technologies which share a common promise: to preserve privacy whilst also obtaining the benefits of computational analysis.

Human-Computer Interaction

On the Apparent Conflict Between Individual and Group Fairness

no code implementations • 14 Dec 2019 Reuben Binns

It draws on theoretical discussions from within fair machine learning research, and from political and legal philosophy, to argue that individual and group fairness are not fundamentally in conflict.

BIG-bench Machine Learning Fairness +2

Algorithms that Remember: Model Inversion Attacks and Data Protection Law

no code implementations • 12 Jul 2018 Michael Veale, Reuben Binns, Lilian Edwards

Many individuals are concerned about the governance of machine learning systems and the prevention of algorithmic harms.

Some HCI Priorities for GDPR-Compliant Machine Learning

no code implementations • 16 Mar 2018 Michael Veale, Reuben Binns, Max Van Kleek

In this short paper, we consider the roles of HCI in enabling the better governance of consequential machine learning systems using the rights and obligations laid out in the recent 2016 EU General Data Protection Regulation (GDPR)---a law which involves heavy interaction with people and systems.

BIG-bench Machine Learning Fairness

Fairness and Accountability Design Needs for Algorithmic Support in High-Stakes Public Sector Decision-Making

no code implementations • 3 Feb 2018 Michael Veale, Max Van Kleek, Reuben Binns

Calls for heightened consideration of fairness and accountability in algorithmically-informed public decisions---like taxation, justice, and child protection---are now commonplace.

BIG-bench Machine Learning Decision Making +1

Fairness in Machine Learning: Lessons from Political Philosophy

no code implementations • 10 Dec 2017 Reuben Binns

What does it mean for a machine learning model to be 'fair', in terms which can be operationalised?

Computers and Society

Like trainer, like bot? Inheritance of bias in algorithmic content moderation

1 code implementation • 5 Jul 2017 Reuben Binns, Michael Veale, Max Van Kleek, Nigel Shadbolt

This paper provides some exploratory methods by which the normative biases of algorithmic content moderation systems can be measured, by way of a case study using an existing dataset of comments labelled for offensiveness.