no code implementations • 20 Jan 2021 • Nitin Agrawal, Reuben Binns, Max Van Kleek, Kim Laine, Nigel Shadbolt
Homomorphic encryption, secure multi-party computation, and differential privacy are part of an emerging class of Privacy Enhancing Technologies which share a common promise: to preserve privacy whilst also obtaining the benefits of computational analysis.
3 code implementations • 25 May 2020 • Carmela Troncoso, Mathias Payer, Jean-Pierre Hubaux, Marcel Salathé, James Larus, Edouard Bugnion, Wouter Lueks, Theresa Stadler, Apostolos Pyrgelis, Daniele Antonioli, Ludovic Barman, Sylvain Chatel, Kenneth Paterson, Srdjan Čapkun, David Basin, Jan Beutel, Dennis Jackson, Marc Roeschlin, Patrick Leu, Bart Preneel, Nigel Smart, Aysajan Abidin, Seda Gürses, Michael Veale, Cas Cremers, Michael Backes, Nils Ole Tippenhauer, Reuben Binns, Ciro Cattuto, Alain Barrat, Dario Fiore, Manuel Barbosa, Rui Oliveira, José Pereira
This document describes and analyzes a system for secure and privacy-preserving proximity tracing at large scale.
Cryptography and Security • Computers and Society
no code implementations • 14 Dec 2019 • Reuben Binns
It draws on theoretical discussions from within fair machine learning research, and from political and legal philosophy, to argue that individual and group fairness are not fundamentally in conflict.
no code implementations • 12 Jul 2018 • Michael Veale, Reuben Binns, Lilian Edwards
Many individuals are concerned about the governance of machine learning systems and the prevention of algorithmic harms.
no code implementations • 16 Mar 2018 • Michael Veale, Reuben Binns, Max Van Kleek
In this short paper, we consider the roles of HCI in enabling the better governance of consequential machine learning systems using the rights and obligations laid out in the recent 2016 EU General Data Protection Regulation (GDPR), a law which involves heavy interaction with people and systems.
no code implementations • 3 Feb 2018 • Michael Veale, Max Van Kleek, Reuben Binns
Calls for heightened consideration of fairness and accountability in algorithmically informed public decisions (such as taxation, justice, and child protection) are now commonplace.
no code implementations • 10 Dec 2017 • Reuben Binns
What does it mean for a machine learning model to be 'fair', in terms which can be operationalised?
Computers and Society
1 code implementation • 5 Jul 2017 • Reuben Binns, Michael Veale, Max Van Kleek, Nigel Shadbolt
This paper provides some exploratory methods by which the normative biases of algorithmic content moderation systems can be measured, by way of a case study using an existing dataset of comments labelled for offence.