no code implementations • 8 Jul 2021 • Michael Veale, Frederik Zuiderveen Borgesius
In April 2021, the European Commission proposed a Regulation on Artificial Intelligence, known as the AI Act.
3 code implementations • 25 May 2020 • Carmela Troncoso, Mathias Payer, Jean-Pierre Hubaux, Marcel Salathé, James Larus, Edouard Bugnion, Wouter Lueks, Theresa Stadler, Apostolos Pyrgelis, Daniele Antonioli, Ludovic Barman, Sylvain Chatel, Kenneth Paterson, Srdjan Čapkun, David Basin, Jan Beutel, Dennis Jackson, Marc Roeschlin, Patrick Leu, Bart Preneel, Nigel Smart, Aysajan Abidin, Seda Gürses, Michael Veale, Cas Cremers, Michael Backes, Nils Ole Tippenhauer, Reuben Binns, Ciro Cattuto, Alain Barrat, Dario Fiore, Manuel Barbosa, Rui Oliveira, José Pereira
This document describes and analyzes a system for secure and privacy-preserving proximity tracing at large scale.
Cryptography and Security • Computers and Society
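A minimal sketch of the decentralized design analysed in this document: phones derive short-lived broadcast identifiers from a secret daily seed, and only the seeds of an infected user are ever published, so exposure matching happens locally on each phone. The key sizes, hash-based derivation, and matching loop below are simplifying assumptions rather than the exact protocol specification.

```python
# Minimal sketch of a decentralized proximity-tracing scheme in the spirit of
# the design analysed above. Key sizes, derivation functions and the matching
# procedure are simplified assumptions, not the exact specification.
import hashlib
import os

def next_day_seed(seed: bytes) -> bytes:
    """Rotate the secret day seed with a one-way hash so past days stay hidden."""
    return hashlib.sha256(b"day-rotation" + seed).digest()

def ephemeral_ids(day_seed: bytes, epochs_per_day: int = 96) -> list[bytes]:
    """Derive short-lived broadcast identifiers for one day from its seed."""
    return [
        hashlib.sha256(b"ephid" + day_seed + epoch.to_bytes(2, "big")).digest()[:16]
        for epoch in range(epochs_per_day)
    ]

# Phone A generates seeds locally and broadcasts ephemeral IDs to nearby phones.
seed_day_0 = os.urandom(32)
seed_day_1 = next_day_seed(seed_day_0)

# Phone B records IDs it hears nearby (here we simulate one observation).
observed = {ephemeral_ids(seed_day_1)[42]}

# If A tests positive, A uploads only the seeds of the contagious window.
published_seeds = [seed_day_1]

# B re-derives IDs from the published seeds and checks for local matches;
# no central server ever learns who met whom.
at_risk = any(
    eph in observed
    for seed in published_seeds
    for eph in ephemeral_ids(seed)
)
print("exposure detected:", at_risk)
```

In the system the document describes, identifiers are broadcast over Bluetooth Low Energy and uploads are authorised after a positive diagnosis; the sketch above omits both of those components.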
1 code implementation • 8 Jan 2020 • Midas Nouwens, Ilaria Liccardi, Michael Veale, David Karger, Lalana Kagal
New consent management platforms (CMPs) have been introduced to the web to conform with the EU's General Data Protection Regulation, particularly its requirements for consent when companies collect and process users' personal data.
Human-Computer Interaction • Computers and Society
2 code implementations • 2 Jun 2019 • Bogdan Kulynych, Mohammad Yaghini, Giovanni Cherubin, Michael Veale, Carmela Troncoso
Differential privacy bounds disparate vulnerability to membership inference attacks, but can significantly reduce the accuracy of the model.
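A hypothetical sketch of how disparate vulnerability can be measured: run a simple loss-threshold membership inference attack and compare its accuracy across subgroups. The threshold attack and the synthetic per-example losses are illustrative assumptions, not the paper's experimental setup.

```python
# Hypothetical sketch of measuring "disparate vulnerability": how much more
# successful a simple membership inference attack is on one subgroup than
# another. The loss-threshold attack and the data arrays are illustrative
# assumptions, not the paper's exact experiments.
import numpy as np

def attack_accuracy(member_loss: np.ndarray, nonmember_loss: np.ndarray,
                    threshold: float) -> float:
    """Score a threshold attack: guess 'member' when the model's loss is low."""
    true_pos = np.mean(member_loss < threshold)       # members correctly flagged
    true_neg = np.mean(nonmember_loss >= threshold)   # non-members correctly passed
    return 0.5 * (true_pos + true_neg)

rng = np.random.default_rng(0)
# Pretend per-example losses: group A is memorised more strongly than group B.
losses = {
    "A": {"member": rng.normal(0.2, 0.1, 1000), "nonmember": rng.normal(1.0, 0.3, 1000)},
    "B": {"member": rng.normal(0.6, 0.2, 1000), "nonmember": rng.normal(0.9, 0.3, 1000)},
}

threshold = 0.7
per_group = {g: attack_accuracy(v["member"], v["nonmember"], threshold)
             for g, v in losses.items()}
disparate_vulnerability = max(per_group.values()) - min(per_group.values())
print(per_group, "gap:", round(disparate_vulnerability, 3))
```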
no code implementations • 12 Jul 2018 • Michael Veale, Reuben Binns, Lilian Edwards
Many individuals are concerned about the governance of machine learning systems and the prevention of algorithmic harms.
1 code implementation • ICML 2018 • Niki Kilbertus, Adrià Gascón, Matt J. Kusner, Michael Veale, Krishna P. Gummadi, Adrian Weller
Recent work has explored how to train machine learning models which do not discriminate against any subgroup of the population as determined by sensitive attributes such as gender or race.
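An illustrative sketch of the underlying training idea, with the cryptographic machinery for keeping the sensitive attribute encrypted omitted: a logistic regression whose loss is augmented with a penalty on the gap in mean predicted scores between groups defined by a sensitive attribute. The data, penalty form, and optimiser are assumptions made for the sake of a runnable example.

```python
# Illustrative sketch: training a classifier with a penalty that discourages
# its average prediction from differing across a sensitive group. This is
# plain (unencrypted) training; the secure-computation layer is not shown.
import numpy as np

rng = np.random.default_rng(1)
n, d = 2000, 5
X = rng.normal(size=(n, d))
group = rng.integers(0, 2, size=n)                   # sensitive attribute (0/1)
y = (X[:, 0] + 0.8 * group + rng.normal(0, 0.5, n) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.zeros(d)
lr, lam = 0.1, 2.0                                   # learning rate, fairness weight
for _ in range(500):
    p = sigmoid(X @ w)
    grad_loss = X.T @ (p - y) / n                    # logistic loss gradient
    # Demographic-parity style penalty: squared gap in mean score between groups.
    gap = p[group == 1].mean() - p[group == 0].mean()
    dgap_dw = (X[group == 1].T @ (p * (1 - p))[group == 1] / (group == 1).sum()
               - X[group == 0].T @ (p * (1 - p))[group == 0] / (group == 0).sum())
    w -= lr * (grad_loss + lam * 2 * gap * dgap_dw)

final_gap = sigmoid(X @ w)[group == 1].mean() - sigmoid(X @ w)[group == 0].mean()
print("mean score gap between groups:", round(float(final_gap), 3))
```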
no code implementations • 20 Mar 2018 • Lilian Edwards, Michael Veale
As concerns about unfairness and discrimination in "black box" machine learning systems rise, a legal "right to an explanation" has emerged as a compellingly attractive approach for challenge and redress.
no code implementations • 16 Mar 2018 • Michael Veale, Reuben Binns, Max Van Kleek
In this short paper, we consider the roles of HCI in enabling the better governance of consequential machine learning systems using the rights and obligations laid out in the recent 2016 EU General Data Protection Regulation (GDPR), a law which involves heavy interaction with people and systems.
no code implementations • 3 Feb 2018 • Michael Veale, Max Van Kleek, Reuben Binns
Calls for heightened consideration of fairness and accountability in algorithmically-informed public decisions (such as taxation, justice, and child protection) are now commonplace.
1 code implementation • 5 Jul 2017 • Reuben Binns, Michael Veale, Max Van Kleek, Nigel Shadbolt
This paper provides exploratory methods for measuring the normative biases of algorithmic content moderation systems, by way of a case study using an existing dataset of comments labelled for offence.
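One hypothetical way such a measurement could look: fit the same simple classifier to offence labels supplied by two different annotator groups and count how often the resulting models disagree on unseen comments. The toy features, labels, and classifier below stand in for the paper's actual dataset and models.

```python
# Hypothetical sketch: train the same simple classifier on offence labels from
# two annotator groups with slightly different norms, then measure how often
# the resulting models disagree on fresh comments. Toy data throughout.
import numpy as np

rng = np.random.default_rng(2)
n_train, n_test, d = 1000, 500, 20
X_train = rng.normal(size=(n_train, d))              # comment features (e.g. bag-of-words)
X_test = rng.normal(size=(n_test, d))

# Two annotator groups label the same comments under slightly different norms.
w_true_a = rng.normal(size=d)
w_true_b = w_true_a + rng.normal(0, 0.5, d)          # group B's notion of "offensive" differs
y_a = (X_train @ w_true_a > 0).astype(float)
y_b = (X_train @ w_true_b > 0).astype(float)

def fit_logistic(X, y, steps=300, lr=0.1):
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        w -= lr * X.T @ (p - y) / len(y)
    return w

w_a, w_b = fit_logistic(X_train, y_a), fit_logistic(X_train, y_b)
pred_a = (X_test @ w_a > 0)
pred_b = (X_test @ w_b > 0)
disagreement = float(np.mean(pred_a != pred_b))      # bias inherited from the trainers
print("models disagree on", round(100 * disagreement, 1), "% of unseen comments")
```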
no code implementations • 19 Jun 2017 • Michael Veale
Machine learning systems are increasingly used to support public sector decision-making across a variety of policy areas.