Search Results for author: Michael Veale

Found 13 papers, 5 papers with code

Moderating Model Marketplaces: Platform Governance Puzzles for AI Intermediaries

no code implementations21 Nov 2023 Robert Gorwa, Michael Veale

The AI development community is increasingly making use of hosting intermediaries such as Hugging Face, which provide easy access to user-uploaded models and training data.

Understanding accountability in algorithmic supply chains

no code implementations28 Apr 2023 Jennifer Cobbe, Michael Veale, Jatinder Singh

Academic and policy proposals on algorithmic accountability often seek to understand algorithmic systems in their socio-technical context, recognising that they are produced by 'many hands'.

Demystifying the Draft EU Artificial Intelligence Act

no code implementations8 Jul 2021 Michael Veale, Frederik Zuiderveen Borgesius

In April 2021, the European Commission proposed a Regulation on Artificial Intelligence, known as the AI Act.

Dark Patterns after the GDPR: Scraping Consent Pop-ups and Demonstrating their Influence

1 code implementation8 Jan 2020 Midas Nouwens, Ilaria Liccardi, Michael Veale, David Karger, Lalana Kagal

New consent management platforms (CMPs) have been introduced to the web to conform with the EU's General Data Protection Regulation, particularly its requirements for consent when companies collect and process users' personal data.

Human-Computer Interaction, Computers and Society

Algorithms that Remember: Model Inversion Attacks and Data Protection Law

no code implementations12 Jul 2018 Michael Veale, Reuben Binns, Lilian Edwards

Many individuals are concerned about the governance of machine learning systems and the prevention of algorithmic harms.

Blind Justice: Fairness with Encrypted Sensitive Attributes

1 code implementation ICML 2018 Niki Kilbertus, Adrià Gascón, Matt J. Kusner, Michael Veale, Krishna P. Gummadi, Adrian Weller

Recent work has explored how to train machine learning models which do not discriminate against any subgroup of the population as determined by sensitive attributes such as gender or race.

Fairness

Enslaving the Algorithm: From a "Right to an Explanation" to a "Right to Better Decisions"?

no code implementations20 Mar 2018 Lilian Edwards, Michael Veale

As concerns about unfairness and discrimination in "black box" machine learning systems rise, a legal "right to an explanation" has emerged as a compellingly attractive approach for challenge and redress.

Some HCI Priorities for GDPR-Compliant Machine Learning

no code implementations16 Mar 2018 Michael Veale, Reuben Binns, Max Van Kleek

In this short paper, we consider the roles of HCI in enabling better governance of consequential machine learning systems through the rights and obligations laid out in the 2016 EU General Data Protection Regulation (GDPR), a law which involves heavy interaction with people and systems.

BIG-bench Machine Learning, Fairness

Fairness and Accountability Design Needs for Algorithmic Support in High-Stakes Public Sector Decision-Making

no code implementations3 Feb 2018 Michael Veale, Max Van Kleek, Reuben Binns

Calls for heightened consideration of fairness and accountability in algorithmically-informed public decisions---like taxation, justice, and child protection---are now commonplace.

BIG-bench Machine Learning, Decision Making +1

Like trainer, like bot? Inheritance of bias in algorithmic content moderation

1 code implementation5 Jul 2017 Reuben Binns, Michael Veale, Max Van Kleek, Nigel Shadbolt

This paper provides some exploratory methods by which the normative biases of algorithmic content moderation systems can be measured, by way of a case study using an existing dataset of comments labelled for offence.
