The Theory of Artificial Immutability: Protecting Algorithmic Groups Under Anti-Discrimination Law

2 May 2022 · Sandra Wachter

Artificial Intelligence (AI) is increasingly used to make important decisions about people. While issues of AI bias and proxy discrimination are well explored, less attention has been paid to the harms created by profiling based on groups that do not map to or correlate with legally protected groups such as sex or ethnicity. This raises a question: are existing equality laws able to protect against emergent AI-driven inequality? This article examines the legal status of algorithmic groups in North American and European non-discrimination doctrine, law, and jurisprudence, and shows that algorithmic groups are not comparable to traditional protected groups. Nonetheless, these new groups are worthy of protection. I propose a new theory of harm - "the theory of artificial immutability" - that aims to bring AI groups within the scope of the law. My theory describes how algorithmic groups act, in practice, as de facto immutable characteristics that limit people's autonomy and prevent them from achieving important goals.
