Conservative AI and social inequality: Conceptualizing alternatives to bias through social theory

16 Jul 2020 · Mike Zajko

In response to calls for greater interdisciplinary involvement from the social sciences and humanities in the development, governance, and study of artificial intelligence systems, this paper presents one sociologist's view on the problem of algorithmic bias and the reproduction of societal bias. Discussions of bias in AI cover much of the same conceptual terrain that sociologists studying inequality have long understood using more specific terms and theories. Concerns over reproducing societal bias should be informed by an understanding of the ways that inequality is continually reproduced in society -- processes that AI systems are either complicit in, or can be designed to disrupt and counter. The contrast presented here is between conservative and radical approaches to AI, with conservatism referring to dominant tendencies that reproduce and strengthen the status quo, while radical approaches work to disrupt systemic forms of inequality. The limitations of conservative approaches to class, gender, and racial bias are discussed as specific examples, along with the social structures and processes that biases in these areas are linked to. Societal issues can no longer be out of scope for AI and machine learning, given the impact of these systems on human lives. This requires engagement with a growing body of critical AI scholarship that goes beyond biased data to analyze structured ways of perpetuating inequality, opening up the possibility for radical alternatives.
