Digital Normativity: A challenge for human subjectivization and free will

23 May 2019 · Éric Fourneret, Blaise Yvert

Over the past decade, artificial intelligence has demonstrated its efficiency in many different applications, and a huge number of algorithms have become central and ubiquitous in our lives. Growing interest in these algorithms rests largely on their capacity to synthesize and process large amounts of data and to help humans make decisions in a world of increasing complexity. Yet the effectiveness of algorithms in producing ever more relevant recommendations may begin to compete with decisions made by humans alone, based on values other than pure efficacy. Here, we examine this tension in light of the emergence of several forms of digital normativity, and we analyze how this normative role of AI may influence the ability of humans to remain the subjects of their own lives. The advent of AI technology requires striking a balance between concrete material progress and progress of the mind in order to avoid any form of servitude. It has become essential that ethical reflection accompany the current development of intelligent algorithms, beyond the sole question of their social acceptability. Such reflection should be anchored where AI technologies are developed, as well as in educational programs where their implications can be explained.
