A blindspot of AI ethics: anti-fragility in statistical prediction

21 Jun 2020  ·  Michele Loi, Lonneke van der Plas ·

With this paper, we aim to put an issue on the agenda of AI ethics that, in our view, is overlooked in the current discourse. Current discussions are dominated by topics such as trustworthiness and bias, whereas the issue we would like to focus on runs counter to the debate on trustworthiness. We fear that the overuse of currently dominant AI systems, which are driven by short-term objectives and optimized for avoiding error, leads to a society that loses the diversity and flexibility needed for true progress. We frame our concerns in the discourse around the term anti-fragility and show with examples what threats current methods of decision making pose for society.
