no code implementations • 19 Feb 2024 • David Leslie, Cami Rincon, Morgan Briggs, Antonella Perini, Smera Jayadeva, Ann Borda, SJ Bennett, Christopher Burr, Mhairi Aitken, Michael Katell, Claudia Fischer
AI systems may have transformative and long-term effects on individuals and society.
no code implementations • 19 Feb 2024 • David Leslie, Cami Rincon, Morgan Briggs, Antonella Perini, Smera Jayadeva, Ann Borda, SJ Bennett, Christopher Burr, Mhairi Aitken, Michael Katell, Claudia Fischer, Janis Wong, Ismael Kherroubi Garcia
The sustainability of AI systems depends on the capacity of project teams to proceed with a continuous sensitivity to their potential real-world impacts and transformative effects.
no code implementations • 19 Feb 2024 • David Leslie, Cami Rincon, Morgan Briggs, Antonella Perini, Smera Jayadeva, Ann Borda, SJ Bennett, Christopher Burr, Mhairi Aitken, Michael Katell, Claudia Fischer, Janis Wong, Ismael Kherroubi Garcia
Sustainable AI projects remain continuously responsive to the transformative effects, as well as the short-, medium-, and long-term impacts, that the design, development, and deployment of AI technologies may have on individuals and society.
no code implementations • 19 Feb 2024 • David Leslie, Cami Rincon, Morgan Briggs, Antonella Perini, Smera Jayadeva, Ann Borda, SJ Bennett, Christopher Burr, Mhairi Aitken, Michael Katell, Claudia Fischer, Janis Wong, Ismael Kherroubi Garcia
In this workbook, we tackle this challenge by exploring how a context-based and society-centred approach to understanding AI Fairness can help project teams better identify, mitigate, and manage the many ways that unfair bias and discrimination can crop up across the AI project workflow.
no code implementations • 4 Apr 2023 • Robin Mitra, Sarah F. McGough, Tapabrata Chakraborti, Chris Holmes, Ryan Copping, Niels Hagenbuch, Stefanie Biedermann, Jack Noonan, Brieuc Lehmann, Aditi Shenvi, Xuan Vinh Doan, David Leslie, Ginestra Bianconi, Ruben Sanchez-Garcia, Alisha Davies, Maxine Mackintosh, Eleni-Rosalina Andrinopoulou, Anahid Basiri, Chris Harbron, Ben D. MacArthur
Missing data are an unavoidable complication in many machine learning tasks.
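As a purely illustrative aside (not the paper's method), one common baseline for coping with missing values is simple column-wise imputation, for example with scikit-learn:

```python
# Hedged illustration only: mean imputation of missing values with scikit-learn.
# The toy matrix and the choice of strategy are assumptions for demonstration,
# not the approach taken in the paper.
import numpy as np
from sklearn.impute import SimpleImputer

X = np.array([[1.0, 2.0],
              [np.nan, 3.0],
              [7.0, np.nan]])
X_imputed = SimpleImputer(strategy="mean").fit_transform(X)  # fill NaNs with column means
```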
no code implementations • 12 Jun 2022 • David Leslie
This article is concerned with setting up practical guardrails within the research activities and environments of CSS.
no code implementations • 12 Apr 2022 • David Leslie, Michael Katell, Mhairi Aitken, Jatinder Singh, Morgan Briggs, Rosamund Powell, Cami Rincón, Antonella Perini, Smera Jayadeva, Christopher Burr
The Advancing Data Justice Research and Practice project aims to broaden understanding of the social, historical, cultural, political, and economic forces that contribute to discrimination and inequity in contemporary ecologies of data collection, governance, and use.
no code implementations • 6 Apr 2022 • David Leslie, Morgan Briggs, Antonella Perini, Smera Jayadeva, Cami Rincón, Noopur Raval, Abeba Birhane, Rosamund Powell, Michael Katell, Mhairi Aitken
The idea of "data justice" is of recent academic vintage.
no code implementations • 6 Apr 2022 • David Leslie, Michael Katell, Mhairi Aitken, Jatinder Singh, Morgan Briggs, Rosamund Powell, Cami Rincón, Thompson Chengeta, Abeba Birhane, Antonella Perini, Smera Jayadeva, Anjali Mazumder
The Advancing Data Justice Research and Practice (ADJRP) project aims to widen the lens of current thinking around data justice and to provide actionable resources that will help policymakers, practitioners, and impacted communities gain a broader understanding of what equitable, freedom-promoting, and rights-sustaining data collection, governance, and use should look like in increasingly dynamic and global data innovation ecosystems.
no code implementations • 6 Feb 2022 • David Leslie, Christopher Burr, Mhairi Aitken, Michael Katell, Morgan Briggs, Cami Rincon
The HUDERAF combines the procedural requirements for principles-based human rights due diligence with the governance mechanisms needed to set up technical and socio-technical guardrails for responsible and trustworthy AI innovation practices.
no code implementations • 3 Jun 2021 • Thomas Pinder, Kathryn Turnbull, Christopher Nemeth, David Leslie
We derive a Matérn Gaussian process (GP) on the vertices of a hypergraph.
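The entry ships no code here, but a minimal sketch of one way such a kernel can be constructed is shown below, assuming a Zhou-style normalised hypergraph Laplacian and the spectral Matérn filter K ∝ (2ν/κ² I + L)^(−ν); the function names, toy incidence matrix, and normalisation are illustrative assumptions, not the paper's exact construction.

```python
# Hedged sketch: a Matern-type GP over the vertices of a hypergraph.
# Assumptions (not taken from the paper): normalised hypergraph Laplacian and
# the spectral filter K = (2*nu/kappa**2 * I + L)^(-nu).
import numpy as np

def hypergraph_laplacian(H, w=None):
    """Normalised hypergraph Laplacian from an incidence matrix H (vertices x hyperedges)."""
    n, m = H.shape
    w = np.ones(m) if w is None else w            # hyperedge weights
    d_v = H @ w                                   # vertex degrees
    d_e = H.sum(axis=0)                           # hyperedge sizes
    Dv_inv_sqrt = np.diag(1.0 / np.sqrt(d_v))
    De_inv = np.diag(1.0 / d_e)
    A = Dv_inv_sqrt @ H @ np.diag(w) @ De_inv @ H.T @ Dv_inv_sqrt
    return np.eye(n) - A

def hypergraph_matern_kernel(L, nu=1.5, kappa=1.0, sigma2=1.0):
    """Spectral Matern-style covariance over the vertices."""
    lam, U = np.linalg.eigh(L)
    spec = (2.0 * nu / kappa**2 + lam) ** (-nu)
    K = (U * spec) @ U.T
    return sigma2 * K / np.mean(np.diag(K))       # normalise the average variance

# Toy hypergraph: 5 vertices, 3 hyperedges.
H = np.array([[1, 0, 0],
              [1, 1, 0],
              [0, 1, 1],
              [0, 1, 1],
              [0, 0, 1]], dtype=float)
K = hypergraph_matern_kernel(hypergraph_laplacian(H))
sample = np.random.default_rng(0).multivariate_normal(np.zeros(5), K)  # one GP draw
```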
no code implementations • 30 Apr 2021 • David Leslie, Anjali Mazumder, Aidan Peppin, Maria Wolters, Alexa Hagerty
Among the most damaging characteristics of the COVID-19 pandemic has been its disproportionate effect on disadvantaged communities.
no code implementations • 2 Apr 2021 • David Leslie, Christopher Burr, Mhairi Aitken, Josh Cowls, Michael Katell, Morgan Briggs
In September 2019, the Council of Europe's Committee of Ministers adopted the terms of reference for the Ad Hoc Committee on Artificial Intelligence (CAHAI).
no code implementations • 20 Mar 2021 • David Leslie, Morgan Briggs
The goal of the workbook is to summarise some of the main themes from Explaining decisions made with AI and to provide materials for a workshop exercise built around a use case, helping you gain a flavour of how to put the guidance into practice.
no code implementations • 6 Feb 2021 • David Leslie
In this paper I explore the scaffolding of normative assumptions that supports Sabina Leonelli's implicit appeal to the values of epistemic integrity and the global public good that conjointly animate the ethos of responsible and sustainable data work in the context of COVID-19.
no code implementations • 5 Oct 2020 • David Leslie
Over the past couple of years, the growing debate around automated facial recognition has reached a boiling point.
1 code implementation • 25 Sep 2020 • Thomas Pinder, Christopher Nemeth, David Leslie
We show how to use Stein variational gradient descent (SVGD) to carry out inference in Gaussian process (GP) models with non-Gaussian likelihoods and large data volumes.
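A minimal sketch of the SVGD particle update the abstract refers to is given below, applied to a toy Gaussian target rather than an actual GP posterior with a non-Gaussian likelihood; the RBF kernel, bandwidth heuristic, and step size are illustrative choices, not the paper's code.

```python
# Hedged sketch of the generic SVGD update on a toy 2-D Gaussian target.
import numpy as np

def rbf_kernel(X, h=None):
    """RBF kernel matrix and its gradients with respect to the first argument."""
    diffs = X[:, None, :] - X[None, :, :]               # (n, n, d)
    sq = np.sum(diffs**2, axis=-1)
    if h is None:                                        # median heuristic bandwidth
        h = np.median(sq) / np.log(X.shape[0] + 1.0) + 1e-8
    K = np.exp(-sq / h)
    grad_K = -2.0 / h * diffs * K[:, :, None]            # d k(x_j, x_i) / d x_j
    return K, grad_K

def svgd(grad_log_p, X, n_iter=500, step=0.1):
    """Move particles X towards the target whose score function is grad_log_p."""
    for _ in range(n_iter):
        K, grad_K = rbf_kernel(X)
        # phi(x_i) = mean_j [ k(x_j, x_i) grad log p(x_j) + grad_{x_j} k(x_j, x_i) ]
        phi = (K @ grad_log_p(X) + grad_K.sum(axis=0)) / X.shape[0]
        X = X + step * phi
    return X

# Toy target: zero-mean Gaussian with identity covariance, so grad log p(x) = -x.
grad_log_p = lambda X: -X
particles = svgd(grad_log_p, np.random.default_rng(1).normal(size=(50, 2)))
```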
no code implementations • 15 Aug 2020 • David Leslie
Innovations in data science and AI/ML have a central role to play in supporting global efforts to combat COVID-19.
no code implementations • 11 Jun 2019 • David Leslie
A remarkable time of human promise has been ushered in by the convergence of the ever-expanding availability of big data, the soaring speed and stretch of cloud computing platforms, and the advancement of increasingly sophisticated machine learning algorithms.
no code implementations • NeurIPS 2018 • Mario Bravo, David Leslie, Panayotis Mertikopoulos
This paper examines the long-run behavior of learning with bandit feedback in non-cooperative concave games.
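To make the setting concrete, here is a hedged sketch of single-point (bandit) payoff-based gradient estimation combined with projected gradient ascent in a toy two-player concave game; the payoff functions, step sizes, and constraint set are illustrative assumptions, not the algorithm or games analysed in the paper.

```python
# Hedged sketch: bandit (one-point) gradient estimates in a toy concave game.
import numpy as np

rng = np.random.default_rng(2)

def payoff(i, x):
    """Concave toy payoffs: each player wants its action near half the other's."""
    other = x[1 - i]
    return -(x[i] - 0.5 * other) ** 2

def project(a, lo=-1.0, hi=1.0):
    return np.clip(a, lo, hi)

x = np.array([0.9, -0.9])        # current action profile
delta, step = 0.1, 0.05          # query radius and learning rate

for t in range(2000):
    grads = np.zeros(2)
    for i in range(2):
        u = rng.choice([-1.0, 1.0])            # random perturbation direction (1-D actions)
        x_query = x.copy()
        x_query[i] = project(x[i] + delta * u)
        # Single payoff observation -> one-point gradient estimate.
        grads[i] = payoff(i, x_query) * u / delta
    x = project(x + step * grads)              # simultaneous projected ascent
# x should drift towards the Nash equilibrium (0, 0) of this toy game.
```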
1 code implementation • COLING 2018 • Henry Moss, David Leslie, Paul Rayson
K-fold cross validation (CV) is a popular method for estimating the true performance of machine learning models, allowing model selection and parameter tuning.
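For readers unfamiliar with the procedure, a minimal K-fold cross-validation example using scikit-learn follows; the dataset and classifier are placeholders, not the NLP models tuned in the paper.

```python
# Minimal K-fold cross validation illustration with scikit-learn.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, cross_val_score

X, y = make_classification(n_samples=200, n_features=10, random_state=0)
model = LogisticRegression(max_iter=1000)

# Shuffle once, then evaluate the model on K=5 disjoint held-out folds.
cv = KFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(model, X, y, cv=cv)
print(f"mean accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```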