Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims

15 Apr 2020 · Miles Brundage, Shahar Avin, Jasmine Wang, Haydn Belfield, Gretchen Krueger, Gillian Hadfield, Heidy Khlaaf, Jingying Yang, Helen Toner, Ruth Fong, Tegan Maharaj, Pang Wei Koh, Sara Hooker, Jade Leung, Andrew Trask, Emma Bluemke, Jonathan Lebensbold, Cullen O'Keefe, Mark Koren, Théo Ryffel, JB Rubinovitz, Tamay Besiroglu, Federica Carugati, Jack Clark, Peter Eckersley, Sarah de Haas, Maritza Johnson, Ben Laurie, Alex Ingerman, Igor Krawczuk, Amanda Askell, Rosario Cammarota, Andrew Lohn, David Krueger, Charlotte Stix, Peter Henderson, Logan Graham, Carina Prunkl, Bianca Martin, Elizabeth Seger, Noa Zilberman, Seán Ó hÉigeartaigh, Frens Kroeger, Girish Sastry, Rebecca Kagan, Adrian Weller, Brian Tse, Elizabeth Barnes, Allan Dafoe, Paul Scharre, Ariel Herbert-Voss, Martijn Rasser, Shagun Sodhani, Carrick Flynn, Thomas Krendl Gilbert, Lisa Dyer, Saif Khan, Yoshua Bengio, Markus Anderljung

With the recent wave of progress in artificial intelligence (AI) has come a growing awareness of the large-scale impacts of AI systems, and growing recognition that existing regulations and norms in industry and academia are insufficient to ensure responsible AI development. In order for AI developers to earn trust from system users, customers, civil society, governments, and other stakeholders that they are building AI responsibly, they will need to make verifiable claims to which they can be held accountable. Those outside of a given organization also need effective means of scrutinizing such claims. This report suggests various steps that different stakeholders can take to improve the verifiability of claims made about AI systems and their associated development processes, with a focus on providing evidence about the safety, security, fairness, and privacy protection of AI systems. We analyze ten mechanisms for this purpose, spanning institutions, software, and hardware, and make recommendations aimed at implementing, expanding, or improving each of them.
