Search Results for author: Nora Ammann

Found 1 paper, 0 papers with code

Towards Guaranteed Safe AI: A Framework for Ensuring Robust and Reliable AI Systems

no code implementations • 10 May 2024 • David "davidad" Dalrymple, Joar Skalse, Yoshua Bengio, Stuart Russell, Max Tegmark, Sanjit Seshia, Steve Omohundro, Christian Szegedy, Ben Goldhaber, Nora Ammann, Alessandro Abate, Joe Halpern, Clark Barrett, Ding Zhao, Tan Zhi-Xuan, Jeannette Wing, Joshua Tenenbaum

Ensuring that AI systems reliably and robustly avoid harmful or dangerous behaviours is a crucial challenge, especially for AI systems with a high degree of autonomy and general intelligence, or systems used in safety-critical contexts.