Formal Methods for the Informal Engineer (FMIE) was a workshop held at the Broad Institute of MIT and Harvard in 2021 to explore the potential role of verified software in the biomedical software ecosystem.
Standard Markov Decision Process (MDP) formulations of RL, and simulated environments mirroring the MDP structure, assume secure access to feedback (e.g., rewards).
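To make the assumption concrete, here is a minimal sketch of the standard MDP interaction loop, in which the reward signal is taken directly from the environment and is assumed to be untamperable by the agent. The toy environment and policy here are illustrative assumptions, not drawn from any particular RL library.

```python
import random

class TwoStateEnv:
    """Toy MDP with states {0, 1}; action 1 taken in state 1 yields reward 1."""
    def __init__(self):
        self.state = 0

    def step(self, action):
        # Reward is computed by the environment, not the agent -- the
        # "secure feedback" assumption in standard MDP formulations.
        reward = 1.0 if (self.state == 1 and action == 1) else 0.0
        self.state = random.choice([0, 1])  # random state transition
        return self.state, reward

def run_episode(policy, steps=100, seed=0):
    """Run one episode and return the total (undiscounted) reward."""
    random.seed(seed)
    env = TwoStateEnv()
    state, total = env.state, 0.0
    for _ in range(steps):
        state, r = env.step(policy(state))
        total += r
    return total

# A policy that always picks action 1 collects reward whenever state == 1;
# a policy that always picks action 0 collects nothing.
print(run_episode(lambda s: 1), run_episode(lambda s: 0))
```

The question above asks what happens when this separation breaks down, i.e., when the reward computation itself is influenceable by the agent.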
How can we design agents that pursue a given objective when all feedback mechanisms are influenceable by the agent?
Formal verification of machine learning models has recently attracted attention, and significant progress has been made on proving simple properties, such as robustness to small perturbations of the input features.
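One standard technique for proving such robustness properties is interval bound propagation. The sketch below, a hand-rolled illustration rather than any particular verification tool, pushes an L-infinity ball through a tiny ReLU network and soundly (though incompletely) certifies that the scalar output cannot change sign; the weights and the `verify_sign_robust` helper are made up for this example.

```python
import numpy as np

def interval_affine(W, b, lo, hi):
    """Propagate an input box [lo, hi] through the affine map x -> W @ x + b."""
    center = (lo + hi) / 2
    radius = (hi - lo) / 2
    c = W @ center + b
    r = np.abs(W) @ radius  # worst-case spread of the box under W
    return c - r, c + r

def verify_sign_robust(W1, b1, w2, b2, x, eps):
    """Sound but incomplete check that w2 @ relu(W1 @ x + b1) + b2
    keeps its sign for every input within eps of x (L-infinity)."""
    lo, hi = interval_affine(W1, b1, x - eps, x + eps)
    lo, hi = np.maximum(lo, 0), np.maximum(hi, 0)  # ReLU is monotone
    out_lo, out_hi = interval_affine(w2.reshape(1, -1), b2, lo, hi)
    return bool(out_lo[0] > 0) or bool(out_hi[0] < 0)

# Hypothetical two-layer network and input point.
W1 = np.array([[1.0, -1.0], [0.5, 0.5]])
b1 = np.zeros(2)
w2 = np.array([1.0, -2.0])
b2 = np.array([0.5])
x = np.array([1.0, 0.0])

print(verify_sign_robust(W1, b1, w2, b2, x, eps=0.1))  # True: verified robust
print(verify_sign_robust(W1, b1, w2, b2, x, eps=0.5))  # False: bounds too loose to certify
```

Incompleteness here means a `False` result only says the interval bounds were too loose, not that an adversarial perturbation necessarily exists.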
Can humans get arbitrarily capable reinforcement learning (RL) agents to do their bidding?
Proposals for safe AGI systems are typically made at the level of frameworks, specifying how the components of the proposed system should be trained and interact with each other.
How can we design safe reinforcement learning agents that avoid unnecessary disruptions to their environment?