The terms Artificial General Intelligence (AGI) and Human-Level Artificial Intelligence (HLAI) have been used interchangeably to refer to the Holy Grail of Artificial Intelligence (AI) research: the creation of a machine capable of achieving goals in a wide range of environments.
AI Safety researchers attempting to align the values of highly capable intelligent systems with those of humanity face a number of challenges, including personal value extraction, multi-agent value merger, and, finally, in-silico encoding.
In this paper, we review the state-of-the-art results in evolutionary computation and observe that we have not been able to evolve non-trivial software from scratch and without human intervention.
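To make the scale of this gap concrete, the following is a minimal, illustrative genetic-programming loop (not taken from the reviewed literature; all function names, the target expression, and the parameter values are assumptions chosen for illustration). Even recovering a trivial arithmetic expression such as x² + x from random expression trees requires a full evolve-evaluate-select cycle, which hints at why non-trivial software has not been evolved from scratch.

```python
import random

random.seed(0)  # illustrative fixed seed for repeatability

# Function set and terminal set for the expression trees
OPS = {'+': lambda a, b: a + b, '*': lambda a, b: a * b}
TERMINALS = ['x', 1.0]

def random_tree(depth=3):
    """Grow a random expression tree of bounded depth."""
    if depth <= 0 or random.random() < 0.3:
        return random.choice(TERMINALS)
    op = random.choice(list(OPS))
    return (op, random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, x):
    """Recursively evaluate an expression tree at point x."""
    if tree == 'x':
        return x
    if isinstance(tree, float):
        return tree
    op, left, right = tree
    return OPS[op](evaluate(left, x), evaluate(right, x))

def fitness(tree):
    """Sum of squared errors against the toy target x^2 + x (lower is better)."""
    return sum((evaluate(tree, x) - (x * x + x)) ** 2 for x in range(-5, 6))

def mutate(tree, depth=3):
    """Replace a random subtree with a fresh random tree."""
    if random.random() < 0.2 or not isinstance(tree, tuple):
        return random_tree(depth)
    op, left, right = tree
    if random.random() < 0.5:
        return (op, mutate(left, depth - 1), right)
    return (op, left, mutate(right, depth - 1))

def evolve(generations=200, pop_size=50):
    """Truncation selection with mutation-only variation; keeps the elite half."""
    pop = [random_tree() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        survivors = pop[: pop_size // 2]
        pop = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    return min(pop, key=fitness)

best = evolve()
print(fitness(best))
```

Because the elite half is carried over unchanged each generation, the best fitness is monotonically non-increasing; yet even this toy search operates over a space of a handful of symbols, whereas real software draws on vast instruction sets, state, and I/O, which is the gap the review above points to.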
It is possible to rely on current corporate law to grant legal personhood to Artificially Intelligent (AI) agents.
The complexity of dynamics in AI techniques is already approaching that of complex adaptive systems, thus curtailing the feasibility of formal controllability and reachability analysis in the context of AI safety.
With almost daily improvements in the capabilities of artificial intelligence, it is more important than ever to develop safety software for use by the AI research community.
Diversity is one of the fundamental properties for the survival of species, populations, and organizations.
Cybersecurity research involves publishing papers about malicious exploits as much as publishing information on how to design tools to protect cyber-infrastructure.
Software capable of improving itself has been a dream of computer scientists since the inception of the field.