Increasingly sophisticated machine-learning models are being used to analyse complex data.
A computational system is called autonomous if it is able to make its own decisions, or take its own actions, without human supervision or control.
Robots are increasingly used to carry out critical missions in extreme environments that are hazardous for humans, and for such robots the battery is a key component.
The spread of autonomous systems into safety-critical areas has increased the demand for their formal verification, driven not only by stronger certification requirements but also by public unease about these new technologies.
We examine implemented systems for ethical machine reasoning with a view to identifying the practical (as opposed to philosophical) challenges posed by the area.
Both the ethics of autonomous systems and the problems of their technical implementation have by now been studied in some detail.
However, formal verification of the agent code alone does not scale to the full system, while global, system-level verification does not capture the autonomous decision-making that most needs checking; we therefore use a combination of the two approaches.
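To make this division of labour concrete, the toy sketch below (our own illustration, not the toolchain used in the work; all names such as `agent_decide` are hypothetical) exhaustively checks a tiny agent's decision rule over every possible perception, then falls back to monitored randomised simulation for a whole-system property that the agent-level check alone cannot establish.

```python
"""Toy sketch: combining agent-level exhaustive checking with
system-level runtime monitoring. Illustrative only."""

import itertools
import random


# --- Agent level -------------------------------------------------------------
# Hypothetical agent: chooses an action from its perceived state.
def agent_decide(battery_low: bool, task_pending: bool) -> str:
    if battery_low:
        return "return_to_base"
    return "do_task" if task_pending else "idle"


def model_check_agent() -> None:
    """Enumerate every perception the agent can receive and assert the
    safety property: a low battery always triggers return_to_base.
    Exhaustive, hence a (tiny) model check -- but agent-only."""
    for battery_low, task_pending in itertools.product([False, True], repeat=2):
        action = agent_decide(battery_low, task_pending)
        assert not battery_low or action == "return_to_base", (
            f"agent property violated for perception {(battery_low, task_pending)}"
        )


# --- System level ------------------------------------------------------------
def simulate_system(steps: int, seed: int) -> None:
    """Randomised simulation of agent + environment. The decision logic is
    trusted because it was exhaustively checked above; here we monitor an
    end-to-end property (battery never fully drains) that depends on the
    environment dynamics and so cannot be shown by the agent check alone."""
    rng = random.Random(seed)
    battery = 100.0
    for _ in range(steps):
        task_pending = rng.random() < 0.5
        action = agent_decide(battery < 20.0, task_pending)
        # Runtime monitor for the global safety property.
        assert battery > 0.0, "system-level property violated: battery empty"
        battery += 5.0 if action == "return_to_base" else -rng.uniform(0.5, 3.0)
        battery = min(battery, 100.0)


if __name__ == "__main__":
    model_check_agent()                        # exhaustive, agent-only
    for seed in range(100):
        simulate_system(steps=500, seed=seed)  # whole system, sampled runs
    print("agent property verified; no monitored violation in 100 runs")
```

The point of the sketch is the split itself: the exhaustive check gives a guarantee about the decision logic in isolation, while the sampled, monitored runs cover the full system at the cost of completeness; neither alone suffices.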