Neural Network Verification for the Masses (of AI graduates)

The rapid development of AI applications has stimulated demand for, and given rise to, a rapidly growing number and diversity of AI MSc degrees. The AI and Robotics research communities, industry, and students are becoming increasingly aware of the problems caused by unsafe or insecure AI applications. Perhaps the most famous example is the vulnerability of deep neural networks to "adversarial attacks". Owing to the widespread use of neural networks across all areas of AI, this problem is seen as particularly acute and pervasive. Despite the growing number of research papers on safety and security vulnerabilities of AI applications, there is a noticeable shortage of accessible tools, methods and teaching materials for incorporating verification into AI programs. LAIV -- the Lab for AI and Verification -- is a newly opened research lab at Heriot-Watt University that engages AI and Robotics MSc students in verification projects as part of their MSc dissertation work. In this paper, we report on the successes and unexpected difficulties LAIV has faced, many of which arise from limitations of the existing programming languages used for verification. We discuss future directions for incorporating verification into AI degrees.
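The abstract's motivating example is the vulnerability of neural networks to adversarial attacks. The paper itself provides no code, but a minimal sketch of a fast-gradient-sign-style perturbation conveys the idea: nudge an input in the direction that increases the model's loss so the prediction degrades while the input barely changes. The toy logistic-regression model, weights, input, and epsilon below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Tiny logistic-regression "network": one weight vector, sigmoid output.
rng = np.random.default_rng(0)
w = rng.normal(size=8)   # illustrative weights (stand-in for a trained model)
x = rng.normal(size=8)   # a clean input
y = 1.0                  # its true label

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(inp):
    # Binary cross-entropy of the model's prediction against the true label.
    p = sigmoid(w @ inp)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

# Gradient of the loss w.r.t. the *input* (not the weights);
# for this model it is (p - y) * w.
grad_x = (sigmoid(w @ x) - y) * w

# FGSM-style step: move each input feature by epsilon in the direction
# that increases the loss, producing an adversarial example x_adv.
epsilon = 0.25  # illustrative perturbation budget
x_adv = x + epsilon * np.sign(grad_x)

print("clean prediction:      ", sigmoid(w @ x))
print("adversarial prediction:", sigmoid(w @ x_adv))
print("loss increase:         ", loss(x_adv) - loss(x))
```

Verification tools of the kind the paper teaches aim to prove the absence of such perturbations within a given epsilon ball, rather than merely searching for them empirically.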
