A Protection against the Extraction of Neural Network Models

26 May 2020 · Hervé Chabanne, Linda Guiga

Given oracle access to a Neural Network (NN), it is possible to extract its underlying model. Here we introduce a protection that adds parasitic layers, which leave the behavior of the underlying NN mostly unchanged while making the task of reverse engineering it considerably harder.
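The idea of a parasitic layer can be illustrated with a minimal sketch: insert an extra layer whose weights are close to the identity, so the network's outputs barely change while the extra parameters obscure the original architecture. The construction below (identity plus small noise) is a hypothetical simplification for illustration, not the paper's actual parasitic-layer design.

```python
import numpy as np

rng = np.random.default_rng(0)

def parasitic_layer(dim, eps=1e-3):
    # Near-identity weight matrix W = I + eps * noise.
    # (Hypothetical construction; the paper's parasitic layers differ.)
    return np.eye(dim) + eps * rng.standard_normal((dim, dim))

# Toy "original" network: a single linear layer applied to an input.
W1 = rng.standard_normal((4, 4))
x = rng.standard_normal(4)
y_orig = W1 @ x

# Protected network: a parasitic layer is inserted before W1.
P = parasitic_layer(4)
y_prot = W1 @ (P @ x)

# The outputs stay close, but an extractor now faces extra parameters.
print(np.max(np.abs(y_prot - y_orig)))
```

Because the perturbation is of order `eps`, the functional change is negligible for small `eps`, while each inserted layer multiplies the parameter count an attacker must recover.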
