Using Structured Input and Modularity for Improved Learning

29 Mar 2019 · Zehra Sura, Tong Chen, Hyojin Sung

We describe a method for utilizing the known structure of input data to make learning more efficient. Our work is in the domain of programming languages, and we use deep neural networks for program analysis. Computer programs contain a lot of structural information (such as loop nests, conditional blocks, and data scopes) that is pertinent to program analysis. When a generic network is trained on raw programs, it has to learn to recognize this structure in addition to learning the target function for the problem. However, the structural information in this domain is readily accessible to software, since compiler tools and parsers exist for well-defined programming languages. Our method for utilizing the known structure of input data includes: (1) pre-processing the input data to expose relevant structures, and (2) constructing neural networks that incorporate the structure of the input data as an integral part of the network design. The method modularizes the neural network, which helps break down complexity and results in more efficient training of the overall network. We apply this method to an example code analysis problem and show that it can achieve higher accuracy with a smaller network size and fewer training examples. Further, the method is robust, performing equally well on input data with different distributions.
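As an illustration only, and not the authors' actual architecture, the sketch below shows one way the two-step idea could look in PyTorch: a pre-processing step (e.g., a compiler front end) is assumed to have already split each program into per-construct feature vectors, and a dedicated sub-network per construct feeds a shared prediction head. The names StructureModule, ModularAnalyzer, and the loop/branch/scope feature split are hypothetical choices made for this sketch.

```python
# Hypothetical sketch of a structure-modularized network; not the paper's exact model.
# Assumes pre-processing has produced fixed-size feature vectors for loop nests,
# conditional blocks, and data scopes for each input program.
import torch
import torch.nn as nn


class StructureModule(nn.Module):
    """Small sub-network dedicated to one kind of program structure."""

    def __init__(self, in_dim, hidden_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
            nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)


class ModularAnalyzer(nn.Module):
    """Combines per-structure sub-networks into a single analysis predictor."""

    def __init__(self, loop_dim, branch_dim, scope_dim, hidden_dim=32, num_classes=2):
        super().__init__()
        self.loop_mod = StructureModule(loop_dim, hidden_dim)
        self.branch_mod = StructureModule(branch_dim, hidden_dim)
        self.scope_mod = StructureModule(scope_dim, hidden_dim)
        self.head = nn.Sequential(
            nn.Linear(3 * hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, num_classes),
        )

    def forward(self, loop_feats, branch_feats, scope_feats):
        # Each sub-network handles only its own structure, keeping modules small.
        parts = [
            self.loop_mod(loop_feats),
            self.branch_mod(branch_feats),
            self.scope_mod(scope_feats),
        ]
        return self.head(torch.cat(parts, dim=-1))


# Usage with dummy pre-processed features for a batch of 4 programs.
model = ModularAnalyzer(loop_dim=8, branch_dim=6, scope_dim=4)
logits = model(torch.randn(4, 8), torch.randn(4, 6), torch.randn(4, 4))
print(logits.shape)  # torch.Size([4, 2])
```

Because each sub-network only ever sees features for one kind of structure, its task is narrower, which is one plausible reading of why the paper reports good accuracy with a smaller network and fewer training examples.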
