Logically Synthesized, Hardware-Accelerated, Restricted Boltzmann Machines for Combinatorial Optimization and Integer Factorization

16 Jun 2020 · Saavan Patel, Philip Canoza, Sayeef Salahuddin

The Restricted Boltzmann Machine (RBM) is a stochastic neural network capable of solving a variety of difficult tasks, such as NP-hard combinatorial optimization problems and integer factorization. The RBM architecture is also very compact, requiring very few weights and biases. This, together with its simple, parallelizable sampling algorithm for finding the ground state of such problems, makes the RBM amenable to hardware acceleration. However, training the RBM on these problems can pose a significant challenge: the training algorithm tends to fail at large problem sizes, and efficient mappings can be hard to find. Here, we propose a method of combining RBMs together that avoids the need to train large problems in their full form. We also propose methods for making the RBM more hardware amenable, allowing the algorithm to be efficiently mapped to an FPGA-based accelerator. Using this accelerator, we demonstrate hardware-accelerated factorization of 16-bit numbers with high accuracy, with a speed improvement of 10,000x and a power improvement of 32x.
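The "simple, parallelizable sampling algorithm" the abstract refers to is block Gibbs sampling: because all hidden units of an RBM are conditionally independent given the visible units (and vice versa), each half-step of the chain can be sampled fully in parallel. The sketch below illustrates this on a generic binary RBM; the function names, random initialization, and step count are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gibbs_sample(W, b, c, n_steps=1000):
    """Block Gibbs sampling on a binary RBM with weight matrix W,
    visible bias b, and hidden bias c. Returns the lowest-energy
    visible configuration observed along the chain."""
    n_vis, n_hid = W.shape
    v = rng.integers(0, 2, size=n_vis).astype(float)
    best_v, best_e = v.copy(), np.inf
    for _ in range(n_steps):
        # Each half-step samples one whole layer at once; the units within
        # a layer are conditionally independent, which is what makes this
        # loop easy to parallelize in hardware.
        h = (rng.random(n_hid) < sigmoid(v @ W + c)).astype(float)
        v = (rng.random(n_vis) < sigmoid(W @ h + b)).astype(float)
        e = -v @ W @ h - b @ v - c @ h  # RBM energy E(v, h)
        if e < best_e:
            best_e, best_v = e, v.copy()
    return best_v, best_e
```

To use such a sampler for optimization or factorization, the problem's cost function would first be encoded into W, b, and c (for example, via the RBM-merging construction the paper proposes), after which the answer is read off the lowest-energy visible state.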
