Parameter Sharing Decoder Pair for Auto Composing

31 Oct 2019  ·  Xu Zhao

Auto composing has been an active and appealing research area in recent years, and considerable effort has gone into building more robust models for this problem. With the rapid evolution of deep learning techniques, deep neural network-based language models have become dominant. Notably, the transformer architecture has proven highly effective and promising for modeling text. However, transformer-based language models usually contain a huge number of parameters, and the resulting models are often too large to deploy in production for storage-limited applications. In this paper, we propose a parameter sharing decoder pair (PSDP), which dramatically reduces the number of parameters while maintaining the capability of generating understandable and reasonable compositions. Works created by the proposed model are presented to demonstrate its effectiveness.
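The abstract does not spell out how the decoder pair shares its parameters, but cross-layer weight sharing is a common way to realize this kind of size reduction in transformer decoders. Below is a minimal PyTorch sketch of that idea, assuming a decoder-only language model in which a single decoder layer's weights are reused at every depth position; the class name `SharedDecoderLM` and the hyperparameter defaults are illustrative assumptions, not the paper's actual PSDP architecture.

```python
import torch
import torch.nn as nn

class SharedDecoderLM(nn.Module):
    """Illustrative sketch (not the paper's exact PSDP): one decoder
    layer's weights are reused at every depth position, so the stack
    costs roughly one layer's parameters instead of `depth` layers'.
    Positional encodings are omitted for brevity."""

    def __init__(self, vocab_size, d_model=512, nhead=8, depth=6):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        # A single set of layer weights, applied `depth` times.
        self.shared_layer = nn.TransformerEncoderLayer(
            d_model, nhead, batch_first=True)
        self.depth = depth
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, tokens):
        # Causal mask so each position attends only to earlier ones,
        # making the stack a decoder despite the "Encoder" class name.
        mask = nn.Transformer.generate_square_subsequent_mask(tokens.size(1))
        x = self.embed(tokens)
        for _ in range(self.depth):
            x = self.shared_layer(x, src_mask=mask)
        return self.lm_head(x)
```

Comparing `sum(p.numel() for p in model.parameters())` for this model against an unshared six-layer stack shows the transformer-block parameters shrink by roughly a factor of `depth`, which is the kind of storage reduction the abstract claims; per-step computation is unchanged, since the shared layer is still applied at every depth.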

