Although widely discussed, drawing order recovery (DOR) from static images remains a challenging task. Based on the idea that drawing trajectories can be recovered by connecting their trajectory components in the correct order, this work proposes a novel DOR method for static images. The method consists of two steps. First, we adopt a convolutional neural network (CNN) to predict the next possible drawing component, which converts the components in an image into a reasonable sequence; we denote this architecture Img2Seq-CNN. Second, since the sequences generated by the first step may contain errors, we construct a sequence-to-order structure (Seq2Order) to adjust these sequences into the correct order. The main contributions are: (1) the Img2Seq-CNN step performs DOR over components rather than tracing trajectories pixel by pixel, which converts static images into component sequences; (2) the Seq2Order step adopts image position codes instead of traditional point coordinates in its encoder-decoder gated recurrent neural network (GRU-RNN). The proposed method is evaluated on two well-known open handwriting databases and yields robust, competitive results on handwriting DOR tasks compared with the state of the art.
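The Seq2Order step is described only at a high level (an encoder-decoder GRU-RNN operating on image position codes of components), and no code accompanies the paper. The sketch below is therefore only a minimal illustration of such an encoder-decoder GRU in PyTorch; the class name, embedding of position codes, layer sizes, and teacher-forced decoding are assumptions made for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class Seq2OrderSketch(nn.Module):
    """Hypothetical encoder-decoder GRU over component position codes.

    The paper's actual Seq2Order architecture (position-code encoding,
    hidden sizes, any attention mechanism) is not public; this only
    illustrates the general encoder-decoder GRU idea.
    """
    def __init__(self, num_position_codes, emb_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(num_position_codes, emb_dim)
        self.encoder = nn.GRU(emb_dim, hidden_dim, batch_first=True)
        self.decoder = nn.GRU(emb_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, num_position_codes)

    def forward(self, src_codes, tgt_codes):
        # src_codes: (batch, src_len) position codes in the tentative order
        # tgt_codes: (batch, tgt_len) teacher-forced codes in the true order
        _, h = self.encoder(self.embed(src_codes))          # encode tentative sequence
        dec_out, _ = self.decoder(self.embed(tgt_codes), h) # decode corrected order
        return self.out(dec_out)                            # per-step logits over codes

# Toy usage: 100 possible position codes, two sequences of five components
model = Seq2OrderSketch(num_position_codes=100)
src = torch.randint(0, 100, (2, 5))
tgt = torch.randint(0, 100, (2, 5))
logits = model(src, tgt)  # shape (2, 5, 100)
```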
