From Paper to Machine: Extracting Strokes from Images for Use in Sketch Recognition

Sketching is a way of conveying ideas to people of diverse backgrounds and cultures without any linguistic medium. With the advent of inexpensive tablet PCs, online sketches have become more common, enabling stroke-based sketch recognition techniques, more powerful editing techniques, and automatic simulation of recognized diagrams. Online sketches provide significantly more information than paper sketches, but they still do not offer the flexibility, naturalness, and simplicity of a plain piece of paper. Recognition methods exist for paper sketches, but they tend to be domain specific and do not benefit from the advances of stroke-based sketch recognition. Our goal is to combine the power of stroke-based sketch recognition with the flexibility and ease of use of a piece of paper. In this paper we present a stroke-tracing algorithm that extracts stroke data from the pixelated image of a sketch drawn on paper. The presented method handles overlapping strokes and also attempts to capture sequencing information, which is helpful in many sketch recognition techniques. We present preliminary results of our algorithm on several hand-drawn, scanned-in pixelated images.
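The core idea of tracing strokes out of a raster image can be sketched as follows. This is a minimal, assumption-laden illustration, not the paper's algorithm: it assumes the sketch has already been binarized and thinned to one-pixel-wide "ink" pixels, then walks 8-connected neighbors from an endpoint to recover an ordered point sequence resembling an online stroke. The function names (`trace_strokes`, `neighbors`) are hypothetical.

```python
def neighbors(p, ink):
    """Return the 8-connected ink pixels adjacent to p."""
    x, y = p
    return [(x + dx, y + dy)
            for dx in (-1, 0, 1) for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0) and (x + dx, y + dy) in ink]

def trace_strokes(ink_pixels):
    """Greedily trace ordered strokes from a set of skeleton pixels.

    ink_pixels: iterable of (x, y) coordinates of ink in a thinned image.
    Returns a list of strokes, each an ordered list of pixel coordinates.
    Overlaps and branch points are not handled here; the paper's method
    addresses those cases.
    """
    ink = set(ink_pixels)
    strokes = []
    while ink:
        # Prefer an endpoint (exactly one neighbor) so the trace runs
        # end to end; fall back to any pixel for closed loops.
        start = next((p for p in ink if len(neighbors(p, ink)) == 1),
                     next(iter(ink)))
        stroke = [start]
        ink.remove(start)
        cur = start
        while True:
            nxt = neighbors(cur, ink)
            if not nxt:
                break
            cur = nxt[0]
            ink.remove(cur)
            stroke.append(cur)
        strokes.append(stroke)
    return strokes
```

For example, a diagonal run of pixels `[(0, 0), (1, 1), (2, 2), (3, 3), (4, 4)]` traces into a single ordered stroke from one endpoint to the other, which is the kind of point sequence a tablet digitizer would have produced directly.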
