Text to Multi-level MindMaps: A Novel Method for Hierarchical Visual Abstraction of Natural Language Text

1 Aug 2014  ·  Mohamed Elhoseiny, Ahmed Elgammal

MindMapping is a well-known note-taking technique that encourages learning and studying. MindMaps have traditionally been created manually to present knowledge and concepts in visual form, and there is no reliable automated approach for generating them from natural language text. This work first introduces the concept of MindMap Multilevel Visualization, which jointly visualizes and summarizes textual information. The visualization is achieved pictorially across multiple levels using semantic information (i.e., an ontology), while the summarization is provided by the highest levels, which capture the most abstract information in the text. This work also presents the first automated approach that takes text as input and generates a MindMap visualization from it. The approach can visualize text documents as multilevel MindMaps, in which a high-level MindMap node can be expanded into child MindMaps. The proposed method involves understanding the input text and converting it into an intermediate Detailed Meaning Representation (DMR). The DMR is then visualized in one of two modes, single level or multiple levels, the latter being convenient for larger texts. The generated MindMaps from both modes were evaluated through human-subject experiments on Amazon Mechanical Turk under various parameter settings.
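To make the text-to-DMR-to-MindMap pipeline concrete, the following is a minimal illustrative sketch, not the paper's implementation. It assumes the DMR can be approximated by a set of (subject, relation, object) triples supplied directly, whereas the paper derives it from semantic analysis of the input text against an ontology; the class and function names (`DMR`, `build_mindmap`, `render`) are hypothetical.

```python
from dataclasses import dataclass
from collections import defaultdict

# Toy stand-in for the paper's Detailed Meaning Representation (DMR):
# a list of (subject, relation, object) triples. In the paper the DMR is
# produced by understanding the input text; here the triples are given.
@dataclass
class DMR:
    triples: list  # e.g. [("story", "has_character", "fox"), ...]

def build_mindmap(dmr, root):
    """Build a nested dict: root -> relation -> objects.

    Loosely mirrors the multilevel idea: the root concept is the top-level
    MindMap node, each relation becomes a child node, and each object
    becomes a grandchild that is recursively expanded into a child MindMap
    when it appears as a subject elsewhere. Assumes the triple graph is
    acyclic (a cycle would recurse forever in this toy version).
    """
    children = defaultdict(list)
    for subj, rel, obj in dmr.triples:
        if subj == root:
            children[rel].append(obj)
    return {root: {rel: {o: build_mindmap(dmr, o)[o] for o in objs}
                   for rel, objs in children.items()}}

def render(tree, indent=0):
    """Print the nested MindMap as an indented outline; a single-level
    view would simply stop at depth 1."""
    for node, sub in tree.items():
        print("  " * indent + "- " + node)
        render(sub, indent + 1)

if __name__ == "__main__":
    dmr = DMR(triples=[
        ("story", "has_character", "fox"),
        ("story", "has_character", "crow"),
        ("fox", "wants", "cheese"),
        ("crow", "holds", "cheese"),
    ])
    render(build_mindmap(dmr, "story"))
```

Running the sketch prints an indented outline rooted at "story", with "fox" and "crow" expandable into their own sub-maps, which is the flavor of the multilevel expansion described in the abstract (the paper's actual rendering is pictorial, not textual).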
