Multimodal Lecture Presentations (MLP) is a large-scale benchmark dataset for testing machine learning models' multimodal understanding of educational content. To benchmark understanding of the multimodal information in lecture slides, two retrieval tasks are introduced, designed as a first step toward developing AI that can explain and illustrate lecture slides: retrieving (1) the spoken explanation for an educational figure (Figure-to-Text) and (2) an illustration to accompany a spoken explanation (Text-to-Figure).
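Both tasks are instances of cross-modal retrieval: a query from one modality (a figure or a spoken explanation) is matched against candidates from the other. A minimal sketch of that retrieval step, assuming embeddings have already been produced by some encoder (the vectors and identifiers below are hypothetical stand-ins, not part of the MLP dataset or its baselines):

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def retrieve(query_emb, candidates, k=1):
    """Rank candidate embeddings (id -> vector) by similarity to the query."""
    ranked = sorted(candidates,
                    key=lambda c: cosine(query_emb, candidates[c]),
                    reverse=True)
    return ranked[:k]

# Figure-to-Text: a figure embedding queries spoken-explanation embeddings
# (toy 3-d vectors for illustration; real embeddings are learned and larger).
explanations = {
    "expl_a": [0.9, 0.1, 0.0],
    "expl_b": [0.1, 0.8, 0.2],
}
figure = [0.85, 0.15, 0.05]
print(retrieve(figure, explanations))  # -> ['expl_a']
```

Text-to-Figure is the same procedure with the roles swapped: a spoken-explanation embedding queries a pool of figure embeddings.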
Source: Multimodal Lecture Presentations (MLP) Dataset