LUMA (Learning from Uncertain and Multimodal Data)

Introduced by Bezirganyan et al. in LUMA: A Benchmark Dataset for Learning from Uncertain and Multimodal Data

LUMA is a multimodal dataset comprising audio, image, and text modalities. It supports controlled injection of uncertainties into the data and is intended primarily for studying uncertainty quantification in multimodal classification settings. This repository provides the audio and text modalities; the image modality consists of images from the CIFAR-10/100 datasets. To download the image modality and compile the dataset with a specified amount of uncertainty, use the LUMA compilation tool.
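To illustrate the general idea of controlled uncertainty injection (this is a minimal sketch of the concept, not the LUMA compilation tool's actual API — the function name and parameters below are assumptions), one common form is label noise applied at a chosen rate:

```python
import numpy as np

def inject_label_noise(labels, num_classes, noise_rate, seed=0):
    """Illustrative sketch: flip a fraction `noise_rate` of labels
    to a uniformly chosen *different* class."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels).copy()
    n_noisy = int(round(noise_rate * len(labels)))
    idx = rng.choice(len(labels), size=n_noisy, replace=False)
    # Adding a nonzero offset modulo num_classes guarantees a wrong class.
    offsets = rng.integers(1, num_classes, size=n_noisy)
    labels[idx] = (labels[idx] + offsets) % num_classes
    return labels

clean = np.zeros(100, dtype=int)  # 100 toy samples, all class 0
noisy = inject_label_noise(clean, num_classes=10, noise_rate=0.2)
print((noisy != clean).sum())  # → 20
```

Because the noise rate is an explicit parameter, experiments can sweep it to measure how an uncertainty-quantification method responds to increasing label corruption.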
