Multimodal Deep Learning
66 papers with code • 1 benchmark • 17 datasets
Multimodal deep learning combines information from multiple modalities, such as text, images, audio, and video, to make more accurate and comprehensive predictions. Deep neural networks are trained on data that spans several of these modalities and learn to make predictions from the combined signal.
A key challenge in multimodal deep learning is how to combine information from the different modalities effectively. Common techniques include fusing the features extracted from each modality (for example, by concatenation) or using attention mechanisms to weight each modality's contribution according to its importance for the task at hand, as in the sketch below.
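To make these two fusion strategies concrete, here is a minimal PyTorch sketch. It is illustrative only: the encoders are assumed to exist upstream, and the feature dimensions, hidden size, and class count are placeholder values, not taken from any of the papers listed below.

```python
# A minimal sketch of two common fusion strategies, assuming pre-extracted
# feature vectors per modality (dimensions and class count are illustrative).
import torch
import torch.nn as nn

class ConcatFusion(nn.Module):
    """Simple fusion: concatenate per-modality features, then classify."""
    def __init__(self, text_dim=768, image_dim=2048, num_classes=10):
        super().__init__()
        self.classifier = nn.Linear(text_dim + image_dim, num_classes)

    def forward(self, text_feat, image_feat):
        fused = torch.cat([text_feat, image_feat], dim=-1)
        return self.classifier(fused)

class AttentionFusion(nn.Module):
    """Attention fusion: project modalities into a shared space and learn
    a per-example weight for each modality's contribution."""
    def __init__(self, text_dim=768, image_dim=2048, hidden=512, num_classes=10):
        super().__init__()
        self.text_proj = nn.Linear(text_dim, hidden)
        self.image_proj = nn.Linear(image_dim, hidden)
        self.score = nn.Linear(hidden, 1)  # scores each modality
        self.classifier = nn.Linear(hidden, num_classes)

    def forward(self, text_feat, image_feat):
        # Stack projected modality features: (batch, n_modalities, hidden)
        feats = torch.stack(
            [self.text_proj(text_feat), self.image_proj(image_feat)], dim=1
        )
        # Softmax over the modality axis yields task-dependent weights.
        weights = torch.softmax(self.score(torch.tanh(feats)), dim=1)
        fused = (weights * feats).sum(dim=1)  # weighted sum over modalities
        return self.classifier(fused)

# Usage with dummy features (e.g., from a text encoder and an image encoder):
text_feat = torch.randn(4, 768)
image_feat = torch.randn(4, 2048)
logits = AttentionFusion()(text_feat, image_feat)
print(logits.shape)  # torch.Size([4, 10])
```

Concatenation is the simplest baseline; attention-based weighting lets the model down-weight an uninformative or noisy modality on a per-example basis.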
Multimodal deep learning has many applications, including image captioning, speech recognition, natural language processing, and autonomous driving. By drawing on several modalities at once, these models gain accuracy and robustness, and so perform better in real-world scenarios where multiple types of information are present.
Most implemented papers
Jointly Fine-Tuning “BERT-like” Self Supervised Models to Improve Multimodal Speech Emotion Recognition
Multimodal emotion recognition from speech is an important area in affective computing.
MMEA: Entity Alignment for Multi-Modal Knowledge Graphs
In this paper, we propose a novel solution called Multi-Modal Entity Alignment (MMEA) to address the problem of entity alignment from a multi-modal view.
Creation and Validation of a Chest X-Ray Dataset with Eye-tracking and Report Dictation for AI Development
We report deep learning experiments that utilize the attention maps produced by the eye gaze dataset to show the potential utility of this data.
Multimodal Emotion Recognition with Transformer-Based Self Supervised Feature Fusion
Emotion recognition is a challenging research area given its complex nature: humans express emotional cues across various modalities such as language, facial expressions, and speech.
Multimodal Learning for Hateful Memes Detection
Memes are used for spreading ideas through social networks.
Detecting Video Game Player Burnout with the Use of Sensor Data and Machine Learning
In this article, we propose methods based on sensor data analysis for predicting whether a player will win a future encounter.
Image and Text fusion for UPMC Food-101 using BERT and CNNs
The modern digital world is becoming more and more multimodal.
Detecting Hate Speech in Memes Using Multimodal Deep Learning Approaches: Prize-winning solution to Hateful Memes Challenge
Memes on the Internet are often harmless and sometimes amusing.
Piano Skills Assessment
Can a computer determine a piano player's skill level?
Deep Learning for Android Malware Defenses: a Systematic Literature Review
In this paper, we conduct a systematic literature review to identify and analyze how deep learning approaches have been applied to malware defenses in the Android environment.