Multimodal Deep Learning

67 papers with code • 1 benchmark • 17 datasets

Multimodal deep learning is a branch of deep learning that combines information from multiple modalities, such as text, images, audio, and video, to make more accurate and comprehensive predictions. It involves training deep neural networks on data containing several types of information and using the network to make predictions from this combined input.
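As a minimal sketch of this idea, the example below builds a toy two-modality classifier: each modality gets its own encoder, the resulting embeddings are concatenated, and a shared head predicts from the fused vector. The encoders here are random projections standing in for real networks (e.g. a CNN for images, a transformer for text); all weights, shapes, and inputs are illustrative assumptions, not from any specific paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in per-modality encoders: in a real system these would be
# deep networks; random tanh projections are used purely to illustrate
# the data flow of combining two modalities.
def encode_image(x, W):
    return np.tanh(x @ W)          # image features -> 16-dim embedding

def encode_text(x, W):
    return np.tanh(x @ W)          # text features  -> 16-dim embedding

W_img = rng.normal(size=(64, 16))  # toy image-encoder weights
W_txt = rng.normal(size=(32, 16))  # toy text-encoder weights
W_out = rng.normal(size=(32, 3))   # joint classifier over 3 toy classes

image = rng.normal(size=(1, 64))   # fake image feature vector
text = rng.normal(size=(1, 32))    # fake text feature vector

# Fuse by concatenating the two embeddings, then classify jointly.
fused = np.concatenate([encode_image(image, W_img),
                        encode_text(text, W_txt)], axis=1)
logits = fused @ W_out
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
print(probs.shape)  # (1, 3)
```

The key point is that the prediction head sees both modalities at once, so it can exploit correlations between them that a single-modality model cannot.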

A key challenge in multimodal deep learning is how to combine information from multiple modalities effectively. This can be done with a variety of techniques, such as fusing the features extracted from each modality, or using attention mechanisms to weight each modality's contribution according to its importance for the task at hand.
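The attention-weighting idea can be sketched as follows: score each modality's embedding for relevance, normalize the scores with a softmax, and take the weighted sum as the fused representation. The scoring vector here is random purely for illustration; in a trained model it would be learned, and the modality names and dimensions are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Three modality embeddings of equal dimension (illustrative values).
modalities = {name: rng.normal(size=(8,))
              for name in ("text", "audio", "video")}

# A learned scoring vector would normally produce these relevance
# scores; w_score is random here, purely for illustration.
w_score = rng.normal(size=(8,))
scores = np.array([emb @ w_score for emb in modalities.values()])

# Softmax turns the raw scores into attention weights summing to 1.
weights = np.exp(scores - scores.max())
weights /= weights.sum()

# The fused representation is the attention-weighted sum of modalities,
# so more relevant modalities contribute more to the prediction.
fused = sum(w * emb for w, emb in zip(weights, modalities.values()))
print(weights.round(3), fused.shape)
```

Unlike plain concatenation, this lets the model down-weight an uninformative or noisy modality per example, which is one reason attention-based fusion is popular for robustness.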

Multimodal deep learning has many applications, including image captioning, speech recognition, natural language processing, and autonomous vehicles. By combining information from multiple modalities, multimodal deep learning can improve the accuracy and robustness of models, enabling them to perform better in real-world scenarios where multiple types of information are present.

Latest papers with no code

Research on Image Recognition Technology Based on Multimodal Deep Learning

no code yet • 6 May 2024

This project investigates human multi-modal behavior identification algorithms using deep neural networks.

Feature importance to explain multimodal prediction models. A clinical use case

no code yet • 29 Apr 2024

In this work, we develop a multimodal deep-learning model for post-operative mortality prediction using pre-operative and per-operative data from elderly hip fracture patients.

Integrating Wearable Sensor Data and Self-reported Diaries for Personalized Affect Forecasting

no code yet • 16 Mar 2024

Emotional states, as indicators of affect, are pivotal to overall health, making their accurate prediction before onset crucial.

A Multimodal Intermediate Fusion Network with Manifold Learning for Stress Detection

no code yet • 12 Mar 2024

We compared various dimensionality reduction techniques for different variations of unimodal and multimodal networks.

Multimodal deep learning approach to predicting neurological recovery from coma after cardiac arrest

no code yet • 9 Mar 2024

This work showcases our team's (The BEEGees) contributions to the 2023 George B. Moody PhysioNet Challenge.

Multimodal Learning To Improve Cardiac Late Mechanical Activation Detection From Cine MR Images

no code yet • 28 Feb 2024

This paper presents a multimodal deep learning framework that utilizes advanced image techniques to improve the performance of clinical analysis heavily dependent on routinely acquired standard images.

Multimodal Deep Learning of Word-of-Mouth Text and Demographics to Predict Customer Rating: Handling Consumer Heterogeneity in Marketing

no code yet • 22 Jan 2024

However, many consumers today post evaluations of specific products on online platforms, which can be a valuable source of such unobservable differences among consumers.

Multimodal Urban Areas of Interest Generation via Remote Sensing Imagery and Geographical Prior

no code yet • 12 Jan 2024

Unlike conventional AOI generation methods, such as the Road-cut method that segments road networks at various levels, our approach does not rely on semantic segmentation algorithms that depend on pixel-level classification.

Predicting the Skies: A Novel Model for Flight-Level Passenger Traffic Forecasting

no code yet • 7 Jan 2024

This study introduces a novel, multimodal deep learning approach to the challenge of predicting flight-level passenger traffic, yielding substantial accuracy improvements compared to traditional models.

Multimodal self-supervised learning for lesion localization

no code yet • 3 Jan 2024

Multimodal deep learning utilizing imaging and diagnostic reports has made impressive progress in the field of medical imaging diagnostics, demonstrating a particularly strong capability for auxiliary diagnosis in cases where sufficient annotation information is lacking.