Multimodal Deep Learning
92 papers with code • 1 benchmark • 21 datasets
Multimodal deep learning combines information from multiple modalities, such as text, images, audio, and video, to make more accurate and comprehensive predictions. It involves training deep neural networks on data that includes several types of information and using the network to make predictions from the combined inputs.
One of the key challenges in multimodal deep learning is how to effectively combine information from multiple modalities. This can be done with a variety of techniques, such as fusing the features extracted from each modality or using attention mechanisms to weight each modality's contribution according to its importance for the task at hand.
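As a rough illustration, the sketch below shows both strategies on pre-extracted per-modality feature vectors; the module names, dimensions, and the shared scoring function are illustrative assumptions, not taken from any specific paper listed here.

```python
# Minimal PyTorch sketch of two common fusion strategies (illustrative only):
# concatenation of per-modality features vs. an attention-weighted sum.
import torch
import torch.nn as nn

class ConcatFusion(nn.Module):
    """Fuse modalities by concatenating their feature vectors."""
    def __init__(self, dims, hidden=256, num_classes=10):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(sum(dims), hidden), nn.ReLU(), nn.Linear(hidden, num_classes)
        )

    def forward(self, feats):  # feats: list of [batch, dim_i] tensors
        return self.head(torch.cat(feats, dim=-1))

class AttentionFusion(nn.Module):
    """Weight each modality by a learned, input-dependent importance score."""
    def __init__(self, dim, num_classes=10):
        super().__init__()
        self.score = nn.Linear(dim, 1)      # shared scoring function over modalities
        self.head = nn.Linear(dim, num_classes)

    def forward(self, feats):               # feats: list of [batch, dim] tensors
        stacked = torch.stack(feats, dim=1)                   # [batch, M, dim]
        weights = torch.softmax(self.score(stacked), dim=1)   # [batch, M, 1]
        fused = (weights * stacked).sum(dim=1)                # weighted sum over modalities
        return self.head(fused)

# Example: fuse a 512-d image feature and a 512-d text feature for 10-way classification.
img_feat, txt_feat = torch.randn(4, 512), torch.randn(4, 512)
logits = AttentionFusion(dim=512)([img_feat, txt_feat])
```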
Multimodal deep learning has many applications, including image captioning, speech recognition, natural language processing, and perception for autonomous vehicles. By combining information from multiple modalities, it can improve the accuracy and robustness of models, enabling them to perform better in real-world scenarios where several types of information are present.
Most implemented papers
LLaMA-Adapter: Efficient Fine-tuning of Language Models with Zero-init Attention
We present LLaMA-Adapter, a lightweight adaption method to efficiently fine-tune LLaMA into an instruction-following model.
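The zero-init attention named in the title can be pictured as a learnable gate, initialised to zero, that scales the influence of the inserted adaption prompt so training starts from the frozen model's behaviour. The snippet below is a simplified, assumed rendering of that gating idea, not the released LLaMA-Adapter code.

```python
import torch
import torch.nn as nn

class ZeroInitGate(nn.Module):
    """Zero-initialised gate: the adaption-prompt branch contributes nothing at step 0."""
    def __init__(self):
        super().__init__()
        self.gate = nn.Parameter(torch.zeros(1))  # starts at 0, learned during fine-tuning

    def forward(self, frozen_out, prompt_out):
        # frozen_out: attention output over the original tokens (frozen model weights)
        # prompt_out: attention output over the inserted learnable prompt
        return frozen_out + torch.tanh(self.gate) * prompt_out
```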
LanguageBind: Extending Video-Language Pretraining to N-modality by Language-based Semantic Alignment
We thus propose VIDAL-10M, a dataset pairing Video, Infrared, Depth, and Audio with their corresponding Language.
ShapeWorld - A new test methodology for multimodal language understanding
We introduce a novel framework for evaluating multimodal deep learning models with respect to their language understanding and generalization abilities.
Multimodal deep networks for text and image-based document classification
Classification of document images is a critical step for archival of old manuscripts, online subscription and administrative procedures.
Are These Birds Similar: Learning Branched Networks for Fine-grained Representations
In recent years, natural language descriptions have been used to obtain information on discriminative parts of the object.
Supervised Video Summarization via Multiple Feature Sets with Parallel Attention
The proposed architecture applies an attention mechanism before fusing motion features with features representing the (static) visual content, i.e., features derived from an image classification model.
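A hypothetical sketch of this general pattern (parallel attention over the motion and static-appearance streams, followed by a simple additive fusion and a per-frame importance score) is given below; it is not the paper's exact architecture, and all dimensions are placeholders.

```python
import torch
import torch.nn as nn

class ParallelAttentionFusion(nn.Module):
    """Attend within each feature stream separately, then fuse and score frames."""
    def __init__(self, dim=512, heads=4):
        super().__init__()
        self.motion_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.static_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.score = nn.Linear(dim, 1)  # frame-level importance after fusion

    def forward(self, motion, static):  # both: [batch, frames, dim]
        m, _ = self.motion_attn(motion, motion, motion)  # self-attention over motion stream
        s, _ = self.static_attn(static, static, static)  # self-attention over appearance stream
        fused = m + s                                    # simple additive fusion
        return self.score(fused).squeeze(-1)             # [batch, frames] importance scores

scores = ParallelAttentionFusion()(torch.randn(2, 16, 512), torch.randn(2, 16, 512))
```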
aiMotive Dataset: A Multimodal Dataset for Robust Autonomous Driving with Long-Range Perception
The dataset consists of 176 scenes with synchronized and calibrated LiDAR, camera, and radar sensors covering a 360-degree field of view.
ImageBind: One Embedding Space To Bind Them All
We show that not all combinations of paired data are necessary to train such a joint embedding: image-paired data alone is sufficient to bind the modalities together.
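The binding idea can be sketched as a contrastive (InfoNCE-style) alignment of each extra modality to the image embedding space, using only image-paired batches; the loss below is a generic version with placeholder embeddings and dimensions, not ImageBind's actual training code.

```python
import torch
import torch.nn.functional as F

def info_nce(anchor, positive, temperature=0.07):
    """Symmetric contrastive loss between two batches of paired embeddings."""
    a = F.normalize(anchor, dim=-1)
    p = F.normalize(positive, dim=-1)
    logits = a @ p.t() / temperature                    # [batch, batch] similarity matrix
    targets = torch.arange(a.size(0), device=a.device)  # matching pairs lie on the diagonal
    return (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets)) / 2

# Training separately on (image, audio) and (image, depth) pairs anchors both modalities
# to the same image encoder, so audio and depth become comparable without audio-depth pairs.
img_emb, audio_emb = torch.randn(8, 1024), torch.randn(8, 1024)
loss = info_nce(img_emb, audio_emb)
```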
Multimodal Deep Learning for Robust RGB-D Object Recognition
Robust object recognition is a crucial ingredient of many, if not all, real-world robotics applications.
A multimodal deep learning framework for scalable content based visual media retrieval
We propose a novel, efficient, modular, and scalable framework for content-based visual media retrieval that leverages deep learning and works for both images and videos, and we introduce an efficient comparison and filtering metric for retrieval.