Multimodal Generation
33 papers with code • 1 benchmark • 4 datasets
Multimodal generation is the task of producing outputs that span multiple modalities, such as images, text, and sound. It is typically approached with deep learning models trained on multimodal data, so that the generated output is informed by more than one type of input.
For example, a multimodal generation model could be trained to caption images by combining visual and textual information: it learns to identify the objects in an image and describe them in natural language, while accounting for context and the relationships between those objects.
Other applications include generating realistic images from textual descriptions and producing audio descriptions of video content. By combining modalities in this way, multimodal generation models can produce more accurate and comprehensive output, making them useful across a wide range of tasks.
Most implemented papers
Finite Scalar Quantization: VQ-VAE Made Simple
Each dimension is quantized to a small set of fixed values, leading to an (implicit) codebook given by the product of these sets.
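A minimal sketch of this per-dimension rounding idea, assuming a hypothetical `levels` list giving the number of allowed values per latent dimension; this is not the authors' reference implementation.

```python
import numpy as np

def fsq_quantize(z, levels):
    """Round each latent dimension to one of `levels[d]` fixed values in [-1, 1]."""
    z = np.tanh(z)                          # bound each dimension to (-1, 1)
    out = np.empty_like(z)
    for d, num_levels in enumerate(levels):
        # Evenly spaced values per dimension; the implicit codebook is the
        # Cartesian product of these per-dimension sets (size prod(levels)).
        grid = np.linspace(-1.0, 1.0, num_levels)
        idx = np.abs(z[..., d:d + 1] - grid).argmin(axis=-1)
        out[..., d] = grid[idx]
    return out

# Example: a 3-dimensional latent with 8, 5, and 5 levels -> 200 implicit codes.
z = np.random.randn(4, 3)
print(fsq_quantize(z, levels=[8, 5, 5]))
```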
Retrieval-Augmented Generation for AI-Generated Content: A Survey
We first classify RAG foundations according to how the retriever augments the generator, distilling the fundamental abstractions of the augmentation methodologies for various retrievers and generators.
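A toy sketch of the retrieve-then-augment pattern the survey classifies, using a bag-of-words retriever and a placeholder `generate` function; both are illustrative assumptions, not components from the paper.

```python
from collections import Counter

def overlap_score(query, doc):
    q, d = Counter(query.lower().split()), Counter(doc.lower().split())
    return sum((q & d).values())            # simple word-overlap similarity

def rag_answer(query, corpus, generate, k=2):
    # Retriever: pick the k most relevant documents for the query.
    retrieved = sorted(corpus, key=lambda doc: overlap_score(query, doc), reverse=True)[:k]
    # Augmentation: the retrieved text is injected into the generator's input.
    prompt = "Context:\n" + "\n".join(retrieved) + f"\nQuestion: {query}\nAnswer:"
    return generate(prompt)

corpus = ["FSQ replaces vector quantization with per-dimension rounding.",
          "Diffusion models generate images from noise."]
print(rag_answer("What does FSQ do?", corpus, generate=lambda prompt: prompt))
```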
PMG : Personalized Multimodal Generation with Large Language Models
Such user preferences are then fed into a generator, such as a multimodal LLM or diffusion model, to produce personalized content.
GANs N' Roses: Stable, Controllable, Diverse Image to Image Translation (works for videos too!)
This adversarial loss guarantees the map is diverse -- a very wide range of anime can be produced from a single content code.
Grounding Language Models to Images for Multimodal Inputs and Outputs
We propose an efficient method to ground pretrained text-only language models to the visual domain, enabling them to process arbitrarily interleaved image-and-text data, and generate text interleaved with retrieved images.
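A minimal sketch of the grounding idea, assuming visual features are mapped by a learned linear projection into the language model's token-embedding space so image and text tokens can be interleaved; the dimensions and module names below are illustrative, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class VisualGrounding(nn.Module):
    def __init__(self, visual_dim=768, lm_embed_dim=4096, n_visual_tokens=4):
        super().__init__()
        # Learned projection: one image embedding -> a few "visual tokens" in LM space.
        self.project = nn.Linear(visual_dim, lm_embed_dim * n_visual_tokens)
        self.n_visual_tokens = n_visual_tokens
        self.lm_embed_dim = lm_embed_dim

    def forward(self, image_features, text_embeddings):
        # image_features: (batch, visual_dim); text_embeddings: (batch, seq, lm_embed_dim)
        visual_tokens = self.project(image_features).view(
            -1, self.n_visual_tokens, self.lm_embed_dim)
        # Interleave by prepending visual tokens; the text-only LM then attends to both.
        return torch.cat([visual_tokens, text_embeddings], dim=1)

fused = VisualGrounding()(torch.randn(2, 768), torch.randn(2, 10, 4096))
print(fused.shape)  # (2, 14, 4096)
```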
Accountable Textual-Visual Chat Learns to Reject Human Instructions in Image Re-creation
This stream is subsequently fed into the decoder-based transformer to generate visual re-creations and textual feedback in the second stage.
Continual and Multi-Task Architecture Search
Architecture search is the process of automatically learning the neural model or cell structure that best suits the given task.
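A toy illustration of architecture search as defined above: sample candidate structures from a search space and keep the one that scores best on the task. This is a hypothetical random-search sketch, not the paper's continual multi-task method.

```python
import random

SEARCH_SPACE = {"layers": [2, 4, 8], "hidden": [64, 128, 256], "activation": ["relu", "gelu"]}

def sample_architecture(rng):
    return {name: rng.choice(options) for name, options in SEARCH_SPACE.items()}

def search(evaluate, trials=20, seed=0):
    rng = random.Random(seed)
    candidates = [sample_architecture(rng) for _ in range(trials)]
    return max(candidates, key=evaluate)    # keep the structure that best suits the task

# Example with a toy scoring function standing in for validation accuracy.
best = search(evaluate=lambda arch: arch["layers"] * arch["hidden"])
print(best)
```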
Unconditional Image-Text Pair Generation with Multimodal Cross Quantizer
To learn a multimodal semantic correlation in a quantized space, we combine VQ-VAE with a Transformer encoder and apply an input masking strategy.
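A minimal sketch of the input-masking idea over a joint quantized sequence, assuming image and text have already been mapped to discrete code indices; the mask ratio and names are illustrative assumptions, not the paper's settings.

```python
import numpy as np

MASK_ID = 0  # reserved mask token

def mask_joint_sequence(image_codes, text_codes, mask_ratio=0.3, rng=np.random):
    tokens = np.concatenate([image_codes, text_codes])     # joint quantized sequence
    masked = tokens.copy()
    positions = rng.random(len(tokens)) < mask_ratio        # sample positions to hide
    masked[positions] = MASK_ID
    # A Transformer encoder would be trained to predict `tokens` at `positions`,
    # forcing it to model correlations across the image and text codes.
    return masked, positions, tokens

masked, positions, targets = mask_joint_sequence(np.arange(1, 9), np.arange(9, 13))
print(masked, targets[positions])
```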
Multimodal Generation of Novel Action Appearances for Synthetic-to-Real Recognition of Activities of Daily Living
We tackle this challenge and introduce an activity domain generation framework which creates novel ADL appearances (novel domains) from different existing activity modalities (source domains) inferred from video training data.
Multimedia Generative Script Learning for Task Planning
Goal-oriented generative script learning aims to generate subsequent steps to reach a particular goal, which is an essential task to assist robots or humans in performing stereotypical activities.