Search Results for author: Alberto Baldrati

Found 11 papers, 9 papers with code

LaDI-VTON: Latent Diffusion Textual-Inversion Enhanced Virtual Try-On

1 code implementation • 22 May 2023 • Davide Morelli, Alberto Baldrati, Giuseppe Cartella, Marcella Cornia, Marco Bertini, Rita Cucchiara

In this context, image-based virtual try-on, which consists of generating a novel image of a target model wearing a given in-shop garment, has yet to capitalize on the potential of these powerful generative solutions.

Virtual Try-on

Multimodal Garment Designer: Human-Centric Latent Diffusion Models for Fashion Image Editing

1 code implementation • ICCV 2023 • Alberto Baldrati, Davide Morelli, Giuseppe Cartella, Marcella Cornia, Marco Bertini, Rita Cucchiara

Given the lack of existing datasets suitable for the task, we also extend two existing fashion datasets, namely Dress Code and VITON-HD, with multimodal annotations collected in a semi-automatic manner.

Multimodal fashion image editing

Composed Image Retrieval using Contrastive Learning and Task-oriented CLIP-based Features

1 code implementation • 22 Aug 2023 • Alberto Baldrati, Marco Bertini, Tiberio Uricchio, Alberto del Bimbo

Given a query composed of a reference image and a relative caption, the goal of Composed Image Retrieval is to retrieve images that are visually similar to the reference one while integrating the modifications expressed by the caption.

Contrastive Learning Image Retrieval +1
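The composed-retrieval idea above can be illustrated with a minimal numpy sketch. Note this is a toy stand-in, not the paper's actual method: the learned task-oriented combiner network is replaced here by a simple weighted sum of unit-norm features, and random 4-d vectors stand in for CLIP embeddings.

```python
import numpy as np

def normalize(v):
    # L2-normalize so that a dot product equals cosine similarity.
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

def compose_query(ref_img_feat, caption_feat, alpha=0.5):
    # Hypothetical combiner: weighted sum of the reference-image and
    # caption features, re-normalized (the paper learns this fusion).
    fused = alpha * ref_img_feat + (1 - alpha) * caption_feat
    return normalize(fused)

def retrieve(query_feat, gallery_feats, k=3):
    # Rank gallery images by cosine similarity to the composed query.
    sims = gallery_feats @ query_feat
    return np.argsort(-sims)[:k]

# Toy 4-d "embeddings" standing in for CLIP features.
ref = normalize(np.array([1.0, 0.0, 0.0, 0.0]))
cap = normalize(np.array([0.0, 1.0, 0.0, 0.0]))
gallery = normalize(np.array([
    [1.0, 1.0, 0.0, 0.0],   # matches both the visual and textual cue
    [1.0, 0.0, 0.0, 0.0],   # visually identical to the reference only
    [0.0, 0.0, 1.0, 0.0],   # unrelated
]))

query = compose_query(ref, cap)
ranking = retrieve(query, gallery)
print(ranking)  # item 0 (visual + textual match) ranks first
```

The point of the toy gallery: the item combining both the reference's visual content and the caption's modification outranks an item identical to the reference alone.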

Zero-Shot Composed Image Retrieval with Textual Inversion

3 code implementations • ICCV 2023 • Alberto Baldrati, Lorenzo Agnolucci, Marco Bertini, Alberto del Bimbo

Composed Image Retrieval (CIR) aims to retrieve a target image based on a query composed of a reference image and a relative caption that describes the difference between the two images.

Retrieval Zero-Shot Composed Image Retrieval (ZS-CIR)

OpenFashionCLIP: Vision-and-Language Contrastive Learning with Open-Source Fashion Data

1 code implementation • 11 Sep 2023 • Giuseppe Cartella, Alberto Baldrati, Davide Morelli, Marcella Cornia, Marco Bertini, Rita Cucchiara

The inexorable growth of online shopping and e-commerce demands scalable and robust machine learning-based solutions to accommodate customer requirements.

Contrastive Learning Domain Generalization +2

Mapping Memes to Words for Multimodal Hateful Meme Classification

1 code implementation • 12 Oct 2023 • Giovanni Burbi, Alberto Baldrati, Lorenzo Agnolucci, Marco Bertini, Alberto del Bimbo

Multimodal image-text memes are prevalent on the internet, serving as a unique form of communication that combines visual and textual elements to convey humor, ideas, or emotions.

Hateful Meme Classification Language Modelling

Multimodal-Conditioned Latent Diffusion Models for Fashion Image Editing

1 code implementation • 21 Mar 2024 • Alberto Baldrati, Davide Morelli, Marcella Cornia, Marco Bertini, Rita Cucchiara

Fashion illustration is a crucial medium for designers to convey their creative vision and transform design concepts into tangible representations that showcase the interplay between clothing and the human body.

Denoising Virtual Try-on

ECO: Ensembling Context Optimization for Vision-Language Models

no code implementations • 26 Jul 2023 • Lorenzo Agnolucci, Alberto Baldrati, Francesco Todino, Federico Becattini, Marco Bertini, Alberto del Bimbo

Among these, the CLIP model has shown remarkable capabilities for zero-shot transfer by matching an image and a custom textual prompt in its latent space.

Classification Image Classification
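The zero-shot transfer mechanism mentioned above can be sketched in a few lines. This is a hedged toy illustration of CLIP-style prompt matching, not the ECO method itself: random low-dimensional vectors stand in for the encoders' outputs, and the class whose prompt embedding is most cosine-similar to the image embedding is predicted.

```python
import numpy as np

def normalize(v):
    # Unit-norm features, so dot products are cosine similarities.
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

def zero_shot_classify(image_feat, prompt_feats, temperature=100.0):
    # CLIP-style zero-shot transfer: score each class prompt by cosine
    # similarity with the image, then softmax the scaled similarities.
    logits = temperature * (prompt_feats @ image_feat)
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return int(probs.argmax()), probs

# Toy 3-d embeddings for prompts like "a photo of a {class}"
# (hypothetical values; real CLIP features are 512-d or larger).
prompts = normalize(np.array([
    [1.0, 0.2, 0.0],  # stand-in for "a photo of a cat"
    [0.0, 1.0, 0.2],  # stand-in for "a photo of a dog"
]))
image = normalize(np.array([0.9, 0.3, 0.1]))

pred, probs = zero_shot_classify(image, prompts)
print(pred)  # 0: the image is closest to the first prompt
```

Context optimization methods such as ECO replace the hand-written prompt text with learned context vectors; the similarity-and-softmax step shown here stays the same.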

Exploiting CLIP-based Multi-modal Approach for Artwork Classification and Retrieval

no code implementations • 21 Sep 2023 • Alberto Baldrati, Marco Bertini, Tiberio Uricchio, Alberto del Bimbo

Recent advances in multimodal image pretraining show that visual models trained with semantically dense textual supervision tend to have better generalization capabilities than those trained using categorical attributes or unsupervised techniques. In this work, we investigate how the recent CLIP model can be applied to several tasks in the artwork domain.

Retrieval Zero-Shot Learning
