Search Results for author: Hardik Shah

Found 9 papers, 2 papers with code

Text-to-Sticker: Style Tailoring Latent Diffusion Models for Human Expression

no code implementations 17 Nov 2023 Animesh Sinha, Bo Sun, Anmol Kalia, Arantxa Casanova, Elliot Blanchard, David Yan, Winnie Zhang, Tony Nelli, Jiahui Chen, Hardik Shah, Licheng Yu, Mitesh Kumar Singh, Ankit Ramchandani, Maziar Sanjabi, Sonal Gupta, Amy Bearman, Dhruv Mahajan

Evaluation results show our method improves visual quality by 14%, prompt alignment by 16.2% and scene diversity by 15.3%, compared to prompt engineering the base Emu model for sticker generation.

Image Generation Prompt Engineering

End-to-End Neural Network Compression via $\frac{\ell_1}{\ell_2}$ Regularized Latency Surrogates

no code implementations 9 Jun 2023 Anshul Nasery, Hardik Shah, Arun Sai Suggala, Prateek Jain

Our algorithm is versatile and can be used with many popular compression methods including pruning, low-rank factorization, and quantization.

Neural Architecture Search Neural Network Compression +2

DIME-FM: DIstilling Multimodal and Efficient Foundation Models

no code implementations 31 Mar 2023 Ximeng Sun, Pengchuan Zhang, Peizhao Zhang, Hardik Shah, Kate Saenko, Xide Xia

We transfer the knowledge from the pre-trained CLIP-ViTL/14 model to a ViT-B/32 model, with only 40M public images and 28.4M unpaired public sentences.

Image Classification

DIME-FM: DIstilling Multimodal and Efficient Foundation Models

no code implementations ICCV 2023 Ximeng Sun, Pengchuan Zhang, Peizhao Zhang, Hardik Shah, Kate Saenko, Xide Xia

In this paper, we introduce a new distillation mechanism (DIME-FM) that allows us to transfer the knowledge contained in large VLFMs to smaller, customized foundation models using a relatively small amount of inexpensive, unpaired images and sentences.

Image Classification

Tell Your Story: Task-Oriented Dialogs for Interactive Content Creation

no code implementations 8 Nov 2022 Satwik Kottur, Seungwhan Moon, Aram H. Markosyan, Hardik Shah, Babak Damavandi, Alborz Geramifard

We collect a new dataset C3 (Conversational Content Creation), comprising 10k dialogs conditioned on media montages simulated from a large media collection.

Benchmarking Retrieval

VoLTA: Vision-Language Transformer with Weakly-Supervised Local-Feature Alignment

1 code implementation 9 Oct 2022 Shraman Pramanick, Li Jing, Sayan Nag, Jiachen Zhu, Hardik Shah, Yann LeCun, Rama Chellappa

Extensive experiments on a wide range of vision- and vision-language downstream tasks demonstrate the effectiveness of VoLTA on fine-grained applications without compromising the coarse-grained downstream performance, often outperforming methods using significantly more caption and box annotations.

Object Detection +2
