Search Results for author: Tripti Shukla

Found 5 papers, 1 paper with code

Design-o-meter: Towards Evaluating and Refining Graphic Designs

no code implementations22 Nov 2024 Sahil Goyal, Abhinav Mahajan, Swasti Mishra, Prateksha Udhayanan, Tripti Shukla, K J Joseph, Balaji Vasan Srinivasan

To the best of our knowledge, Design-o-meter is the first approach that scores and refines designs in a unified framework despite the inherent subjectivity and ambiguity of the setting.

Test-time Conditional Text-to-Image Synthesis Using Diffusion Models

no code implementations16 Nov 2024 Tripti Shukla, Srikrishna Karanam, Balaji Vasan Srinivasan

To address this gap in the current literature, we propose TINTIN: Test-time Conditional Text-to-Image Synthesis using Diffusion Models, a new training-free, test-time-only algorithm to condition text-to-image diffusion model outputs on conditioning factors such as color palettes and edge maps.

Conditional Text-to-Image Synthesis Denoising +1

An Image is Worth Multiple Words: Multi-attribute Inversion for Constrained Text-to-Image Synthesis

no code implementations20 Nov 2023 Aishwarya Agarwal, Srikrishna Karanam, Tripti Shukla, Balaji Vasan Srinivasan

Another line of techniques expands the inversion space to learn multiple embeddings, but does so only along the layer dimension (e.g., one per layer of the DDPM model) or the timestep dimension (one per set of timesteps in the denoising process), leading to suboptimal attribute disentanglement.

Attribute Denoising +2

SALAD: Source-free Active Label-Agnostic Domain Adaptation for Classification, Segmentation and Detection

1 code implementation24 May 2022 Divya Kothandaraman, Sumit Shekhar, Abhilasha Sancheti, Manoj Ghuhan, Tripti Shukla, Dinesh Manocha

SALAD has three key benefits: (i) it is task-agnostic, and can be applied across various visual tasks such as classification, segmentation and detection; (ii) it can handle shifts in output label space from the pre-trained source network to the target domain; (iii) it does not require access to source data for adaptation.

Active Learning Domain Adaptation +2
