Search Results for author: Dhruba Ghosh

Found 5 papers, 4 papers with code

Getting it Right: Improving Spatial Consistency in Text-to-Image Models

1 code implementation 1 Apr 2024 Agneet Chatterjee, Gabriela Ben Melech Stan, Estelle Aflalo, Sayak Paul, Dhruba Ghosh, Tejas Gokhale, Ludwig Schmidt, Hannaneh Hajishirzi, Vasudev Lal, Chitta Baral, Yezhou Yang

One of the key shortcomings in current text-to-image (T2I) models is their inability to consistently generate images which faithfully follow the spatial relationships specified in the text prompt.
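As a rough illustration of the kind of spatial check this line of work targets, the sketch below tests a "left of" relation between two detected objects using bounding-box centers. The detector output format, box coordinates, and relation names are illustrative assumptions, not the paper's actual evaluation code.

# Hypothetical sketch: does a generated image respect a spatial relation
# from the prompt, e.g. "a dog to the left of a bicycle"?

def box_center(box):
    """Center (x, y) of a bounding box given as (x_min, y_min, x_max, y_max)."""
    x_min, y_min, x_max, y_max = box
    return (x_min + x_max) / 2.0, (y_min + y_max) / 2.0

def spatial_relation_holds(box_a, box_b, relation):
    """Check a coarse spatial relation between two detected objects."""
    (ax, ay), (bx, by) = box_center(box_a), box_center(box_b)
    if relation == "left of":
        return ax < bx
    if relation == "right of":
        return ax > bx
    if relation == "above":
        return ay < by   # image y-axis grows downward
    if relation == "below":
        return ay > by
    raise ValueError(f"unknown relation: {relation}")

# Toy example: "a dog to the left of a bicycle"
dog_box, bicycle_box = (10, 40, 80, 120), (150, 30, 260, 140)
print(spatial_relation_holds(dog_box, bicycle_box, "left of"))  # True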

GenEval: An Object-Focused Framework for Evaluating Text-to-Image Alignment

1 code implementation NeurIPS 2023 Dhruba Ghosh, Hanna Hajishirzi, Ludwig Schmidt

Recent breakthroughs in diffusion models, multimodal pretraining, and efficient finetuning have led to an explosion of text-to-image generative models.
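In the spirit of an object-focused evaluation, a minimal sketch is shown below: run any object detector on the generated image and check whether the detected objects match what the prompt asked for. The spec format, confidence threshold, and scoring rule are assumptions for illustration, not GenEval's actual interface.

from collections import Counter

def object_alignment_score(detections, prompt_spec):
    """
    detections: list of (class_name, confidence) pairs from any object detector.
    prompt_spec: dict mapping class_name -> required count, e.g. {"dog": 2}.
    Returns the fraction of prompt requirements satisfied by the detections.
    """
    counts = Counter(cls for cls, conf in detections if conf >= 0.5)
    satisfied = sum(counts[cls] >= n for cls, n in prompt_spec.items())
    return satisfied / len(prompt_spec)

# Toy example: prompt "two dogs and a frisbee"; detector found 2 dogs, 1 frisbee.
dets = [("dog", 0.9), ("dog", 0.8), ("frisbee", 0.7)]
print(object_alignment_score(dets, {"dog": 2, "frisbee": 1}))  # 1.0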


The Effect of Model Size on Worst-Group Generalization

no code implementations 8 Dec 2021 Alan Pham, Eunice Chan, Vikranth Srivatsa, Dhruba Ghosh, Yaoqing Yang, Yaodong Yu, Ruiqi Zhong, Joseph E. Gonzalez, Jacob Steinhardt

Overparameterization is shown to result in poor test accuracy on rare subgroups under a variety of settings where subgroup information is known.
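The metric behind this claim is worst-group accuracy: overall accuracy can look fine while a rare subgroup is badly misclassified. The sketch below computes it from per-example predictions and group labels; the data is toy and the function is an illustration, not the paper's code.

from collections import defaultdict

def worst_group_accuracy(preds, labels, groups):
    """Minimum per-group accuracy over all subgroups."""
    correct, total = defaultdict(int), defaultdict(int)
    for p, y, g in zip(preds, labels, groups):
        correct[g] += int(p == y)
        total[g] += 1
    return min(correct[g] / total[g] for g in total)

# Toy example: the common group is mostly right, the rare group is all wrong.
preds  = [1, 1, 0, 1, 0, 0, 0, 1]
labels = [1, 1, 0, 1, 0, 1, 1, 0]
groups = ["common"] * 6 + ["rare"] * 2
print(worst_group_accuracy(preds, labels, groups))  # 0.0 (rare group misclassified)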

Are Larger Pretrained Language Models Uniformly Better? Comparing Performance at the Instance Level

1 code implementation Findings (ACL) 2021 Ruiqi Zhong, Dhruba Ghosh, Dan Klein, Jacob Steinhardt

We develop statistically rigorous methods to address this and, after accounting for pretraining and finetuning noise, find that BERT-Large is worse than BERT-Mini on at least 1-4% of instances across MNLI, SST-2, and QQP, compared to the overall accuracy improvement of 2-10%.
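A rough illustration of an instance-level comparison is sketched below: for each test instance, compare how often two models get it right across several finetuning seeds, and count the instances where the larger model does worse. The paper's method is statistically more careful; the array layout and toy accuracies here are assumptions.

import numpy as np

def instance_level_regressions(correct_large, correct_mini):
    """
    correct_large, correct_mini: (num_seeds, num_instances) boolean arrays where
    entry [s, i] says whether the model finetuned with seed s got instance i right.
    Returns the fraction of instances on which the larger model's per-instance
    accuracy over seeds is lower than the smaller model's.
    """
    acc_large = correct_large.mean(axis=0)  # per-instance accuracy over seeds
    acc_mini = correct_mini.mean(axis=0)
    return float((acc_large < acc_mini).mean())

# Toy data: the larger model is better on average but still loses some instances.
rng = np.random.default_rng(0)
large = rng.random((5, 1000)) < 0.92
mini = rng.random((5, 1000)) < 0.85
print(instance_level_regressions(large, mini))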

