1 code implementation • 2 Nov 2022 • Yogesh Balaji, Seungjun Nah, Xun Huang, Arash Vahdat, Jiaming Song, Qinsheng Zhang, Karsten Kreis, Miika Aittala, Timo Aila, Samuli Laine, Bryan Catanzaro, Tero Karras, Ming-Yu Liu
In contrast to existing works, we propose to train an ensemble of text-to-image diffusion models, each specialized for a different stage of the synthesis process.
Ranked #9 on Text-to-Image Generation on COCO
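The staged-ensemble idea admits a short sketch. The following is a minimal illustration, not the eDiff-I implementation; the class names, toy expert, and interval boundaries are all assumptions. Each expert denoiser is routed the steps that fall in its interval of the noise schedule.

```python
# Illustrative sketch of an expert-ensemble denoiser: each expert covers one
# interval of the diffusion noise schedule. Names and boundaries are
# assumptions, not the eDiff-I code.
import torch
import torch.nn as nn

class ToyExpert(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Linear(dim, dim)

    def forward(self, x_t, t, text_emb):
        return self.net(x_t + text_emb)

class ExpertEnsembleDenoiser(nn.Module):
    def __init__(self, experts, boundaries):
        super().__init__()
        self.experts = nn.ModuleList(experts)
        self.boundaries = boundaries  # e.g. [0.7, 0.3]: t > 0.7 -> expert 0

    def forward(self, x_t, t, text_emb):
        # Route this step to the expert specialized for its noise level.
        idx = sum(t <= b for b in self.boundaries)
        return self.experts[idx](x_t, t, text_emb)

denoiser = ExpertEnsembleDenoiser([ToyExpert(16) for _ in range(3)], [0.7, 0.3])
out = denoiser(torch.randn(4, 16), 0.9, torch.randn(4, 16))  # expert 0 handles t=0.9
```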
no code implementations • CVPR 2022 • Mazda Moayeri, Phillip Pope, Yogesh Balaji, Soheil Feizi
While datasets with single-label supervision have propelled rapid advances in image classification, additional annotations are necessary in order to quantitatively assess how models make predictions.
no code implementations • 29 Sep 2021 • Neha Mukund Kalibhat, Yogesh Balaji, C. Bayan Bruss, Soheil Feizi
Training these methods on a combination of several domains often degrades the quality of the learned representations compared to models trained on a single domain.
no code implementations • 12 Apr 2021 • Yogesh Balaji, Mohammadmahdi Sajedi, Neha Mukund Kalibhat, Mucong Ding, Dominik Stöger, Mahdi Soltanolkotabi, Soheil Feizi
We also empirically study the role of model overparameterization in GANs using several large-scale experiments on CIFAR-10 and Celeb-A datasets.
no code implementations • 1 Jan 2021 • Bingyuan Liu, Yogesh Balaji, Lingzhou Xue, Martin Renqiang Min
Attention mechanisms have advanced state-of-the-art deep learning models in many machine learning tasks.
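For context, the building block such works analyze is scaled dot-product attention; a minimal sketch follows (the standard formulation, not this paper's specific contribution).

```python
# Minimal scaled dot-product attention (Vaswani et al.), shown only as the
# building block these papers study.
import torch
import torch.nn.functional as F

def scaled_dot_product_attention(q, k, v):
    # q, k, v: (batch, seq_len, d_model)
    scores = q @ k.transpose(-2, -1) / q.size(-1) ** 0.5
    weights = F.softmax(scores, dim=-1)  # each query attends over all keys
    return weights @ v, weights

q = k = v = torch.randn(2, 5, 8)
out, attn = scaled_dot_product_attention(q, k, v)  # out: (2, 5, 8)
```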
no code implementations • ICLR 2021 • Yogesh Balaji, Mohammadmahdi Sajedi, Neha Mukund Kalibhat, Mucong Ding, Dominik Stöger, Mahdi Soltanolkotabi, Soheil Feizi
In this work, we present a comprehensive analysis of the importance of model over-parameterization in GANs both theoretically and empirically.
2 code implementations • NeurIPS 2020 • Yogesh Balaji, Rama Chellappa, Soheil Feizi
To remedy the sensitivity of OT to outliers, robust formulations of OT with unbalanced marginal constraints have previously been proposed.
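To make "unbalanced marginal constraints" concrete, here is a minimal sketch of entropic unbalanced OT with KL-relaxed marginals, solved by Sinkhorn-style scaling iterations (in the style of Chizat et al.). It illustrates the prior formulations the sentence refers to, not this paper's robust OT; all parameter values are illustrative.

```python
# Sketch of unbalanced entropic OT: the marginal constraints are relaxed via a
# KL penalty (weight rho), so outlier mass need not be fully transported.
# Illustrative of prior unbalanced formulations, not this paper's robust OT.
import numpy as np

def unbalanced_sinkhorn(a, b, C, eps=0.05, rho=1.0, iters=500):
    K = np.exp(-C / eps)
    u, v = np.ones_like(a), np.ones_like(b)
    exponent = rho / (rho + eps)  # exponent < 1 softens the marginal constraints
    for _ in range(iters):
        u = (a / (K @ v)) ** exponent
        v = (b / (K.T @ u)) ** exponent
    return u[:, None] * K * v[None, :]  # (relaxed) transport plan

x, y = np.linspace(0, 1, 8), np.linspace(0, 1, 8)
C = (x[:, None] - y[None, :]) ** 2
plan = unbalanced_sinkhorn(np.ones(8) / 8, np.ones(8) / 8, C)
```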
no code implementations • 6 Oct 2020 • Yogesh Balaji, Mehrdad Farajtabar, Dong Yin, Alex Mott, Ang Li
However, performance degrades for experience replay (ER) when the memory buffer is small.
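A minimal sketch of the ER memory in question, using standard reservoir sampling (class and method names are illustrative); its capacity is the "small memory" the finding concerns.

```python
# Minimal reservoir-sampling replay buffer for experience replay (ER).
import random

class ReplayBuffer:
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = []
        self.seen = 0  # total examples observed so far

    def add(self, example):
        # Reservoir sampling keeps each observed example with equal probability.
        if len(self.data) < self.capacity:
            self.data.append(example)
        else:
            j = random.randint(0, self.seen)  # uniform over all examples seen
            if j < self.capacity:
                self.data[j] = example
        self.seen += 1

    def sample(self, batch_size):
        return random.sample(self.data, min(batch_size, len(self.data)))
```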
1 code implementation • 5 Oct 2020 • Neha Mukund Kalibhat, Yogesh Balaji, Soheil Feizi
In this paper, we confirm the existence of winning tickets in deep generative models such as GANs and VAEs.
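Operationally, a "winning ticket" is found by magnitude pruning with rewinding to initialization; below is a sketch of one such pruning round (standard lottery-ticket recipe, function names illustrative).

```python
# One round of magnitude pruning with rewinding: drop the smallest weights,
# then reset the survivors to their initial values.
import torch

def prune_and_rewind(weights, init_weights, prune_frac=0.2):
    """weights / init_weights: dicts of name -> tensor (trained vs. at init)."""
    masks = {}
    for name, w in weights.items():
        k = int(prune_frac * w.numel())
        cutoff = w.abs().flatten().kthvalue(k).values if k > 0 else -1.0
        masks[name] = (w.abs() > cutoff).float()
        # the winning-ticket candidate: pruned topology + original initialization
        weights[name] = init_weights[name] * masks[name]
    return weights, masks

trained = {"fc": torch.randn(64, 64)}
initial = {"fc": torch.randn(64, 64)}
ticket, masks = prune_and_rewind(trained, initial)
```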
1 code implementation • ECCV 2020 • Prithvijit Chattopadhyay, Yogesh Balaji, Judy Hoffman
For domain generalization, the goal is to learn from a set of source domains to produce a single model that will best generalize to an unseen target domain.
Ranked #17 on Domain Generalization on DomainNet
no code implementations • ECCV 2020 • Luyu Yang, Yogesh Balaji, Ser-Nam Lim, Abhinav Shrivastava
In this paper, we propose an adversarial agent that learns a dynamic curriculum for source samples, called the Curriculum Manager for Source Selection (CMSS).
Multi-Source Unsupervised Domain Adaptation • Unsupervised Domain Adaptation
1 code implementation • 24 Mar 2020 • Gowthami Somepalli, Yexin Wu, Yogesh Balaji, Bhanukiran Vinzamuri, Soheil Feizi
Detecting out-of-distribution (OOD) samples is of paramount importance in many machine learning applications.
Out-of-Distribution (OOD) Detection • Representation Learning
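For orientation, the simplest widely used OOD score is maximum softmax probability (Hendrycks & Gimpel); the sketch below shows that baseline for context only, not the detector proposed in the paper.

```python
# Maximum-softmax-probability (MSP) OOD baseline (Hendrycks & Gimpel, 2017).
import torch
import torch.nn.functional as F

def msp_ood_scores(model, x):
    """Higher score => less confident => more likely out-of-distribution."""
    with torch.no_grad():
        probs = F.softmax(model(x), dim=-1)
    return 1.0 - probs.max(dim=-1).values

# usage: flag inputs whose score exceeds a threshold tuned on validation data
```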
1 code implementation • 23 Nov 2019 • Wei-An Lin, Yogesh Balaji, Pouya Samangouei, Rama Chellappa
Additionally, we show how InvGAN can be used to implement reparameterization white-box attacks on projection-based defense mechanisms.
no code implementations • 20 Nov 2019 • Phillip Pope, Yogesh Balaji, Soheil Feizi
Finally, using a hybrid adversarial training procedure, we significantly boost the robustness of these generative models.
1 code implementation • 17 Oct 2019 • Yogesh Balaji, Tom Goldstein, Judy Hoffman
Adversarial training is by far the most successful strategy for improving robustness of neural networks to adversarial attacks.
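In its standard form, this strategy is PGD-based adversarial training (Madry et al.); a minimal sketch follows, with illustrative hyperparameters rather than the paper's settings.

```python
# Standard PGD adversarial training loop (Madry et al.). Epsilon and step
# sizes are illustrative defaults, not the settings from the paper above.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()              # ascend the loss
        x_adv = torch.max(torch.min(x_adv, x + eps), x - eps)     # project to eps-ball
        x_adv = x_adv.clamp(0, 1)
    return x_adv

def adv_train_step(model, optimizer, x, y):
    x_adv = pgd_attack(model, x, y)   # train on worst-case perturbed inputs
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```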
no code implementations • ICCV 2019 • Yogesh Balaji, Rama Chellappa, Soheil Feizi
Using the proposed normalized Wasserstein measure leads to significant performance gains for mixture distributions with imbalanced mixture proportions compared to the vanilla Wasserstein distance.
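A toy 1-D example makes the claim concrete: two mixtures sharing the same components but with flipped proportions have a large vanilla W1, which nearly vanishes once the proportions are matched. Re-mixing with matched proportions below stands in for the paper's joint optimization over components and proportions; all numbers are illustrative.

```python
# Toy 1-D illustration: vanilla W1 between two mixtures is dominated by their
# mismatched proportions; re-mixing with matched proportions (a stand-in for
# the normalized-Wasserstein optimization) nearly removes the gap.
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)
comp_p = [rng.normal(0, 1, 5000), rng.normal(6, 1, 5000)]
comp_q = [rng.normal(0, 1, 5000), rng.normal(6, 1, 5000)]  # same two modes

def mixture(components, proportions, n=5000):
    counts = (np.asarray(proportions) * n).astype(int)
    return np.concatenate([c[:k] for c, k in zip(components, counts)])

P = mixture(comp_p, [0.9, 0.1])
Q = mixture(comp_q, [0.1, 0.9])
print(wasserstein_distance(P, Q))                            # large: ~4.8
print(wasserstein_distance(P, mixture(comp_q, [0.9, 0.1])))  # near 0
```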
no code implementations • 25 Sep 2019 • Bingyuan Liu, Yogesh Balaji, Lingzhou Xue, Martin Renqiang Min
Attention mechanisms have advanced the state of the art in several machine learning tasks.
1 code implementation • 1 Feb 2019 • Yogesh Balaji, Rama Chellappa, Soheil Feizi
Using the proposed normalized Wasserstein measure leads to significant performance gains for mixture distributions with imbalanced mixture proportions compared to the vanilla Wasserstein distance.
no code implementations • NeurIPS 2018 • Yogesh Balaji, Swami Sankaranarayanan, Rama Chellappa
Training models that generalize to new domains at test time is a problem of fundamental importance in machine learning.
Ranked #44 on Domain Generalization on PACS
1 code implementation • ICLR 2019 • Yogesh Balaji, Hamed Hassani, Rama Chellappa, Soheil Feizi
Building on the success of deep learning, two modern approaches to learning a probability model from data are Generative Adversarial Networks (GANs) and Variational AutoEncoders (VAEs).
no code implementations • CVPR 2018 • Swami Sankaranarayanan, Yogesh Balaji, Arpit Jain, Ser Nam Lim, Rama Chellappa
In this work, we focus on adapting the representations learned by segmentation networks across synthetic and real domains.
no code implementations • CVPR 2017 • Vijay Rengarajan, Yogesh Balaji, A. N. Rajagopalan
Our single-image correction method fares well against video-based methods even when operating frame by frame, and outperforms scene-specific correction schemes even in challenging situations.
1 code implementation • CVPR 2018 • Swami Sankaranarayanan, Yogesh Balaji, Carlos D. Castillo, Rama Chellappa
Domain adaptation is an actively researched problem in computer vision.
Ranked #20 on Domain Adaptation on Office-31