Search Results for author: Kyle Otstot

Found 3 papers, 0 papers with code

Addressing GAN Training Instabilities via Tunable Classification Losses

no code implementations • 27 Oct 2023 • Monica Welfert, Gowtham R. Kurri, Kyle Otstot, Lalitha Sankar

Generalizing this dual-objective formulation using class probability estimation (CPE) losses, we define an appropriate estimation error and obtain upper bounds on it.

Classification
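
The tunable classification losses referenced in the abstract above are CPE losses with an adjustable parameter; a commonly cited example in this line of work is the α-loss. The following is a minimal NumPy sketch of such a tunable loss, given as an illustration only: the function name and the exact parameterization are assumptions, not the paper's definition.

```python
# Minimal sketch of a tunable alpha-style CPE loss (illustrative, not the
# paper's exact definition). Evaluated at the probability the classifier
# assigns to the true class.
import numpy as np

def alpha_loss(p_true: np.ndarray, alpha: float) -> np.ndarray:
    """Tunable loss on the true-class probability.

    Approaches the log-loss as alpha -> 1 and a bounded, soft 0-1-style
    loss as alpha grows large.
    """
    if np.isclose(alpha, 1.0):
        return -np.log(p_true)  # log-loss limit
    return (alpha / (alpha - 1.0)) * (1.0 - p_true ** ((alpha - 1.0) / alpha))

# Example: a confident vs. an uncertain prediction of the true class
print(alpha_loss(np.array([0.9, 0.5]), alpha=2.0))
```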

$(\alpha_D,\alpha_G)$-GANs: Addressing GAN Training Instabilities via Dual Objectives

no code implementations • 28 Feb 2023 • Monica Welfert, Kyle Otstot, Gowtham R. Kurri, Lalitha Sankar

In an effort to address the training instabilities of GANs, we introduce a class of dual-objective GANs with different value functions (objectives) for the generator (G) and discriminator (D).
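
To make the dual-objective idea in the abstract above concrete, here is a minimal PyTorch-style sketch of alternating updates in which the discriminator and generator optimize different value functions. The helpers `d_value_fn` and `g_value_fn` are hypothetical placeholders (e.g., tunable losses with separate parameters for D and G), not the paper's exact objectives.

```python
# Sketch of one dual-objective GAN update step. `d_value_fn` and
# `g_value_fn` are placeholder callables; D and G need not share a value
# function.
import torch

def gan_step(G, D, d_value_fn, g_value_fn, opt_D, opt_G, real, latent_dim=64):
    batch = real.size(0)

    # Discriminator step: maximize its value function (minimize the negative)
    z = torch.randn(batch, latent_dim)
    fake = G(z).detach()
    opt_D.zero_grad()
    d_loss = -d_value_fn(D(real), D(fake))
    d_loss.backward()
    opt_D.step()

    # Generator step: minimize a (possibly different) value function
    z = torch.randn(batch, latent_dim)
    opt_G.zero_grad()
    g_loss = g_value_fn(D(G(z)))
    g_loss.backward()
    opt_G.step()
    return d_loss.item(), g_loss.item()
```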

AugLoss: A Robust Augmentation-based Fine Tuning Methodology

no code implementations • 5 Jun 2022 • Kyle Otstot, Andrew Yang, John Kevin Cava, Lalitha Sankar

As a step towards addressing both problems simultaneously, we introduce AugLoss, a simple but effective methodology that achieves robustness against both train-time noisy labeling and test-time feature distribution shifts by unifying data augmentation and robust loss functions.

Data Augmentation
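
As a rough illustration of the recipe the abstract above describes (unifying data augmentation with a robust loss during fine-tuning), the sketch below pairs an augmentation pipeline with a noise-robust loss inside a training loop. `augment` and `robust_loss` are placeholder callables, not AugLoss's specific components.

```python
# Sketch of an augmentation-plus-robust-loss fine-tuning epoch. `augment`
# and `robust_loss` are hypothetical placeholders standing in for the
# paper's chosen augmentation pipeline and robust loss.
import torch

def finetune_epoch(model, loader, augment, robust_loss, optimizer):
    model.train()
    for x, y in loader:       # y may contain noisy labels at train time
        x_aug = augment(x)    # augmentation targets test-time feature shift
        optimizer.zero_grad()
        loss = robust_loss(model(x_aug), y)  # robust loss targets label noise
        loss.backward()
        optimizer.step()
```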
