no code implementations • 17 Feb 2023 • Tyler Sypherd, Nathan Stromberg, Richard Nock, Visar Berisha, Lalitha Sankar
There is a growing need for models that are interpretable and have reduced energy and computational cost (e.g., in health care analytics and federated learning).
no code implementations • 12 May 2022 • Gowtham R. Kurri, Monica Welfert, Tyler Sypherd, Lalitha Sankar
We prove a two-way correspondence between the min-max optimization of GANs built from general class probability estimation (CPE) loss functions and the minimization of associated $f$-divergences.
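For orientation, the best-known instance of such a correspondence is Goodfellow et al.'s result for the vanilla (log-loss) GAN: at the optimal discriminator, the value function collapses to a Jensen-Shannon divergence between the data distribution $P_r$ and the generator distribution $P_{G_\theta}$ (notation mine, not from the abstract):

$$\sup_{D} V(G, D) \;=\; 2\,\mathrm{JSD}\!\left(P_r \,\middle\|\, P_{G_\theta}\right) - \log 4.$$

The two-way correspondence generalizes this shape: each CPE loss induces its own $f$-divergence, and conversely.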
no code implementations • 18 Jun 2021 • Tyler Sypherd, Richard Nock, Lalitha Sankar
Hence, optimizing a proper loss function on twisted data can perilously lead the learning algorithm towards the twisted posterior rather than the desired clean posterior.
no code implementations • 9 Jun 2021 • Gowtham R. Kurri, Tyler Sypherd, Lalitha Sankar
We introduce a tunable GAN, called $\alpha$-GAN, parameterized by $\alpha \in (0,\infty]$, which interpolates between various $f$-GANs and Integral Probability Metric (IPM) based GANs (under a constrained discriminator set).
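As a rough sketch only (not the paper's exact formulation): one natural way to build such a tunable value function is from the $\alpha$-loss defined further down this list. The function below, including its name and the $-2$ constant that makes the $\alpha \to 1$ limit match the vanilla GAN, is my assumption:

```python
import numpy as np

def alpha_gan_value(d_real, d_fake, alpha):
    """Assumed sketch of an alpha-tunable GAN value function (hypothetical
    name and constants; built from alpha-loss, not taken from the paper).

    d_real: discriminator outputs D(x) in (0, 1) on real samples
    d_fake: discriminator outputs D(G(z)) in (0, 1) on generated samples
    """
    d_real = np.asarray(d_real, dtype=float)
    d_fake = np.asarray(d_fake, dtype=float)
    if alpha == 1.0:
        # continuous limit: the vanilla GAN objective
        # E[log D(x)] + E[log(1 - D(G(z)))]
        return np.mean(np.log(d_real)) + np.mean(np.log1p(-d_fake))
    c = alpha / (alpha - 1.0)
    e = (alpha - 1.0) / alpha
    # the "- 2.0" is chosen so the alpha -> 1 limit recovers the log form
    return c * (np.mean(d_real ** e) + np.mean((1.0 - d_fake) ** e) - 2.0)
```

The discriminator would ascend this value and the generator descend it; sweeping $\alpha$ then moves the objective between the $f$-GAN-like and IPM-like regimes the abstract describes.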
no code implementations • 22 Jun 2020 • Tyler Sypherd, Mario Diaz, Lalitha Sankar, Gautam Dasarathy
We analyze the optimization landscape of a recently introduced tunable class of loss functions called $\alpha$-loss, $\alpha \in (0,\infty]$, in the logistic model.
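To make "optimization landscape in the logistic model" concrete, here is a small self-contained sketch (synthetic data and helper names are mine, not from the paper) that evaluates the empirical $\alpha$-risk of a one-parameter logistic model over a grid, using the $\alpha$-loss defined in the next entry, so the landscape's shape can be inspected for different $\alpha$:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# synthetic 1-D logistic data (illustrative setup only)
rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
theta_true = 2.0
y = np.where(rng.random(n) < sigmoid(theta_true * x), 1.0, -1.0)

def empirical_alpha_risk(theta, alpha):
    p = sigmoid(y * theta * x)  # model probability of the observed label
    if alpha == 1.0:            # log-loss limit of alpha-loss
        losses = -np.log(p)
    else:
        losses = (alpha / (alpha - 1.0)) * (1.0 - p ** ((alpha - 1.0) / alpha))
    return losses.mean()

thetas = np.linspace(-6.0, 6.0, 121)
for alpha in (0.5, 1.0, 4.0):
    risks = [empirical_alpha_risk(t, alpha) for t in thetas]
    print(f"alpha={alpha}: empirical minimizer near theta = "
          f"{thetas[int(np.argmin(risks))]:.2f}")
```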
1 code implementation • 5 Jun 2019 • Tyler Sypherd, Mario Diaz, John Kevin Cava, Gautam Dasarathy, Peter Kairouz, Lalitha Sankar
We introduce a tunable loss function called $\alpha$-loss, parameterized by $\alpha \in (0,\infty]$, which interpolates between the exponential loss ($\alpha = 1/2$), the log-loss ($\alpha = 1$), and the 0-1 loss ($\alpha = \infty$), for the machine learning setting of classification.
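The interpolation is compact enough to transcribe; the sketch below is my rendering of the formula implied by the three named endpoints, not code released with the paper. It evaluates $\alpha$-loss on the probability assigned to the true label:

```python
import numpy as np

def alpha_loss(p_true, alpha):
    """alpha-loss on p_true, the predicted probability of the correct label.

    alpha = 1/2   -> exponential loss:  1/p - 1
    alpha = 1     -> log-loss:          -log p
    alpha -> inf  -> soft 0-1 loss:     1 - p
    """
    p = np.asarray(p_true, dtype=float)
    if alpha == 1.0:            # continuous limit at alpha = 1
        return -np.log(p)
    if np.isinf(alpha):         # limit as alpha -> infinity
        return 1.0 - p
    return (alpha / (alpha - 1.0)) * (1.0 - p ** ((alpha - 1.0) / alpha))

# sanity checks at the three named endpoints
p = 0.8
assert np.isclose(alpha_loss(p, 0.5), 1.0 / p - 1.0)   # exponential loss
assert np.isclose(alpha_loss(p, 1.0), -np.log(p))      # log-loss
assert np.isclose(alpha_loss(p, np.inf), 1.0 - p)      # 0-1 endpoint
```

Under a logistic link $\hat{p} = \sigma(y f(x))$, the $\alpha = 1/2$ value $1/\hat{p} - 1$ equals $e^{-y f(x)}$, the usual exponential loss.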
no code implementations • 12 Feb 2019 • Tyler Sypherd, Mario Diaz, Lalitha Sankar, Peter Kairouz
We present $\alpha$-loss, $\alpha \in [1,\infty]$, a tunable loss function for binary classification that bridges log-loss ($\alpha=1$) and $0$-$1$ loss ($\alpha = \infty$).
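In display form (my rendering, consistent with the parameterization used in the later papers above):

$$\ell_\alpha(y, \hat{P}) \;=\; \frac{\alpha}{\alpha - 1}\left(1 - \hat{P}(y)^{1 - \frac{1}{\alpha}}\right), \qquad \ell_1(y, \hat{P}) = -\log \hat{P}(y), \qquad \ell_\infty(y, \hat{P}) = 1 - \hat{P}(y),$$

where $\hat{P}(y)$ is the probability assigned to the true label, the $\alpha = 1$ case is the continuous limit of the first expression, and for hard decisions $\hat{P}(y) \in \{0, 1\}$ the $\alpha = \infty$ endpoint is exactly the $0$-$1$ loss.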