Search Results for author: Tyler Sypherd

Found 7 papers, 1 paper with code

Smoothly Giving up: Robustness for Simple Models

no code implementations • 17 Feb 2023 • Tyler Sypherd, Nathan Stromberg, Richard Nock, Visar Berisha, Lalitha Sankar

There is a growing need for models that are interpretable and have reduced energy and computational cost (e.g., in health care analytics and federated learning).

Tasks: Federated Learning, regression

$\alpha$-GAN: Convergence and Estimation Guarantees

no code implementations • 12 May 2022 • Gowtham R. Kurri, Monica Welfert, Tyler Sypherd, Lalitha Sankar

We prove a two-way correspondence between the min-max optimization of general class probability estimation (CPE) loss function GANs and the minimization of associated $f$-divergences.
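As a rough sketch of the shape of this correspondence (the notation and the constant below are assumed for illustration, not quoted from the paper): for a CPE loss $\ell$, the GAN value function pairs a real-data term with a generated-data term,

$$V(\theta, \omega) = \mathbb{E}_{X \sim P_r}\!\left[-\ell(1, D_\omega(X))\right] + \mathbb{E}_{X \sim P_{G_\theta}}\!\left[-\ell(0, D_\omega(X))\right],$$

and the inner supremum over discriminators collapses to an $f$-divergence, $\sup_\omega V(\theta, \omega) = D_f(P_r \,\|\, P_{G_\theta}) + C_\ell$, where $f$ and $C_\ell$ are determined by $\ell$. The generator's problem $\inf_\theta \sup_\omega V$ is then exactly $f$-divergence minimization; the vanilla GAN with log-loss recovers twice the Jensen-Shannon divergence with $C_\ell = -\log 4$.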

Being Properly Improper

no code implementations • 18 Jun 2021 • Tyler Sypherd, Richard Nock, Lalitha Sankar

Hence, optimizing a proper loss function on twisted data could perilously lead the learning algorithm towards the twisted posterior, rather than to the desired clean posterior.
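For context, the mechanism here is a standard one: a proper loss $\ell$ is by definition minimized in expectation by the true posterior, $\eta(x) \in \arg\min_v \mathbb{E}_{Y \sim \eta(x)}[\ell(Y, v)]$. If the labels are twisted, say flipped symmetrically with rate $\rho < 1/2$ (an illustrative noise model, not the paper's general setting), the data presents the twisted posterior

$$\tilde{\eta}(x) = (1 - 2\rho)\,\eta(x) + \rho,$$

and a proper loss, doing exactly its job, estimates $\tilde{\eta}$ rather than the clean $\eta$.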

Realizing GANs via a Tunable Loss Function

no code implementations • 9 Jun 2021 • Gowtham R. Kurri, Tyler Sypherd, Lalitha Sankar

We introduce a tunable GAN, called $\alpha$-GAN, parameterized by $\alpha \in (0,\infty]$, which interpolates between various $f$-GANs and Integral Probability Metric (IPM) based GANs (under a constrained discriminator set).
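A minimal NumPy sketch of this value function, assuming the CPE form of $\alpha$-loss from the authors' related papers; the function names and toy numbers are illustrative, not from the paper:

import numpy as np

def alpha_loss(y, p, alpha):
    # alpha-loss of predicting P(Y=1) = p against a binary label y in {0, 1};
    # alpha -> 1 recovers log-loss.
    p_true = np.where(y == 1, p, 1.0 - p)  # probability assigned to the true label
    if np.isclose(alpha, 1.0):
        return -np.log(p_true)
    return (alpha / (alpha - 1.0)) * (1.0 - p_true ** ((alpha - 1.0) / alpha))

def alpha_gan_value(d_real, d_fake, alpha):
    # Monte Carlo estimate of the alpha-GAN value function V_alpha(G, D):
    # the discriminator ascends it, the generator descends it.
    return (-alpha_loss(np.ones_like(d_real), d_real, alpha)).mean() \
         + (-alpha_loss(np.zeros_like(d_fake), d_fake, alpha)).mean()

# Sanity check: alpha = 1 recovers the vanilla (log-loss) GAN value function.
d_real, d_fake = np.array([0.9, 0.8]), np.array([0.2, 0.1])
vanilla = np.mean(np.log(d_real)) + np.mean(np.log(1.0 - d_fake))
assert np.isclose(alpha_gan_value(d_real, d_fake, 1.0), vanilla)

Tuning $\alpha$ changes how sharply the discriminator's loss penalizes confident mistakes, which is the knob the interpolation exposes.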

On the alpha-loss Landscape in the Logistic Model

no code implementations • 22 Jun 2020 • Tyler Sypherd, Mario Diaz, Lalitha Sankar, Gautam Dasarathy

We analyze the optimization landscape of a recently introduced tunable class of loss functions called $\alpha$-loss, $\alpha \in (0,\infty]$, in the logistic model.
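A minimal sketch of the object under study, assuming the margin-based form of $\alpha$-loss composed with a logistic model; the synthetic data and the 1-D slice of the landscape are illustrative assumptions:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def margin_alpha_loss(z, alpha):
    # Margin form of alpha-loss, z = y * <theta, x> with y in {-1, +1};
    # alpha = 1 gives logistic loss, alpha -> infinity gives 1 - sigmoid(z).
    if np.isclose(alpha, 1.0):
        return np.log1p(np.exp(-z))
    return (alpha / (alpha - 1.0)) * (1.0 - sigmoid(z) ** ((alpha - 1.0) / alpha))

def empirical_alpha_risk(theta, X, y, alpha):
    # Empirical alpha-risk of the logistic model x -> sigmoid(<theta, x>).
    return margin_alpha_loss(y * (X @ theta), alpha).mean()

# Trace a 1-D slice of the risk landscape for a few alphas.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = np.where(X[:, 0] + 0.3 * rng.normal(size=200) > 0, 1.0, -1.0)
for alpha in (0.5, 1.0, 4.0):
    slice_risks = [empirical_alpha_risk(np.array([t, 0.0]), X, y, alpha)
                   for t in np.linspace(-3.0, 3.0, 7)]
    print(alpha, np.round(slice_risks, 3))

Larger $\alpha$ saturates the loss on confidently misclassified points, which is what reshapes this surface as $\alpha$ moves away from $1$.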

A Tunable Loss Function for Robust Classification: Calibration, Landscape, and Generalization

1 code implementation • 5 Jun 2019 • Tyler Sypherd, Mario Diaz, John Kevin Cava, Gautam Dasarathy, Peter Kairouz, Lalitha Sankar

We introduce a tunable loss function called $\alpha$-loss, parameterized by $\alpha \in (0,\infty]$, which interpolates between the exponential loss ($\alpha = 1/2$), the log-loss ($\alpha = 1$), and the 0-1 loss ($\alpha = \infty$), for the machine learning setting of classification.
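Concretely, the interpolation can be written in margin form (a sketch consistent with the stated endpoints; $z$ is the margin and $\sigma$ the logistic sigmoid):

$$\tilde{\ell}_\alpha(z) = \frac{\alpha}{\alpha - 1}\left[1 - \sigma(z)^{\frac{\alpha - 1}{\alpha}}\right], \qquad \sigma(z) = \frac{1}{1 + e^{-z}},$$

which evaluates at the three highlighted points to

$$\tilde{\ell}_{1/2}(z) = e^{-z}, \qquad \tilde{\ell}_{1}(z) = \log\!\left(1 + e^{-z}\right), \qquad \tilde{\ell}_{\infty}(z) = 1 - \sigma(z),$$

the exponential loss, the log-loss, and a sigmoid-smoothed $0$-$1$ loss, respectively (the $\alpha = 1$ case is the limit of the first display).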

Tasks: Classification, General Classification, +1

A Tunable Loss Function for Binary Classification

no code implementations • 12 Feb 2019 • Tyler Sypherd, Mario Diaz, Lalitha Sankar, Peter Kairouz

We present $\alpha$-loss, $\alpha \in [1,\infty]$, a tunable loss function for binary classification that bridges log-loss ($\alpha=1$) and $0$-$1$ loss ($\alpha = \infty$).

Tasks: Binary Classification, Classification, +2
