Assuming the data lies on a manifold, we investigate two new types of adversarial risk: the normal adversarial risk, due to perturbations along the normal direction, and the in-manifold adversarial risk, due to perturbations within the manifold.
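The split into normal and in-manifold risk rests on decomposing a perturbation into its tangent and normal components. Below is a minimal sketch of that decomposition for a toy manifold, the unit circle in R^2, where the normal direction at a point is the radial direction; the function name and the choice of manifold are illustrative assumptions, not from the original work.

```python
import numpy as np

def decompose_perturbation(x, delta):
    """Split a perturbation delta at a point x on the unit circle S^1
    into its in-manifold (tangent) and normal components.

    For the circle, the unit normal at x is the radial direction x/|x|,
    and the tangent space is its orthogonal complement.
    """
    n = x / np.linalg.norm(x)           # unit normal (radial direction)
    normal_part = np.dot(delta, n) * n  # component along the normal
    tangent_part = delta - normal_part  # remainder lies in the tangent space
    return tangent_part, normal_part

x = np.array([1.0, 0.0])      # point on the unit circle
delta = np.array([0.3, 0.4])  # an adversarial perturbation
tangent, normal = decompose_perturbation(x, delta)
# tangent + normal reconstructs delta exactly
```

The in-manifold risk then measures sensitivity to the `tangent` component, while the normal risk measures sensitivity to the `normal` component.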
Learning and decision-making in domains with naturally high noise-to-signal ratios, such as finance or public health, is challenging yet extremely important.
Label noise is frequently observed in real-world large-scale datasets.
This raises the question: is the known stability analysis tight for smooth loss functions, and if not, for what kinds of loss functions and data distributions can it be improved?
A federated GAN jointly trains a centralized generator and multiple private discriminators hosted at different sites.
We provide empirical evidence that this condition holds for several loss functions, and provide theoretical evidence that the known tight SGD stability bounds for convex and non-convex loss functions can be circumvented by HC loss functions, thus partially explaining the generalization of deep neural networks.
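Uniform stability of SGD can be probed empirically by training on two neighbouring datasets that differ in a single example and measuring how far the resulting parameters diverge. The sketch below does this for one-dimensional least squares; the datasets, learning rate, and the parameter-distance proxy are illustrative assumptions, not the loss functions or bounds studied in the work above.

```python
import numpy as np

def sgd(data, lr=0.01, epochs=50, w0=0.0, seed=0):
    """Plain SGD on 1-D least squares: loss(w; x, y) = 0.5 * (w*x - y)**2."""
    rng = np.random.default_rng(seed)
    w = w0
    n = len(data)
    for _ in range(epochs):
        for i in rng.permutation(n):
            x, y = data[i]
            w -= lr * (w * x - y) * x  # gradient of the per-example loss
    return w

# Two neighbouring datasets S and S' that differ in exactly one example.
S1 = [(1.0, 1.0), (2.0, 2.0), (3.0, 3.0), (1.5, 1.4)]
S2 = [(1.0, 1.0), (2.0, 2.0), (3.0, 3.0), (1.5, 2.5)]

# Parameter-space divergence: a crude empirical proxy for uniform stability.
gap = abs(sgd(S1) - sgd(S2))
```

A small `gap` under single-example substitution is the empirical signature of algorithmic stability, which in turn controls the generalization gap.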
As deep learning technologies advance, increasing amounts of data are necessary to build general and robust models for various tasks.
Our proposed method tackles the central challenge of training a GAN in the federated learning setting: how to update the generator with a flow of temporary discriminators?
In this paper, we propose a privacy-preserving and communication-efficient distributed GAN learning framework named Distributed Asynchronized Discriminator GAN (AsynDGAN).
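The communication pattern, a central generator receiving feedback from a rotating set of site-local discriminators while raw data stays on-site, can be sketched with a toy scalar "generator". Everything below (the scalar parameter, the surrogate feedback function, the round-robin schedule) is an illustrative stand-in for the pattern, not the paper's actual architecture or update rule.

```python
import numpy as np

# Each site holds private data that never leaves the site; only a
# gradient-like feedback signal is sent back to the central generator.
site_data = [np.array([1.0, 2.0]), np.array([4.0, 5.0]), np.array([9.0])]

def site_feedback(data, mu):
    """Local discriminator surrogate: pushes the generator's parameter mu
    toward this site's data distribution (here, just its mean)."""
    return float(np.mean(data)) - mu

mu = 0.0   # the "generator": a single scalar it tries to fit
lr = 0.2
for step in range(200):
    # Round-robin over sites: at each step the generator sees one
    # temporary discriminator's feedback, mimicking the asynchronous flow.
    fb = site_feedback(site_data[step % len(site_data)], mu)
    mu += lr * fb
```

After training, `mu` oscillates in a neighbourhood of the sites' means, showing that the central model can aggregate information from all sites without ever accessing their raw data.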