Search Results for author: Yikai Zhang

Found 10 papers, 4 papers with code

A Manifold View of Adversarial Risk

no code implementations • 24 Mar 2022 Wenjia Zhang, Yikai Zhang, Xiaoling Hu, Mayank Goswami, Chao Chen, Dimitris Metaxas

Assuming data lies in a manifold, we investigate two new types of adversarial risk, the normal adversarial risk due to perturbation along normal direction, and the in-manifold adversarial risk due to perturbation within the manifold.
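As a schematic illustration (the notation here is chosen for exposition and is not taken from the paper), the two risks can be written as worst-case misclassification under perturbations restricted to the normal space N_x at x and to the manifold M, respectively:

```latex
\[
R_{\mathrm{nor}}(f) \;=\; \mathbb{E}_{(x,y)}\Big[\max_{\delta \in N_x,\ \|\delta\| \le \epsilon} \mathbf{1}\{f(x+\delta) \ne y\}\Big],
\qquad
R_{\mathrm{in}}(f) \;=\; \mathbb{E}_{(x,y)}\Big[\max_{x' \in \mathcal{M},\ d_{\mathcal{M}}(x,x') \le \epsilon} \mathbf{1}\{f(x') \ne y\}\Big].
\]
```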

Learning to Abstain in the Presence of Uninformative Data

no code implementations • 29 Sep 2021 Yikai Zhang, Songzhu Zheng, Pengxiang Wu, Yuriy Nevmyvaka, Chao Chen

Learning and decision making in domains with naturally high noise-to-signal ratios – such as Finance or Public Health – can be challenging and yet extremely important.

Decision Making · Learning Theory

Stability of SGD: Tightness Analysis and Improved Bounds

no code implementations • 10 Feb 2021 Yikai Zhang, Wenjia Zhang, Sammy Bald, Vamsi Pingali, Chao Chen, Mayank Goswami

This raises the question: is the stability analysis of [18] tight for smooth functions, and if not, for what kind of loss functions and data distributions can the stability analysis be improved?
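For context, the bound being probed for tightness is presumably of the Hardt-Recht-Singer type (the bracketed reference [18] is not resolved in this listing, so this identification is an assumption): for a convex, L-Lipschitz, β-smooth loss and step sizes α_t ≤ 2/β, T steps of SGD on n samples are uniformly stable with

```latex
\[
\epsilon_{\mathrm{stab}} \;\le\; \frac{2L^2}{n} \sum_{t=1}^{T} \alpha_t .
\]
```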

Training Federated GANs with Theoretical Guarantees: A Universal Aggregation Approach

1 code implementation • 9 Feb 2021 Yikai Zhang, Hui Qu, Qi Chang, Huidong Liu, Dimitris Metaxas, Chao Chen

A federated GAN jointly trains a centralized generator and multiple private discriminators hosted at different sites.

Federated Learning
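A minimal sketch of one way such an aggregation could look, assuming the central generator scores its samples against a virtual discriminator built as a data-share-weighted mixture of the local discriminators' odds; the function names and the mixture rule below are illustrative assumptions, not the paper's released implementation:

```python
import numpy as np

def aggregate_feedback(d_outputs, pi):
    """Combine per-site discriminator scores D_k(x) in (0, 1) into one
    virtual score via an odds-weighted mixture (one plausible choice):
    odds(D_virtual) = sum_k pi_k * D_k / (1 - D_k)."""
    d_outputs = np.clip(d_outputs, 1e-6, 1 - 1e-6)  # avoid division by zero
    odds = np.sum(pi[:, None] * d_outputs / (1.0 - d_outputs), axis=0)
    return odds / (1.0 + odds)  # map aggregated odds back into (0, 1)

# Toy usage: 3 private sites scoring a batch of 4 generated samples.
pi = np.array([0.5, 0.3, 0.2])                     # sites weighted by data share
d_outputs = np.random.uniform(0.1, 0.9, size=(3, 4))
print(aggregate_feedback(d_outputs, pi))           # virtual discriminator scores
```

Weighting each site by its share of the total data is intended to make the aggregated score behave like a discriminator trained on the pooled data, while raw samples never leave the sites.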

Revisiting the Stability of Stochastic Gradient Descent: A Tightness Analysis

no code implementations • 1 Jan 2021 Yikai Zhang, Samuel Bald, Wenjia Zhang, Vamsi Pritham Pingali, Chao Chen, Mayank Goswami

We provide empirical evidence that this condition holds for several loss functions, and provide theoretical evidence that the known tight SGD stability bounds for convex and non-convex loss functions can be circumvented by HC loss functions, thus partially explaining the generalization of deep neural networks.

Exponential degradation

Multi-modal AsynDGAN: Learn From Distributed Medical Image Data without Sharing Private Information

no code implementations • 15 Dec 2020 Qi Chang, Zhennan Yan, Lohendran Baskaran, Hui Qu, Yikai Zhang, Tong Zhang, Shaoting Zhang, Dimitris N. Metaxas

As deep learning technologies advance, ever larger amounts of data are needed to build general and robust models for various tasks.

Learn distributed GAN with Temporary Discriminators

1 code implementation • ECCV 2020 Hui Qu, Yikai Zhang, Qi Chang, Zhennan Yan, Chao Chen, Dimitris Metaxas

Our proposed method tackles the challenge of training a GAN in a federated learning manner: how to update the generator with a flow of temporary discriminators?

Federated Learning

Synthetic Learning: Learn From Distributed Asynchronized Discriminator GAN Without Sharing Medical Image Data

1 code implementation • CVPR 2020 Qi Chang, Hui Qu, Yikai Zhang, Mert Sabuncu, Chao Chen, Tong Zhang, Dimitris Metaxas

In this paper, we propose a data privacy-preserving and communication-efficient distributed GAN learning framework named Distributed Asynchronized Discriminator GAN (AsynDGAN).
