no code implementations • 24 Mar 2022 • Wenjia Zhang, Yikai Zhang, Xiaoling Hu, Mayank Goswami, Chao Chen, Dimitris Metaxas
Assuming data lies on a manifold, we investigate two new types of adversarial risk: the normal adversarial risk, due to perturbation along the normal direction, and the in-manifold adversarial risk, due to perturbation within the manifold.
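For concreteness, here is one way the two risks could be formalized (a sketch only; the symbols $N_x$, $\epsilon$, and the 0-1 loss form are my assumptions, not necessarily the paper's exact definitions):

```latex
% Sketch only: notation is illustrative, not quoted from the paper.
% Normal adversarial risk: worst-case 0-1 loss under perturbations in the
% normal space N_x of the manifold M at x.
\mathcal{R}_{\mathrm{nor}}(f) =
  \mathbb{E}_{(x,y)}\!\left[ \max_{\delta \in N_x,\, \|\delta\| \le \epsilon}
  \mathbf{1}\{ f(x+\delta) \neq y \} \right]
% In-manifold adversarial risk: worst case over nearby points on M itself.
\mathcal{R}_{\mathrm{in}}(f) =
  \mathbb{E}_{(x,y)}\!\left[ \max_{x' \in \mathcal{M},\, \|x'-x\| \le \epsilon}
  \mathbf{1}\{ f(x') \neq y \} \right]
```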
no code implementations • 29 Sep 2021 • Yikai Zhang, Songzhu Zheng, Pengxiang Wu, Yuriy Nevmyvaka, Chao Chen
Learning and decision making in domains with naturally high noise-to-signal ratios, such as finance or public health, can be challenging and yet extremely important.
no code implementations • NeurIPS 2021 • Songzhu Zheng, Yikai Zhang, Hubert Wagner, Mayank Goswami, Chao Chen
Deep neural networks are known to have security issues.
1 code implementation • ICLR 2021 • Yikai Zhang, Songzhu Zheng, Pengxiang Wu, Mayank Goswami, Chao Chen
Label noise is frequently observed in real-world large-scale datasets.
Ranked #3 on Learning with noisy labels on ANIMAL
no code implementations • 10 Feb 2021 • Yikai Zhang, Wenjia Zhang, Sammy Bald, Vamsi Pingali, Chao Chen, Mayank Goswami
This raises the question: is the stability analysis of [18] tight for smooth functions, and if not, for what kind of loss functions and data distributions can the stability analysis be improved?
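For context, the stability notion at issue is uniform stability, a standard definition (due to Bousquet and Elisseeff, and applied to SGD by Hardt et al.), not something introduced in this entry: a randomized algorithm $A$ is $\epsilon$-uniformly stable if for any two training sets $S$ and $S'$ differing in a single example,

```latex
% Standard uniform-stability definition: A is eps-uniformly stable if
\sup_{z}\ \mathbb{E}_{A}\!\left[ \ell(A(S); z) - \ell(A(S'); z) \right] \le \epsilon
```

Uniform stability of this form bounds the expected generalization gap by $\epsilon$, which is why tightening the stability analysis directly tightens generalization bounds.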
1 code implementation • 9 Feb 2021 • Yikai Zhang, Hui Qu, Qi Chang, Huidong Liu, Dimitris Metaxas, Chao Chen
A federated GAN jointly trains a centralized generator and multiple private discriminators hosted at different sites.
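As an illustration, here is a minimal single-process sketch (not the authors' implementation) of the pattern described: one shared generator trained against per-site discriminators, where each discriminator only ever touches its own site's data. The toy 1-D Gaussian sites, network sizes, and hyperparameters are all illustrative assumptions.

```python
# Minimal federated-GAN sketch: central generator, per-site discriminators.
import torch
import torch.nn as nn

torch.manual_seed(0)
LATENT, N_SITES, STEPS = 8, 3, 200

# Toy private datasets: each site holds samples from its own 1-D Gaussian.
site_data = [torch.randn(256, 1) * 0.5 + mu for mu in (-2.0, 0.0, 2.0)]

generator = nn.Sequential(nn.Linear(LATENT, 32), nn.ReLU(), nn.Linear(32, 1))
# One private discriminator per site; in a real deployment each would live
# at its site and share only feedback on synthetic samples, never raw data.
discriminators = [nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))
                  for _ in range(N_SITES)]

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opts = [torch.optim.Adam(d.parameters(), lr=1e-3) for d in discriminators]
bce = nn.BCEWithLogitsLoss()

for step in range(STEPS):
    # Each site updates its own discriminator on local real data plus fakes.
    for d, d_opt, real in zip(discriminators, d_opts, site_data):
        fake = generator(torch.randn(64, LATENT)).detach()
        d_loss = (bce(d(real), torch.ones(len(real), 1)) +
                  bce(d(fake), torch.zeros(len(fake), 1)))
        d_opt.zero_grad()
        d_loss.backward()
        d_opt.step()

    # Central server updates the generator using feedback from all sites.
    g_opt.zero_grad()
    for d in discriminators:
        fake = generator(torch.randn(64, LATENT))
        bce(d(fake), torch.ones(len(fake), 1)).backward()  # non-saturating loss
    g_opt.step()

print(generator(torch.randn(5, LATENT)).detach().squeeze())
```

In an actual federated deployment the discriminator updates would run at the sites, with only losses or gradients on synthetic samples traveling back to the server; this sketch collapses that communication into a single process.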
no code implementations • 1 Jan 2021 • Yikai Zhang, Samuel Bald, Wenjia Zhang, Vamsi Pritham Pingali, Chao Chen, Mayank Goswami
We provide empirical evidence that this condition holds for several loss functions, and provide theoretical evidence that the known tight SGD stability bounds for convex and non-convex loss functions can be circumvented by HC loss functions, thus partially explaining the generalization of deep neural networks.
no code implementations • 15 Dec 2020 • Qi Chang, Zhennan Yan, Lohendran Baskaran, Hui Qu, Yikai Zhang, Tong Zhang, Shaoting Zhang, Dimitris N. Metaxas
As deep learning technologies advance, increasing amounts of data are needed to build general and robust models for various tasks.
1 code implementation • ECCV 2020 • Hui Qu, Yikai Zhang, Qi Chang, Zhennan Yan, Chao Chen, Dimitris Metaxas
Our proposed method tackles the challenge of training a GAN in the federated learning setting: how can the generator be updated with a flow of temporary discriminators?
1 code implementation • CVPR 2020 • Qi Chang, Hui Qu, Yikai Zhang, Mert Sabuncu, Chao Chen, Tong Zhang, Dimitris Metaxas
In this paper, we propose a data-privacy-preserving and communication-efficient distributed GAN learning framework named Distributed Asynchronized Discriminator GAN (AsynDGAN).
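A sketch of the kind of minimax objective such a distributed framework optimizes (the weighted-sum form and the symbols $\pi_j$, $p_j$ are my notation, not quoted from the paper): $N$ sites with local data distributions $p_j$ and mixing weights $\pi_j$, a central generator $G$, and per-site discriminators $D_j$:

```latex
% Illustrative distributed GAN objective: a weighted sum of local GAN games.
\min_{G} \max_{D_1, \ldots, D_N}
  \sum_{j=1}^{N} \pi_j \Big(
    \mathbb{E}_{x \sim p_j}\!\big[ \log D_j(x) \big] +
    \mathbb{E}_{z \sim p_z}\!\big[ \log\!\big( 1 - D_j(G(z)) \big) \big]
  \Big)
```

Intuitively, only synthetic samples $G(z)$ and discriminator feedback cross site boundaries, which is where the privacy and communication savings would come from.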