Learning the principal eigenfunctions of an integral operator defined by a kernel and a data distribution is at the core of many machine learning problems.
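For reference, this is the classical eigenfunction problem of the kernel integral operator; in standard, generic notation (not any one paper's), with kernel k and data distribution p:

```latex
% Principal eigenfunctions \phi_j and eigenvalues \mu_j of the integral
% operator induced by kernel k and data distribution p:
\int k(x, x')\,\phi_j(x')\,p(x')\,\mathrm{d}x' = \mu_j\,\phi_j(x),
\qquad
\int \phi_i(x)\,\phi_j(x)\,p(x)\,\mathrm{d}x = \delta_{ij}.
```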
Deep Ensemble (DE) is a flexible, practical, and effective alternative to Bayesian neural networks (BNNs) for uncertainty estimation in deep learning.
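A minimal sketch of how a DE produces predictions and an uncertainty signal (illustrative PyTorch-style code; `models`, the variance-based uncertainty measure, and all names are assumptions, not any specific paper's implementation):

```python
import torch

def ensemble_predict(models, x):
    """Deep Ensemble prediction: average the softmax outputs of
    independently trained networks; the spread across members serves
    as a simple uncertainty signal."""
    with torch.no_grad():
        # Stack per-member class probabilities: [n_members, batch, classes]
        probs = torch.stack([m(x).softmax(dim=-1) for m in models])
    mean = probs.mean(dim=0)          # predictive distribution
    disagreement = probs.var(dim=0)   # crude epistemic uncertainty
    return mean, disagreement
```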
In this paper, we explore the memorization effect in adversarial training (AT) to promote a deeper understanding of model capacity, convergence, generalization, and especially robust overfitting in adversarially trained models.
It is critical yet challenging for deep learning models to properly characterize the uncertainty that is pervasive in real-world environments.
Although deep neural networks (DNNs) have made rapid progress in recent years, they remain vulnerable to adversarial attacks.
In this work, we decouple the training of a network with stochastic architectures (NSA) from NAS and provide the first systematic investigation of it as a stand-alone problem.
Despite their theoretical appeal, Bayesian neural networks (BNNs) lag far behind ordinary NNs in real-world adoption, due to persistent concerns about their scalability in training, their accessibility, and the fidelity of their uncertainty estimates.
Adversarial training (AT) is among the most effective techniques to improve model robustness by augmenting training data with adversarial examples.
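Formally, AT is commonly cast as the min-max problem below (the standard robust-optimization formulation; notation is generic, with perturbation budget ε, and the inner maximization is typically approximated by an attack such as PGD):

```latex
\min_{\theta} \; \mathbb{E}_{(x, y) \sim \mathcal{D}}
\left[ \max_{\|\delta\| \le \epsilon}
\mathcal{L}\big(f_{\theta}(x + \delta),\, y\big) \right]
```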
Bayesian neural networks (BNNs) augment deep networks with uncertainty estimation by performing Bayesian inference on the network weights.
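In generic notation, the Bayesian treatment places a prior p(w) on the weights and predicts by marginalizing the posterior (a textbook formulation, not tied to a specific paper):

```latex
p(w \mid \mathcal{D}) \propto p(\mathcal{D} \mid w)\, p(w),
\qquad
p(y \mid x, \mathcal{D}) = \int p(y \mid x, w)\, p(w \mid \mathcal{D})\, \mathrm{d}w .
```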
Deep learning methods have shown promise in unsupervised domain adaptation, which aims to leverage a labeled source domain to learn a classifier for an unlabeled target domain with a different data distribution.
We present batch virtual adversarial training (BVAT), a novel regularization method for graph convolutional networks (GCNs).
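For context, BVAT builds on virtual adversarial training; the standard VAT consistency loss that such methods adapt to graph data is shown below (generic notation; this is the base objective from Miyato et al., not BVAT's exact batch formulation for GCNs):

```latex
\mathcal{L}_{\mathrm{vat}}(x) =
D_{\mathrm{KL}}\!\big( p(y \mid x; \hat{\theta}) \,\|\, p(y \mid x + r_{\mathrm{adv}}; \theta) \big),
\qquad
r_{\mathrm{adv}} = \arg\max_{\|r\| \le \epsilon}
D_{\mathrm{KL}}\!\big( p(y \mid x; \hat{\theta}) \,\|\, p(y \mid x + r; \theta) \big).
```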
We study the problem of conditional generative modeling based on designated semantics or structures.