no code implementations • 7 Sep 2023 • Jeremiah Birrell, MohammadReza Ebrahimi
We introduce the $\mathrm{ARMOR}_D$ methods, novel approaches to enhancing the adversarial robustness of deep learning models.
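The excerpt does not specify the construction; as loose context only (a schematic distributionally robust formulation, not necessarily the paper's exact $\mathrm{ARMOR}_D$ objective), adversarially robust training is often written as optimization against a worst-case distribution in a divergence neighborhood of the data:

```latex
% Schematic distributionally robust training objective. All symbols here
% are illustrative assumptions: P_n is the empirical data distribution,
% \epsilon the adversary's budget, D a divergence, \ell the loss.
\[
  \min_{\theta} \;
  \sup_{Q \,:\, D(Q, P_n) \le \epsilon} \;
  \mathbb{E}_{(x,y) \sim Q}\big[ \ell(f_\theta(x), y) \big]
\]
```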
1 code implementation • 10 Oct 2022 • Jeremiah Birrell, Yannis Pantazis, Paul Dupuis, Markos A. Katsoulakis, Luc Rey-Bellet
We propose a new family of regularized Rényi divergences parametrized not only by the order $\alpha$ but also by a variational function space.
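For reference (a standard definition, not the paper's regularized variant), the classical Rényi divergence of order $\alpha$ that these regularized divergences build on is, for $P \ll Q$:

```latex
\[
  R_\alpha(P \,\|\, Q)
  = \frac{1}{\alpha - 1}
  \log \int \left( \frac{dP}{dQ} \right)^{\!\alpha} dQ,
  \qquad \alpha \in (0,1) \cup (1,\infty).
\]
```

Per the abstract, the regularized family additionally constrains the associated variational formula to a chosen function space; the exact regularized definition is not reproduced here.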
no code implementations • 2 Feb 2022 • Jeremiah Birrell, Markos A. Katsoulakis, Luc Rey-Bellet, Wei Zhu
Generative adversarial networks (GANs), a class of distribution-learning methods based on a two-player game between a generator and a discriminator, can generally be formulated as a min-max problem based on the variational representation of a divergence between the unknown and the generated distributions.
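As one standard instance of this min-max structure (the Wasserstein GAN, cited here only as a familiar example rather than this paper's construction), the divergence is the $1$-Wasserstein distance and its variational (Kantorovich-Rubinstein dual) representation gives:

```latex
% Wasserstein GAN objective: generator G_\theta maps noise z ~ p_z to
% samples; the discriminator g ranges over 1-Lipschitz functions.
\[
  \min_{\theta} \; \max_{\|g\|_{\mathrm{Lip}} \le 1} \;
  \mathbb{E}_{x \sim P_{\mathrm{data}}}[g(x)]
  - \mathbb{E}_{z \sim p_z}\!\big[ g(G_\theta(z)) \big]
\]
```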
no code implementations • 29 Sep 2021 • Jeremiah Birrell, Markos A. Katsoulakis, Yannis Pantazis, Dipjyoti Paul, Anastasios Tsourtis
Unfortunately, approximating the expectations inherent in variational formulas by statistical averages can be problematic due to high statistical variance, e.g., exponentially large variance for the Kullback-Leibler divergence and certain estimators.
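As a minimal illustration of this variance issue (a hedged numpy sketch, not any estimator from the paper; the distributions, sample sizes, and names below are chosen purely for illustration): the Donsker-Varadhan representation $\mathrm{KL}(P\|Q) = \sup_g \{\mathbb{E}_P[g] - \log \mathbb{E}_Q[e^g]\}$ involves the log of an exponential moment whose sample estimate degrades rapidly as the distributions separate, even when the optimal $g$ is plugged in exactly.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_ratio(x, mu_p, mu_q, sigma=1.0):
    # log dP/dQ for P = N(mu_p, sigma^2), Q = N(mu_q, sigma^2)
    return ((x - mu_q) ** 2 - (x - mu_p) ** 2) / (2.0 * sigma ** 2)

def dv_kl_estimate(mu_p, mu_q, n=10_000):
    # Donsker-Varadhan: KL(P||Q) = sup_g { E_P[g] - log E_Q[e^g] }.
    # We plug in the optimal g = log dP/dQ, so all remaining error is
    # Monte Carlo noise, dominated by the log E_Q[e^g] term.
    xp = rng.normal(mu_p, 1.0, n)  # samples from P
    xq = rng.normal(mu_q, 1.0, n)  # samples from Q
    term_p = log_ratio(xp, mu_p, mu_q).mean()
    term_q = np.log(np.exp(log_ratio(xq, mu_p, mu_q)).mean())
    return term_p - term_q

for mu in (0.5, 2.0, 4.0):
    true_kl = mu ** 2 / 2.0  # closed form: KL(N(mu,1) || N(0,1))
    ests = [dv_kl_estimate(mu, 0.0) for _ in range(20)]
    print(f"true KL = {true_kl:5.2f}   "
          f"estimate: mean = {np.mean(ests):6.2f}, std = {np.std(ests):5.2f}")
```

As the Gaussians separate, the run-to-run spread of the estimate grows dramatically even though the optimizer is known exactly, illustrating why naive statistical averaging of variational formulas can fail.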
no code implementations • 11 Nov 2020 • Jeremiah Birrell, Paul Dupuis, Markos A. Katsoulakis, Yannis Pantazis, Luc Rey-Bellet
We develop a rigorous and general framework for constructing information-theoretic divergences that subsume both $f$-divergences and integral probability metrics (IPMs), such as the $1$-Wasserstein distance.
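Schematically, and omitting technical refinements in the paper's exact definition, the idea is to restrict the Legendre-transform variational formula for an $f$-divergence to a function space $\Gamma$:

```latex
\[
  D^{\Gamma}_{f}(P \,\|\, Q)
  \;=\; \sup_{g \in \Gamma}
  \Big\{ \mathbb{E}_P[g] - \mathbb{E}_Q\big[f^{*}(g)\big] \Big\},
  \qquad f^{*}(y) = \sup_{x}\,\{xy - f(x)\}.
\]
```

Taking $\Gamma$ to be all bounded measurable functions recovers the $f$-divergence $D_f$, while choosing $f$ so that $f^{*}$ is the identity collapses the objective to $\sup_{g \in \Gamma}\{\mathbb{E}_P[g] - \mathbb{E}_Q[g]\}$, the $\Gamma$-IPM (e.g., the $1$-Wasserstein distance when $\Gamma$ is the $1$-Lipschitz functions).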
1 code implementation • 7 Jul 2020 • Jeremiah Birrell, Paul Dupuis, Markos A. Katsoulakis, Luc Rey-Bellet, Jie Wang
We further show that this Rényi variational formula holds over a range of function spaces; this leads to a formula for the optimizer under very weak assumptions and is also key in our development of a consistency theory for Rényi divergence estimators.
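Schematically, and with normalization conventions that differ across papers (this is a sketch, not a verbatim statement of the paper's theorem), a Rényi variational formula of this Donsker-Varadhan type reads:

```latex
\[
  \frac{1}{\alpha(\alpha-1)}
  \log \mathbb{E}_P\!\left[ \left( \frac{dQ}{dP} \right)^{\!\alpha} \right]
  = \sup_{g}
  \left\{
    \frac{1}{\alpha-1} \log \mathbb{E}_Q\!\left[ e^{(\alpha-1)g} \right]
    - \frac{1}{\alpha} \log \mathbb{E}_P\!\left[ e^{\alpha g} \right]
  \right\}
\]
```

The right-hand side is invariant under constant shifts of $g$, and plugging in $g = \log(dQ/dP)$ attains the left-hand side, consistent with the optimizer formula mentioned in the abstract.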
no code implementations • 15 Jun 2020 • Jeremiah Birrell, Markos A. Katsoulakis, Yannis Pantazis
Recently, variational representations of divergences have gained popularity in machine learning as a tractable and scalable approach for training probabilistic models and for statistically differentiating between data distributions.