no code implementations • 22 May 2023 • Ziyu Chen, Markos A. Katsoulakis, Luc Rey-Bellet, Wei Zhu
Group-invariant generative adversarial networks (GANs) are a class of GANs whose generators and discriminators are hardwired with group symmetries.
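The paper's hardwired architectures are not reproduced here, but one standard way to build a discriminator that is exactly invariant under a finite group is to average its scores over the group orbit. A minimal NumPy sketch for the rotation group $C_4$ acting on square images (the `disc` scoring function is a hypothetical stand-in, not the paper's model):

```python
import numpy as np

def disc(x):
    """Hypothetical stand-in discriminator: maps an image to a scalar score."""
    return float(np.tanh(x).sum())

def c4_invariant_disc(x):
    """Symmetrize the discriminator over the rotation group C4.

    Averaging the scores of all four 90-degree rotations yields a
    function that is exactly invariant: rotating the input cannot
    change the output.
    """
    return float(np.mean([disc(np.rot90(x, k)) for k in range(4)]))

# Invariance check on a random "image".
x = np.random.default_rng(0).standard_normal((8, 8))
assert np.isclose(c4_invariant_disc(x), c4_invariant_disc(np.rot90(x)))
```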
no code implementations • 3 Feb 2023 • Ziyu Chen, Markos A. Katsoulakis, Luc Rey-Bellet, Wei Zhu
We rigorously quantify the improvement in the sample complexity of variational divergence estimation for group-invariant distributions.
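Schematically (notation assumed here for illustration, not taken from the paper): when $P$ and $Q$ are invariant under a group $G$ and the test-function class $\Gamma$ is convex and closed under composition with group elements, the variational objective is unchanged by replacing each test function with its group average, so the supremum may be restricted to the smaller invariant subclass,

\[
\sup_{\gamma \in \Gamma} \left\{ \mathbb{E}_P[\gamma] - \mathbb{E}_Q[\gamma] \right\}
= \sup_{\gamma \in \Gamma_G} \left\{ \mathbb{E}_P[\gamma] - \mathbb{E}_Q[\gamma] \right\},
\qquad
\Gamma_G := \{ \gamma \in \Gamma : \gamma \circ g = \gamma \ \text{for all } g \in G \},
\]

and a smaller effective function class is precisely what improves the sample complexity of the resulting estimators.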
1 code implementation • 31 Oct 2022 • Hyemin Gu, Panagiota Birmpa, Yannis Pantazis, Luc Rey-Bellet, Markos A. Katsoulakis
We build a new class of generative algorithms capable of efficiently learning an arbitrary target distribution from possibly scarce, high-dimensional data and of subsequently generating new samples.
1 code implementation • 10 Oct 2022 • Jeremiah Birrell, Yannis Pantazis, Paul Dupuis, Markos A. Katsoulakis, Luc Rey-Bellet
We propose a new family of regularized Rényi divergences parametrized not only by the order $\alpha$ but also by a variational function space.
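For reference, the classical (unregularized) Rényi divergence of order $\alpha$ is

\[
R_\alpha(P \,\|\, Q) = \frac{1}{\alpha - 1} \log \int \left( \frac{dP}{dQ} \right)^{\alpha} dQ,
\qquad \alpha \in (0,1) \cup (1,\infty),
\]

which recovers the Kullback-Leibler divergence in the limit $\alpha \to 1$; the regularized variants and the role of the variational function space are the paper's contribution and are not reproduced here.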
no code implementations • 2 Feb 2022 • Jeremiah Birrell, Markos A. Katsoulakis, Luc Rey-Bellet, Wei Zhu
Generative adversarial networks (GANs), a class of distribution-learning methods built on a two-player game between a generator and a discriminator, can generally be formulated as a minmax problem via the variational representation of a divergence between the unknown and the generated distributions.
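A concrete instance of such a variational representation (the classical Donsker-Varadhan formula for the KL divergence, stated as standard background rather than as the paper's general family) is $\mathrm{KL}(P \,\|\, Q) = \sup_{\phi} \{ \mathbb{E}_P[\phi] - \log \mathbb{E}_Q[e^{\phi}] \}$. A minimal NumPy sketch of the inner objective that a GAN-style loop would maximize over the discriminator $\phi$ and minimize over the generator producing the $Q$-samples:

```python
import numpy as np

def dv_objective(phi, x_p, x_q):
    """Donsker-Varadhan lower bound on KL(P || Q).

    phi : candidate test function ("discriminator"), any callable.
    x_p : samples from the unknown distribution P.
    x_q : samples from the generated distribution Q.
    """
    return np.mean(phi(x_p)) - np.log(np.mean(np.exp(phi(x_q))))

# Sanity check with P = N(1, 1), Q = N(0, 1): the optimizer is the
# log-likelihood ratio phi*(x) = x - 1/2, and KL(P || Q) = 1/2.
rng = np.random.default_rng(0)
x_p = rng.normal(1.0, 1.0, 100_000)
x_q = rng.normal(0.0, 1.0, 100_000)
print(dv_objective(lambda x: x - 0.5, x_p, x_q))  # approximately 0.5
```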
no code implementations • 17 Jul 2021 • Panagiota Birmpa, Jinchao Feng, Markos A. Katsoulakis, Luc Rey-Bellet
Probabilistic graphical models are a fundamental tool in probabilistic modeling, machine learning and artificial intelligence.
no code implementations • 11 Nov 2020 • Jeremiah Birrell, Paul Dupuis, Markos A. Katsoulakis, Yannis Pantazis, Luc Rey-Bellet
We develop a rigorous and general framework for constructing information-theoretic divergences that subsume both $f$-divergences and integral probability metrics (IPMs), such as the $1$-Wasserstein distance.
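The two endpoints being unified have well-known variational forms (standard background; $f^*$ denotes the convex conjugate of $f$):

\[
D_f(P \,\|\, Q) = \sup_{\gamma} \left\{ \mathbb{E}_P[\gamma] - \mathbb{E}_Q[f^*(\gamma)] \right\},
\qquad
W_1(P, Q) = \sup_{\mathrm{Lip}(\gamma) \le 1} \left\{ \mathbb{E}_P[\gamma] - \mathbb{E}_Q[\gamma] \right\},
\]

and the framework interpolates between them by combining a choice of $f$ with a constrained test-function space.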
1 code implementation • 7 Jul 2020 • Jeremiah Birrell, Paul Dupuis, Markos A. Katsoulakis, Luc Rey-Bellet, Jie Wang
We further show that this Rényi variational formula holds over a range of function spaces; this leads to a formula for the optimizer under very weak assumptions and is also key in our development of a consistency theory for Rényi divergence estimators.
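The paper's estimators optimize a variational formula over a function space; as a simpler point of reference (not the paper's method), a plug-in Monte Carlo estimator with a known density ratio illustrates the kind of consistency such a theory targets:

```python
import numpy as np

def renyi_plugin(alpha, log_ratio, x_q):
    """Plug-in Monte Carlo estimate of R_alpha(P || Q) from Q-samples,
    assuming the density ratio dP/dQ is known (log_ratio = log dP/dQ)."""
    return np.log(np.mean(np.exp(alpha * log_ratio(x_q)))) / (alpha - 1)

# Consistency check with P = N(1, 1), Q = N(0, 1), for which
# log dP/dQ = x - 1/2 and R_alpha(P || Q) = alpha / 2 exactly.
rng = np.random.default_rng(1)
x_q = rng.normal(0.0, 1.0, 500_000)
for alpha in (0.5, 2.0):
    print(alpha, renyi_plugin(alpha, lambda x: x - 0.5, x_q))  # ~ alpha / 2
```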