1 code implementation • 31 Oct 2022 • Hyemin Gu, Panagiota Birmpa, Yannis Pantazis, Luc Rey-Bellet, Markos A. Katsoulakis
We build a new class of generative algorithms capable of efficiently learning an arbitrary target distribution from possibly scarce, high-dimensional data and subsequently generating new samples.
1 code implementation • 10 Oct 2022 • Jeremiah Birrell, Yannis Pantazis, Paul Dupuis, Markos A. Katsoulakis, Luc Rey-Bellet
We propose a new family of regularized Rényi divergences parametrized not only by the order $\alpha$ but also by a variational function space.
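For orientation, the classical (unregularized) Rényi divergence that this family generalizes is recalled below; the regularization through a variational function space is specific to the paper and is not reproduced here.

```latex
% Classical R\'enyi divergence of order \alpha (background only; the paper's
% regularized, function-space-parametrized family generalizes this object).
R_\alpha(P \,\|\, Q) \;=\; \frac{1}{\alpha - 1}\,
\log \int \Big( \frac{dP}{dQ} \Big)^{\alpha} \, dQ,
\qquad \alpha > 0,\ \alpha \neq 1 .
```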
no code implementations • 29 Sep 2021 • Jeremiah Birrell, Markos A. Katsoulakis, Yannis Pantazis, Dipjyoti Paul, Anastasios Tsourtis
Unfortunately, approximating the expectations inherent in variational formulas by statistical averages can be problematic due to high statistical variance, e.g., variance that grows exponentially for the Kullback-Leibler divergence and certain of its estimators.
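To illustrate the variance issue, here is a minimal NumPy sketch of the Donsker-Varadhan estimator of the KL divergence between two Gaussians; the choice of distributions and the closed-form optimal test function are assumptions made purely for illustration, not the construction studied in the paper.

```python
# Minimal illustration of variance in the Donsker-Varadhan KL estimator.
# (Assumption: Gaussian P, Q and the optimal test function are chosen for
# convenience; this is not the estimator analyzed in the paper.)
import numpy as np

rng = np.random.default_rng(0)
mu, n = 3.0, 10_000                      # shift of P relative to Q, sample size
true_kl = 0.5 * mu**2                    # KL(N(mu,1) || N(0,1))

def g(x):
    # Optimal Donsker-Varadhan test function: log dP/dQ for the two Gaussians.
    return mu * x - 0.5 * mu**2

estimates = []
for _ in range(200):
    xp = rng.normal(mu, 1.0, n)          # samples from P
    xq = rng.normal(0.0, 1.0, n)         # samples from Q
    # Donsker-Varadhan estimator: E_P[g] - log E_Q[exp(g)].
    estimates.append(g(xp).mean() - np.log(np.mean(np.exp(g(xq)))))

print(f"true KL = {true_kl:.2f}")
print(f"mean estimate = {np.mean(estimates):.2f}, std = {np.std(estimates):.2f}")
# The exp(g) term under Q is heavy-tailed, so the estimator's variance grows
# rapidly with mu (hence with the KL value) -- the issue the abstract refers to.
```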
no code implementations • 7 Jun 2021 • Michail Fasoulakis, Evangelos Markakis, Yannis Pantazis, Constantinos Varsos
Our work focuses on extra gradient learning algorithms for finding Nash equilibria in bilinear zero-sum games.
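As a point of reference, a minimal sketch of the plain extra-gradient update on the bilinear zero-sum game min_x max_y x^T A y follows; the matrix, step size and iteration count are illustrative assumptions, and the paper's specific algorithmic variants are not reproduced.

```python
# Plain extra-gradient on the bilinear zero-sum game min_x max_y x^T A y,
# whose unique Nash equilibrium is (x, y) = (0, 0).
# (Illustrative sketch: A, the step size and the iteration count are arbitrary
# choices, not the settings analyzed in the paper.)
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(5, 5))
A /= np.linalg.norm(A, 2)                 # normalize so a fixed step size is safe
x, y = rng.normal(size=5), rng.normal(size=5)
eta = 0.5

for _ in range(5000):
    # Extrapolation (look-ahead) step with the current gradients.
    x_half = x - eta * (A @ y)            # grad_x of x^T A y is A y
    y_half = y + eta * (A.T @ x)          # grad_y of x^T A y is A^T x
    # Update step with the gradients evaluated at the extrapolated point.
    x = x - eta * (A @ y_half)
    y = y + eta * (A.T @ x_half)

# Unlike plain gradient descent-ascent, which cycles or diverges on bilinear
# games, the extra-gradient iterates contract toward the equilibrium.
print(np.linalg.norm(x), np.linalg.norm(y))
```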
no code implementations • 9 Dec 2020 • Anastasios Tsourtis, Yannis Pantazis, Ioannis Tsamardinos
Inferring the driving equations of a dynamical system from population or time-course data is important in several scientific fields such as biochemistry, epidemiology, financial mathematics and many others.
no code implementations • 11 Nov 2020 • Jeremiah Birrell, Paul Dupuis, Markos A. Katsoulakis, Yannis Pantazis, Luc Rey-Bellet
We develop a rigorous and general framework for constructing information-theoretic divergences that subsume both $f$-divergences and integral probability metrics (IPMs), such as the $1$-Wasserstein distance.
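As background on how the two families enter the picture (this is standard material, not the paper's new construction), f-divergences and IPMs both admit variational formulas over test functions; the $1$-Wasserstein distance is the IPM obtained when $\Gamma$ is the set of 1-Lipschitz functions.

```latex
% Standard variational forms (background, not the paper's unified construction):
% f-divergence via the convex conjugate f^* (supremum over measurable g)
D_f(P \,\|\, Q) \;=\; \sup_{g} \Big\{ \mathbb{E}_P[g] - \mathbb{E}_Q[f^*(g)] \Big\},
% integral probability metric over a function class \Gamma
W^{\Gamma}(P, Q) \;=\; \sup_{g \in \Gamma} \Big\{ \mathbb{E}_P[g] - \mathbb{E}_Q[g] \Big\}.
```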
1 code implementation • 13 Aug 2020 • Dipjyoti Paul, Muhammed PV Shifas, Yannis Pantazis, Yannis Stylianou
Intelligibility enhancement, as quantified by the Speech Intelligibility in Bits (SIIB-Gauss) measure, shows that the proposed Lombard-SSDRC TTS system achieves significant relative improvements of 110% to 130% in speech-shaped noise (SSN) and 47% to 140% in competing-speaker noise (CSN) over the state-of-the-art TTS approach.
1 code implementation • 9 Aug 2020 • Dipjyoti Paul, Yannis Pantazis, Yannis Stylianou
In terms of performance, our system has been preferred over the baseline TTS system by 60% versus 15.5% for seen speakers and by 60.9% versus 32.6% for unseen speakers.
Ranked #11 on Speech Synthesis on LibriTTS
no code implementations • 15 Jun 2020 • Jeremiah Birrell, Markos A. Katsoulakis, Yannis Pantazis
Recently, they have gained popularity in machine learning as a tractable and scalable approach for training probabilistic models and for statistically differentiating between data distributions.
no code implementations • 13 Jun 2020 • Maria Christina Velli, George D. Tsibidis, Alexandros Mimidis, Evangelos Skoulas, Yannis Pantazis, Emmanuel Stratakis
Predictive modelling represents an emerging field that combines existing and novel methodologies aimed at rapidly understanding physical mechanisms and concurrently developing new materials, processes and structures.
no code implementations • 11 Jun 2020 • Yannis Pantazis, Dipjyoti Paul, Michail Fasoulakis, Yannis Stylianou, Markos Katsoulakis
In this paper, we propose a novel loss function for training Generative Adversarial Networks (GANs) aiming towards deeper theoretical understanding as well as improved stability and performance for the underlying optimization problem.
no code implementations • 6 Nov 2018 • Yannis Pantazis, Dipjyoti Paul, Michail Fasoulakis, Yannis Stylianou
The impressive success of Generative Adversarial Networks (GANs) is often overshadowed by the difficulties in their training.