no code implementations • 29 Apr 2024 • Gabriel Turinici
Physics-informed neural networks (PINNs) are an extremely powerful paradigm used to solve equations encountered in scientific computing applications.
no code implementations • 4 Mar 2024 • Pierre Brugiere, Gabriel Turinici
Transformer models have been used extensively, with good results, in a wide range of machine learning applications, including Large Language Models and image generation.
no code implementations • 9 Feb 2024 • Stefana Anita, Gabriel Turinici
We present a self-contained proof of the convergence rate of Stochastic Gradient Descent (SGD) when the learning rate follows an inverse-time decay schedule; we then apply the results to the convergence of a modified form of policy-gradient Multi-Armed Bandit (MAB) with $L2$ regularization.
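To illustrate the schedule the abstract refers to, here is a minimal sketch of SGD with an inverse-time decay learning rate, $\eta_t = \eta_0/(1+\lambda t)$, applied to a simple quadratic objective with noisy gradients; the objective, the constants, and the function names are illustrative choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_grad(x):
    # Stochastic gradient of f(x) = ||x||^2 / 2: true gradient plus Gaussian noise.
    return x + 0.1 * rng.standard_normal(x.shape)

def sgd_inverse_time_decay(x0, eta0=0.5, lam=1.0, steps=5000):
    x = np.array(x0, dtype=float)
    for t in range(steps):
        eta = eta0 / (1.0 + lam * t)  # inverse-time decay schedule
        x -= eta * noisy_grad(x)
    return x

# The iterates drift toward the minimizer x = 0 despite the gradient noise.
x_final = sgd_inverse_time_decay([2.0, -3.0])
```

The decaying step size is what tolerates the noise: the steps shrink fast enough for the noise to average out, yet their sum diverges, so the dynamics can still reach the minimizer.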
no code implementations • 8 Dec 2023 • Gabriel Turinici, Pierre Brugiere
We introduce Onflow, a reinforcement learning technique that enables online optimization of portfolio allocation policies based on gradient flows.
no code implementations • 22 Nov 2023 • Gabriel Turinici
The Cover universal portfolio (UP from now on) has many interesting theoretical and numerical properties and has been investigated for a long time.
no code implementations • 17 Jan 2023 • Gabriel Turinici
Quantization of a probability measure means representing it with a finite set of Dirac masses that approximates the input distribution well enough (in some metric space of probability measures).
no code implementations • 15 Dec 2022 • Gabriel Turinici
We describe a measure quantization procedure, i.e., an algorithm which finds the best approximation of a target probability law (and, more generally, of a signed finite-variation measure) by a sum of $Q$ Dirac masses ($Q$ being the quantization parameter).
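As a sketch of what such a quantization produces, the snippet below approximates a target law (a standard normal, represented by samples) with $Q$ weighted Dirac masses via a classic Lloyd/K-means fixed point; the paper's actual algorithm may differ, and every numerical choice here (target law, $Q$, iteration count) is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
samples = rng.standard_normal(10_000)  # stand-in for the target probability law
Q = 5                                  # quantization parameter: number of Dirac masses

# Initialize the Dirac locations at quantiles of the samples.
locations = np.quantile(samples, (np.arange(Q) + 0.5) / Q)

for _ in range(50):  # Lloyd iterations
    # Assign each sample to its nearest Dirac location...
    idx = np.abs(samples[:, None] - locations[None, :]).argmin(axis=1)
    # ...then move each location to the mean of its cell.
    locations = np.array([samples[idx == q].mean() for q in range(Q)])

# Weight of each Dirac mass = fraction of samples in its cell.
weights = np.bincount(idx, minlength=Q) / samples.size
```

The result is a discrete measure $\sum_q w_q\,\delta_{x_q}$ whose locations and weights summarize the target distribution.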
no code implementations • 19 Feb 2022 • Gabriel Turinici
Decoder-based generative machine learning algorithms, such as Generative Adversarial Networks (GANs), Variational Auto-Encoders (VAEs), and Transformers, show impressive results when constructing objects similar to those in a training ensemble.
no code implementations • 7 Feb 2022 • Gabriel Turinici
Generative deep neural networks used in machine learning, such as Variational Auto-Encoders (VAEs) and Generative Adversarial Networks (GANs), produce a new object each time they are asked to do so, under the constraint that the new objects remain similar to a list of examples given as input.
no code implementations • 18 Dec 2020 • Gabriel Turinici
Fitting neural networks often resorts to stochastic (or similar) gradient descent, which is a noise-tolerant (and efficient) resolution of the gradient descent dynamics.
no code implementations • 15 Dec 2020 • Gabriel Turinici
Deterministic compartmental models have been used extensively in modeling epidemic propagation.
no code implementations • 20 Feb 2020 • Imen Ayadi, Gabriel Turinici
The minimization of the loss function is of paramount importance in deep neural networks.
no code implementations • 29 Nov 2019 • Gabriel Turinici
The quality of generative models (such as Generative Adversarial Networks and Variational Auto-Encoders) depends heavily on the choice of a good probability distance.
no code implementations • 7 Jun 2019 • Gabriel Turinici
In quantum control, the robustness with respect to uncertainties in the system's parameters or driving field characteristics is of paramount importance and has been studied theoretically, numerically and experimentally.