Search Results for author: Panos Parpas

Found 8 papers, 0 papers with code

Data-driven initialization of deep learning solvers for Hamilton-Jacobi-Bellman PDEs

no code implementations19 Jul 2022 Anastasia Borovykh, Dante Kalise, Alexis Laignelet, Panos Parpas

A deep learning approach for approximating the Hamilton-Jacobi-Bellman partial differential equation (HJB PDE) associated with the Nonlinear Quadratic Regulator (NLQR) problem.
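
A minimal sketch of the underlying idea (not the paper's method): fit a value function by minimizing the HJB residual on sample points, in the spirit of PDE-residual training. A one-parameter quadratic ansatz stands in for the neural network, and the 1-D linear-quadratic problem, its coefficients, and the brute-force scan are all illustrative assumptions.

```python
import numpy as np

# Assumed 1-D dynamics x' = a*x + b*u with cost integrand q*x^2 + r*u^2.
a, b, q, r = -1.0, 1.0, 1.0, 1.0

# Ansatz V(x) = w * x^2, so V'(x) = 2*w*x. Minimizing over u in the HJB
# equation 0 = min_u [q x^2 + r u^2 + V'(x)(a x + b u)] gives
# u* = -b V'(x) / (2r); substituting back, the residual is
#   R(x; w) = x^2 * (q + 2*a*w - b^2 * w^2 / r).
def hjb_loss(w, xs):
    res = xs**2 * (q + 2 * a * w - b**2 * w**2 / r)
    return np.mean(res**2)

xs = np.linspace(-2.0, 2.0, 101)        # collocation points
ws = np.linspace(0.0, 1.0, 10001)       # brute-force parameter scan
losses = np.array([hjb_loss(w, xs) for w in ws])
w_star = ws[np.argmin(losses)]

# For these coefficients the Riccati equation gives w = sqrt(2) - 1 exactly.
print(w_star)  # ~0.4142
```

The same residual-minimization principle scales to high dimensions by replacing the scalar ansatz with a network and the grid scan with stochastic gradient descent.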

On Flat Minima, Large Margins and Generalizability

no code implementations1 Jan 2021 Daniel Lengyel, Nicholas Jennings, Panos Parpas, Nicholas Kantas

The intuitive connection to robustness and convincing empirical evidence have made the flatness of the loss surface an attractive measure of generalizability for neural networks.

On stochastic mirror descent with interacting particles: convergence properties and variance reduction

no code implementations15 Jul 2020 Anastasia Borovykh, Nikolas Kantas, Panos Parpas, Grigorios A. Pavliotis

A second alternative is to use a fixed step size, run independent replicas of the algorithm, and average their iterates.
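
The averaging idea can be sketched in a few lines (illustrative, not the paper's algorithm): with a fixed step size, each replica of a stochastic gradient method fluctuates around the optimum, and averaging independent replicas shrinks the variance roughly by the number of replicas. The quadratic objective, noise model, and all constants below are assumptions for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

def run_replica(steps=2000, eta=0.1, noise=1.0):
    # Fixed-step SGD on f(x) = x^2 / 2 with additive gradient noise.
    x = 5.0
    for _ in range(steps):
        g = x + noise * rng.standard_normal()   # noisy gradient
        x -= eta * g
    return x

n = 100
finals = np.array([run_replica() for _ in range(n)])
single_err = np.mean(finals**2)     # mean squared error of one replica
averaged_err = np.mean(finals)**2   # squared error of the replica average

print(single_err, averaged_err)     # averaging shrinks the error ~1/n
```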

Towards Robust and Stable Deep Learning Algorithms for Forward Backward Stochastic Differential Equations

no code implementations25 Oct 2019 Batuhan Güler, Alexis Laignelet, Panos Parpas

Applications in quantitative finance such as optimal trade execution, risk management of options, and optimal asset allocation involve the solution of high dimensional and nonlinear Partial Differential Equations (PDEs).
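
Deep FBSDE solvers build on the stochastic (Feynman-Kac) representation of such PDEs, which the following sketch illustrates; it is a plain Monte Carlo estimate, not the paper's algorithm, and the heat equation, terminal condition, and sample sizes are assumptions. For u_t + 0.5*Lap(u) = 0 with u(T, x) = g(x), the solution is u(t, x) = E[g(x + W_{T-t})], computable by simulation in any dimension.

```python
import numpy as np

rng = np.random.default_rng(1)

def mc_solution(x, t, T, g, n_paths=200_000):
    # Monte Carlo estimate of u(t, x) = E[g(x + W_{T-t})].
    d = x.shape[0]
    w = rng.standard_normal((n_paths, d)) * np.sqrt(T - t)
    return np.mean(g(x + w))

d = 10
x0 = np.zeros(d)
g = lambda y: np.sum(y**2, axis=-1)   # terminal condition g(x) = ||x||^2
u_mc = mc_solution(x0, t=0.0, T=1.0, g=g)

# Exact value here: ||x0||^2 + d*(T - t) = 10.
print(u_mc)  # ~10
```

FBSDE methods extend this representation to nonlinear PDEs by learning the solution's gradient along simulated paths.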

Management

The sharp, the flat and the shallow: Can weakly interacting agents learn to escape bad minima?

no code implementations10 May 2019 Nikolas Kantas, Panos Parpas, Grigorios A. Pavliotis

As a first step towards understanding this question we formalize it as an optimization problem with weakly interacting agents.
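
The flavor of the question can be shown with a toy sketch (not the paper's model or analysis): agents run gradient descent on a non-convex loss with an extra coupling term pulling each agent toward the group mean, and a lone agent stuck in a shallow local minimum gets dragged into the deep one. The asymmetric double-well loss, agent counts, and coupling strength are all illustrative assumptions.

```python
import numpy as np

# Assumed loss f(x) = x^4 - x^2 + 0.3x: deep minimum near -0.77,
# shallow minimum near 0.62.
f_grad = lambda x: 4 * x**3 - 2 * x + 0.3

def run(coupling, steps=5000, h=0.01):
    # 9 agents near the deep minimum, 1 stuck near the shallow one.
    x = np.array([-0.77] * 9 + [0.62])
    for _ in range(steps):
        x = x - h * (f_grad(x) + coupling * (x - x.mean()))
    return x

stuck = run(coupling=0.0)   # no interaction: last agent stays in the bad minimum
freed = run(coupling=2.0)   # weak interaction drags it into the deep minimum

print(stuck[-1], freed[-1])
```

With zero coupling the dynamics are independent gradient flows; the mean-attraction term is what lets the majority pull the straggler over the barrier.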

BIG-bench Machine Learning

Predict Globally, Correct Locally: Parallel-in-Time Optimal Control of Neural Networks

no code implementations7 Feb 2019 Panos Parpas, Corey Muir

We exploit the links between dynamical systems, optimal control, and neural networks to develop a novel distributed optimization algorithm.
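
The dynamical-systems link can be sketched as follows (an illustration of the general viewpoint, not the paper's algorithm): a residual block x_{k+1} = x_k + h*f(x_k) is a forward-Euler step of the ODE x'(t) = f(x(t)), so layers play the role of time steps, which is what opens the door to parallel-in-time methods. The fixed random weights and horizon below are assumptions for the demo.

```python
import numpy as np

rng = np.random.default_rng(2)
A = 0.1 * rng.standard_normal((4, 4))   # assumed fixed "layer" weights
f = lambda x: np.tanh(A @ x)

def resnet_forward(x, layers, T=1.0):
    h = T / layers                      # step size shrinks with depth
    for _ in range(layers):
        x = x + h * f(x)                # one residual block = one Euler step
    return x

x0 = np.ones(4)
coarse = resnet_forward(x0, layers=10)
fine = resnet_forward(x0, layers=1000)

# Deepening the network refines the time discretization: outputs converge.
print(np.linalg.norm(coarse - fine))   # small
```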

Distributed Optimization

MAGMA: Multi-level accelerated gradient mirror descent algorithm for large-scale convex composite minimization

no code implementations18 Sep 2015 Vahan Hovhannisyan, Panos Parpas, Stefanos Zafeiriou

Composite convex optimization models arise in several applications, and are especially prevalent in inverse problems with a sparsity inducing norm and in general convex optimization with simple constraints.
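
A minimal example of this model class (solved here with plain proximal gradient descent, i.e. ISTA, not the multi-level MAGMA algorithm): minimize 0.5*||Ax - b||^2 + lam*||x||_1, handling the smooth part with a gradient step and the sparsity-inducing l1 norm with its proximal (soft-thresholding) operator. The random problem instance and all constants are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 50, 20
A = rng.standard_normal((m, n))
x_true = np.zeros(n); x_true[:3] = [2.0, -1.5, 1.0]   # sparse ground truth
b = A @ x_true
lam = 0.5
L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient

# Proximal operator of t*||.||_1: componentwise soft-thresholding.
soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x = np.zeros(n)
for _ in range(2000):
    grad = A.T @ (A @ x - b)
    x = soft(x - grad / L, lam / L)    # one proximal-gradient step

obj = 0.5 * np.linalg.norm(A @ x - b)**2 + lam * np.abs(x).sum()
print(obj, np.count_nonzero(np.abs(x) > 1e-6))   # small objective, sparse x
```

Methods like MAGMA accelerate this basic scheme; the composite structure (smooth term plus simple nonsmooth term) is the common thread.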

Face Recognition
