Search Results for author: Zissis Poulos

Found 7 papers, 2 papers with code

A Robust Quantile Huber Loss With Interpretable Parameter Adjustment In Distributional Reinforcement Learning

1 code implementation • 4 Jan 2024 • Parvin Malekzadeh, Konstantinos N. Plataniotis, Zissis Poulos, Zeyu Wang

Distributional Reinforcement Learning (RL) estimates the return distribution mainly by learning quantile values through minimization of the quantile Huber loss, which involves a threshold parameter that is usually chosen heuristically or by hyperparameter search, may not generalize well, and can be suboptimal.
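
For reference, below is a minimal NumPy sketch of the standard quantile Huber loss used in distributional RL (e.g., QR-DQN-style methods); kappa is the threshold parameter in question. It illustrates the standard loss only, not the paper's proposed parameter adjustment.

```python
import numpy as np

def quantile_huber_loss(td_errors, taus, kappa=1.0):
    """Standard quantile Huber loss from distributional RL (e.g., QR-DQN).

    td_errors: TD errors for each quantile estimate, shape (..., n_quantiles)
    taus:      quantile fractions in (0, 1), broadcastable to td_errors
    kappa:     Huber threshold -- the parameter the paper proposes to set in
               an interpretable way instead of by heuristic search.
    """
    abs_err = np.abs(td_errors)
    # Huber loss: quadratic below kappa, linear above.
    huber = np.where(abs_err <= kappa,
                     0.5 * td_errors ** 2,
                     kappa * (abs_err - 0.5 * kappa))
    # Asymmetric quantile weighting.
    weight = np.abs(taus - (td_errors < 0).astype(float))
    return np.mean(weight * huber)

# Example: 32 quantiles with midpoint fractions.
taus = (np.arange(32) + 0.5) / 32
td = np.random.randn(32)
print(quantile_huber_loss(td, taus, kappa=1.0))
```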

Atari Games · Distributional Reinforcement Learning (+1 more)

Gamma and Vega Hedging Using Deep Distributional Reinforcement Learning

1 code implementation • 10 May 2022 • Jay Cao, Jacky Chen, Soroush Farghadani, John Hull, Zissis Poulos, Zeyu Wang, Jun Yuan

We show how D4PG can be used in conjunction with quantile regression to develop a hedging strategy for a trader responsible for derivatives that arrive stochastically and depend on a single underlying asset.
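
As a rough illustration of how quantile estimates can drive a hedging objective, the sketch below collapses predicted hedging-cost quantiles into a mean-plus-standard-deviation risk measure; the objective form and the weight c are assumptions made for illustration, not necessarily the paper's exact formulation.

```python
import numpy as np

def risk_objective(cost_quantiles, c=1.5):
    """Collapse predicted hedging-cost quantiles into a scalar objective.

    cost_quantiles: shape (n_quantiles,), the critic's estimate of the
                    hedging-cost distribution at equally spaced fractions.
    c:              risk-aversion weight (illustrative choice).
    """
    mean = cost_quantiles.mean()   # approximates the expected cost
    std = cost_quantiles.std()     # approximates its dispersion
    # Penalize both expected cost and its variability.
    return mean + c * std

# Example: quantiles of a hypothetical hedging-cost distribution.
q = np.quantile(np.random.lognormal(mean=0.0, sigma=0.3, size=10_000),
                (np.arange(32) + 0.5) / 32)
print(risk_objective(q))
```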

Distributional Reinforcement Learning · Position (+2 more)

Deep Hedging of Derivatives Using Reinforcement Learning

no code implementations • 29 Mar 2021 • Jay Cao, Jacky Chen, John Hull, Zissis Poulos

This paper shows how reinforcement learning can be used to derive optimal hedging strategies for derivatives when there are transaction costs.
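
To make the setting concrete, here is a small sketch of how the cost of a hedging strategy with proportional transaction costs can be accumulated along one simulated price path; the position rule, cost rate, and payoff below are illustrative assumptions rather than the paper's setup.

```python
import numpy as np

def hedging_cost(prices, positions, payoff, cost_rate=0.01):
    """Cost of hedging a short option along one simulated price path.

    prices:    underlying prices S_0..S_T, shape (T+1,)
    positions: units of the underlying held over each step, shape (T,)
    payoff:    option payoff owed at maturity
    cost_rate: proportional transaction cost per unit of underlying traded
    """
    T = len(positions)
    # Gains/losses from holding the hedge as the underlying moves.
    hedge_pnl = np.sum(positions * np.diff(prices))
    # Rebalancing amounts, including the initial purchase and final unwind.
    pos_path = np.concatenate(([0.0], positions, [0.0]))
    trades = np.abs(np.diff(pos_path))
    tc = cost_rate * np.sum(trades * prices[: T + 1])
    return payoff - hedge_pnl + tc

# Example: hedge a short at-the-money call along a random-walk path.
prices = 100 * np.exp(np.cumsum(np.r_[0.0, 0.01 * np.random.randn(30)]))
positions = np.full(30, 0.5)          # naive static half-unit hedge
payoff = max(prices[-1] - 100.0, 0.0)
print(hedging_cost(prices, positions, payoff))
```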

Position · reinforcement-learning (+1 more)

Deep Learning for Exotic Option Valuation

no code implementations • 22 Mar 2021 • Jay Cao, Jacky Chen, John Hull, Zissis Poulos

We refer to this as the model calibration approach (MCA).
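
As a generic illustration of what model calibration means here, the sketch below fits a toy quadratic-smile volatility model to observed implied volatilities by least squares; the model form and data are invented for illustration and are not the paper's.

```python
import numpy as np
from scipy.optimize import minimize

def model_vol(params, log_moneyness):
    """Toy model: implied vol as a quadratic smile in log-moneyness."""
    a, b, c = params
    return a + b * log_moneyness + c * log_moneyness ** 2

def calibrate(market_vols, log_moneyness):
    """Least-squares calibration of model parameters to market implied vols."""
    def objective(params):
        return np.sum((model_vol(params, log_moneyness) - market_vols) ** 2)
    return minimize(objective, x0=[0.2, 0.0, 0.1]).x

# Example with synthetic "market" quotes.
k = np.linspace(-0.3, 0.3, 7)
market = 0.2 - 0.1 * k + 0.3 * k ** 2
print(calibrate(market, k))
```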

Variational Autoencoders: A Hands-Off Approach to Volatility

no code implementations • 7 Feb 2021 • Maxime Bergeron, Nicholas Fung, John Hull, Zissis Poulos

As a dividend of our first step, the synthetic surfaces produced can also be used in stress testing, in market simulators for developing quantitative investment strategies, and for the valuation of exotic options.
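
To show roughly how a trained variational autoencoder can generate synthetic volatility surfaces, the sketch below decodes random latent samples into surface grids; the architecture, grid size, and latent dimension are illustrative assumptions, not the paper's specification.

```python
import torch
import torch.nn as nn

GRID = 8 * 10   # e.g. 8 maturities x 10 moneyness levels (assumed grid)
LATENT = 4      # small latent dimension (assumed)

class VolSurfaceVAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(GRID, 64), nn.ReLU())
        self.to_mu = nn.Linear(64, LATENT)
        self.to_logvar = nn.Linear(64, LATENT)
        self.decoder = nn.Sequential(nn.Linear(LATENT, 64), nn.ReLU(),
                                     nn.Linear(64, GRID), nn.Softplus())

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        # Reparameterization trick: sample latent z from the encoded Gaussian.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.decoder(z), mu, logvar

# After training, synthetic surfaces come from decoding latent samples.
vae = VolSurfaceVAE()
with torch.no_grad():
    z = torch.randn(5, LATENT)               # 5 random latent draws
    surfaces = vae.decoder(z).reshape(5, 8, 10)
print(surfaces.shape)                        # torch.Size([5, 8, 10])
```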

Training CNNs faster with Dynamic Input and Kernel Downsampling

no code implementations • 15 Oct 2019 • Zissis Poulos, Ali Nouri, Andreas Moshovos

We reduce training time in convolutional neural networks (CNNs) with a method that, for some of the mini-batches: a) scales down the resolution of the input images via downsampling, and b) reduces the forward-pass operations by pooling over the convolution filters.
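
A rough PyTorch sketch of the two operations described, applied to selected mini-batches: downsampling the input images and average-pooling the convolution kernels; the selection rule, kernel sizes, and scaling factors are illustrative assumptions, not necessarily the paper's schedule.

```python
import torch
import torch.nn.functional as F

def conv_block(x, weight, bias, cheap=False):
    """One convolution, optionally run in a cheaper 'downsampled' mode.

    weight: full-resolution 5x5 kernels, shape (out_ch, in_ch, 5, 5)
    cheap:  if True, halve the input resolution and shrink kernels to 3x3.
    """
    if cheap:
        # a) scale down the input images.
        x = F.interpolate(x, scale_factor=0.5, mode='bilinear',
                          align_corners=False)
        # b) pool the 5x5 filters down to 3x3 (average over kernel taps).
        weight = F.avg_pool2d(weight, kernel_size=3, stride=1)
        padding = 1
    else:
        padding = 2
    return F.relu(F.conv2d(x, weight, bias, padding=padding))

# Example: apply the cheap mode to, say, every other mini-batch.
w = torch.randn(16, 3, 5, 5)
b = torch.zeros(16)
for step, images in enumerate([torch.randn(8, 3, 32, 32) for _ in range(4)]):
    out = conv_block(images, w, b, cheap=(step % 2 == 0))
    print(step, out.shape)
```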

Bit-Tactical: Exploiting Ineffectual Computations in Convolutional Neural Networks: Which, Why, and How

no code implementations • 9 Mar 2018 • Alberto Delmas, Patrick Judd, Dylan Malone Stuart, Zissis Poulos, Mostafa Mahmoud, Sayeh Sharify, Milos Nikolic, Andreas Moshovos

We show that, during inference with Convolutional Neural Networks (CNNs), 2x to 8x more ineffectual work can be exposed if, instead of targeting those weights and activations that are zero, we target different combinations of value stream properties.
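
As a back-of-the-envelope illustration of "ineffectual work", the sketch below compares how many multiply-accumulate terms are skippable when only zero-valued weights and activations are targeted versus when zero bits inside the quantized values are also counted; the 8-bit quantization and the counting scheme are assumptions made for illustration.

```python
import numpy as np

def ineffectual_fraction(weights, activations, bits=8):
    """Compare two notions of ineffectual work for a dense dot product.

    Zero-value view: a multiplication is ineffectual if either operand is 0.
    Bit-level view:  only the nonzero bits of the weight contribute terms,
                     so zero bits are ineffectual work as well.
    """
    w = np.round(weights * (2 ** (bits - 1) - 1)).astype(np.int32)
    a = np.round(activations * (2 ** (bits - 1) - 1)).astype(np.int32)

    total_macs = w.size
    zero_value = np.mean((w == 0) | (a == 0))

    # Count set bits in each quantized weight magnitude.
    set_bits = np.array([bin(abs(v)).count("1") for v in w.ravel()])
    effectual_terms = np.where(a.ravel() == 0, 0, set_bits)
    bit_level = 1.0 - effectual_terms.sum() / (total_macs * bits)

    return zero_value, bit_level

# Example with sparse, low-magnitude values (as is typical after ReLU).
rng = np.random.default_rng(0)
w = rng.normal(0, 0.05, 4096) * (rng.random(4096) > 0.3)
a = np.clip(rng.normal(0, 0.2, 4096), 0, None)
zv, bl = ineffectual_fraction(w, a)
print(f"zero-value ineffectual: {zv:.2%}, bit-level ineffectual: {bl:.2%}")
```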
