no code implementations • 12 Nov 2023 • Boyang Gu, Anastasia Borovykh
We study whether inputs from the same class can be connected by a continuous path, in original or latent representation space, such that all points on the path are mapped by the neural network model to the same class.
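The path-connectivity question above can be illustrated with a toy sketch. This is purely hypothetical: a hand-set linear model stands in for the neural networks studied in the paper, and the path is the straight line between two inputs of the same predicted class.

```python
import numpy as np

def predict(x, w):
    # binary prediction of a toy linear model (stand-in for the network)
    return int(x @ w > 0)

def path_stays_in_class(x0, x1, w, n_steps=50):
    # sample points on the linear path between x0 and x1 and check that
    # the predicted class is constant along the whole path
    labels = [predict((1 - t) * x0 + t * x1, w)
              for t in np.linspace(0.0, 1.0, n_steps)]
    return len(set(labels)) == 1

w = np.array([1.0, -0.5])    # illustrative weights
x0 = np.array([2.0, 1.0])    # w·x0 = 1.5 > 0, positive class
x1 = np.array([1.0, -1.0])   # w·x1 = 1.5 > 0, positive class

print(path_stays_in_class(x0, x1, w))  # True: a halfspace class region is convex
```

For a linear model the positive-class region is a halfspace, hence convex, so the check always passes; the interesting cases in the paper concern nonlinear networks, where same-class regions need not be connected.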
1 code implementation • 29 Sep 2023 • Jiayuan Ye, Anastasia Borovykh, Soufiane Hayou, Reza Shokri
We introduce an analytical framework to quantify the changes in a machine learning algorithm's output distribution following the inclusion of a few data points in its training set, a notion we define as leave-one-out distinguishability (LOOD).
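A minimal sketch of the leave-one-out idea: compare a model's output at a query point when trained with versus without a single record. Ridge regression stands in for the learning algorithm, and the absolute prediction shift is used as a simple proxy; the paper's LOOD framework is more general and concerns output *distributions* under randomized training.

```python
import numpy as np

def ridge_fit(X, y, lam=1e-2):
    # closed-form ridge regression: (X'X + lam I)^{-1} X'y
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=50)
x_query = rng.normal(size=3)

w_full = ridge_fit(X, y)          # trained on all 50 records
w_loo = ridge_fit(X[1:], y[1:])   # trained with the first record left out

# proxy for leave-one-out distinguishability at x_query
lood_proxy = abs(x_query @ w_full - x_query @ w_loo)
print(f"prediction shift from one record: {lood_proxy:.4f}")
```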
no code implementations • 1 Feb 2023 • Anastasia Borovykh, Nikolas Kantas, Panos Parpas, Greg Pavliotis
The privacy preserving properties of Langevin dynamics with additive isotropic noise have been extensively studied.
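A minimal sketch of Langevin dynamics with additive isotropic noise, on the quadratic loss f(w) = ||w||²/2: the update is w ← w − h∇f(w) + √(2h/β)·ξ. The step size h and inverse temperature β here are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

def langevin(w0, grad, h=0.1, beta=10.0, n_steps=1000):
    # discretized Langevin dynamics with isotropic Gaussian noise
    w = np.array(w0, dtype=float)
    for _ in range(n_steps):
        noise = rng.normal(size=w.shape)
        w = w - h * grad(w) + np.sqrt(2.0 * h / beta) * noise
    return w

grad = lambda w: w                  # gradient of ||w||^2 / 2
w_final = langevin(np.ones(2), grad)
print(w_final)                      # fluctuates around the minimum at 0
```

The injected noise is what gives the iterates their privacy-preserving character: the output is a sample from a distribution concentrated near the minimizer rather than a deterministic point.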
no code implementations • 19 Jul 2022 • Anastasia Borovykh, Dante Kalise, Alexis Laignelet, Panos Parpas
We present a deep learning approach for approximating the Hamilton-Jacobi-Bellman partial differential equation (HJB PDE) associated with the Nonlinear Quadratic Regulator (NLQR) problem.

no code implementations • 28 May 2021 • Fan Mo, Anastasia Borovykh, Mohammad Malekzadeh, Soteris Demetriou, Deniz Gündüz, Hamed Haddadi
Our proposed framework enables clients to localize and quantify private information leakage in a layer-wise manner, providing a better understanding of the sources of information leakage in collaborative learning, which future studies can use to benchmark new attacks and defense mechanisms.
1 code implementation • 25 May 2021 • Mohammad Malekzadeh, Anastasia Borovykh, Deniz Gündüz
It is known that deep neural networks, trained for the classification of non-sensitive target attributes, can reveal sensitive attributes of their input data through internal representations extracted by the classifier.
no code implementations • 17 Oct 2020 • Fan Mo, Anastasia Borovykh, Mohammad Malekzadeh, Hamed Haddadi, Soteris Demetriou
Training deep neural networks via federated learning allows clients to share only the model trained on their data, rather than the original data itself.
no code implementations • 15 Jul 2020 • Anastasia Borovykh, Nikolas Kantas, Panos Parpas, Grigorios A. Pavliotis
A second alternative is to use a fixed step size, run independent replicas of the algorithm, and average their outputs.
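The fixed-step-size variant described above can be sketched as follows: each replica runs noisy gradient descent on the quadratic f(w) = w²/2, and averaging the final iterates reduces the variance of the estimate. All numerical values here are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

def run_replica(h=0.2, sigma=1.0, n_steps=500):
    # noisy gradient descent on f(w) = w^2 / 2 with a fixed step size h
    w = 1.0
    for _ in range(n_steps):
        w = w - h * w + sigma * np.sqrt(h) * rng.normal()
    return w

replicas = np.array([run_replica() for _ in range(200)])
single_estimate = replicas[0]
averaged_estimate = replicas.mean()   # variance shrinks like 1/n_replicas
print(f"single: {single_estimate:.3f}, averaged: {averaged_estimate:.3f}")
```

Because the replicas are independent, the averaged iterate concentrates around the minimizer even though each replica fluctuates with the stationary variance induced by the fixed step size.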
1 code implementation • 14 Feb 2020 • Remco van der Meer, Cornelis Oosterlee, Anastasia Borovykh
We then derive a choice for the scaling parameter that is optimal with respect to a measure of relative error.
Numerical Analysis
no code implementations • 31 Jan 2020 • Shuaiqiang Liu, Álvaro Leitao, Anastasia Borovykh, Cornelis W. Oosterlee
For the implied dividend yield, we formulate the inverse problem as a calibration problem and simultaneously determine the implied volatility and the dividend yield.
no code implementations • 18 Dec 2019 • Anastasia Borovykh
We present a novel methodology based on a Taylor expansion of the network output for obtaining analytical expressions for the expected value of the network weights and output under stochastic training.
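The Taylor-expansion idea can be illustrated on a scalar function: for zero-mean weight noise of variance s², a second-order expansion gives E[f(w + ε)] ≈ f(w) + ½ f″(w) s². The check below uses f(w) = w³ as an illustrative choice, not the paper's network output.

```python
import numpy as np

rng = np.random.default_rng(4)

f = lambda w: w ** 3
f_pp = lambda w: 6.0 * w              # second derivative of w^3

w, s2 = 1.0, 0.01
eps = rng.normal(scale=np.sqrt(s2), size=1_000_000)

monte_carlo = f(w + eps).mean()       # empirical expectation under noise
taylor = f(w) + 0.5 * f_pp(w) * s2    # second-order Taylor approximation
print(f"Monte Carlo: {monte_carlo:.4f}, Taylor: {taylor:.4f}")
```

For this cubic the approximation is exact up to the (zero) third moment of the Gaussian noise, so the two values agree closely.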
no code implementations • 23 Apr 2019 • Shuaiqiang Liu, Anastasia Borovykh, Lech A. Grzelak, Cornelis W. Oosterlee
A data-driven approach called CaNN (Calibration Neural Network) is proposed to calibrate financial asset price models using an Artificial Neural Network (ANN).
no code implementations • 14 Feb 2019 • Anastasia Borovykh, Cornelis W. Oosterlee, Sander M. Bohte
In this paper we study the generalization capabilities of fully-connected neural networks trained in the context of time series forecasting.
no code implementations • 25 Oct 2018 • Anastasia Borovykh
In this paper we cast the well-known convolutional neural network in a Gaussian process perspective.
3 code implementations • 14 Mar 2017 • Anastasia Borovykh, Sander Bohte, Cornelis W. Oosterlee
The proposed network contains stacks of dilated convolutions that allow it to access a broad range of history when forecasting, together with a ReLU activation function; conditioning is performed by applying multiple convolutional filters in parallel to separate time series, which allows for fast processing of the data and exploitation of the correlation structure between the multivariate time series.
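A hedged sketch of the building block described above: a stack of dilated causal convolutions with ReLU activations. The weights here are random, so this only illustrates how the receptive field grows with dilation, not a trained forecaster.

```python
import numpy as np

rng = np.random.default_rng(5)

def dilated_causal_conv(x, w, dilation):
    # y[t] = sum_k w[k] * x[t - k * dilation], with zero left-padding
    # so that the output is causal and has the same length as x
    k = len(w)
    pad = (k - 1) * dilation
    xp = np.concatenate([np.zeros(pad), x])
    return np.array([
        sum(w[j] * xp[pad + t - j * dilation] for j in range(k))
        for t in range(len(x))
    ])

x = rng.normal(size=64)              # a toy univariate series
h = x
for dilation in (1, 2, 4, 8):        # receptive field doubles per layer
    w = rng.normal(size=2)           # kernel width 2, as in WaveNet-style stacks
    h = np.maximum(dilated_causal_conv(h, w, dilation), 0.0)  # ReLU

print(h.shape)                       # same length as the input, causal
```

With kernel width 2 and dilations 1, 2, 4, 8, each output value depends on the previous 16 time steps, which is the mechanism by which the stack accesses a broad history at low cost.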