Search Results for author: Juan-Pablo Ortega

Found 26 papers, 4 papers with code

State-Space Systems as Dynamic Generative Models

no code implementations12 Apr 2024 Juan-Pablo Ortega, Florian Rossmannek

The results in this paper constitute a significant stochastic generalization of sufficient conditions for the deterministic echo state property: the stochastic echo state property can be satisfied under contractivity conditions that are strictly weaker than those in deterministic situations.
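As an illustration of the deterministic side of this statement (not of the paper's weaker stochastic conditions), a minimal sketch: when the state map of an echo state network is a contraction, e.g. a tanh network whose connectivity matrix has spectral norm below one, trajectories started from different initial states and driven by the same input converge, which is the echo state property. All parameter values below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
W = rng.standard_normal((n, n))
W *= 0.9 / np.linalg.norm(W, 2)   # spectral norm 0.9 < 1: the state map contracts
win = rng.standard_normal(n)

def step(x, u):
    # tanh is 1-Lipschitz, so ||step(x,u) - step(y,u)|| <= 0.9 ||x - y||
    return np.tanh(W @ x + win * u)

u = rng.standard_normal(300)                           # one shared input sequence
x, y = rng.standard_normal(n), rng.standard_normal(n)  # two different initial states
for ut in u:
    x, y = step(x, ut), step(y, ut)

print(np.linalg.norm(x - y))  # contracted by a factor ~0.9**300: essentially zero
```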

A Structure-Preserving Kernel Method for Learning Hamiltonian Systems

no code implementations15 Mar 2024 Jianyu Hu, Juan-Pablo Ortega, Daiying Yin

A structure-preserving kernel ridge regression method is presented that allows the recovery of potentially high-dimensional and nonlinear Hamiltonian functions out of datasets made of noisy observations of Hamiltonian vector fields.

regression
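A minimal sketch of plain (not structure-preserving) Gaussian-kernel ridge regression recovering a toy pendulum-like Hamiltonian from noisy function values; the paper's method instead learns from noisy observations of Hamiltonian vector fields and preserves structure, which this sketch does not attempt. The kernel width, ridge parameter, and target H are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)

def gauss_kernel(A, B, sigma=1.0):
    # squared Euclidean distances, then the Gaussian (RBF) kernel
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

# noisy samples of a toy Hamiltonian H(q, p) = p^2/2 + (1 - cos q)
X = rng.uniform(-2, 2, size=(80, 2))
y = X[:, 1] ** 2 / 2 + (1 - np.cos(X[:, 0])) + 0.01 * rng.standard_normal(80)

lam = 1e-3                                              # ridge regularization
K = gauss_kernel(X, X)
alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)    # kernel ridge coefficients

Xtest = rng.uniform(-2, 2, size=(5, 2))
pred = gauss_kernel(Xtest, X) @ alpha
truth = Xtest[:, 1] ** 2 / 2 + (1 - np.cos(Xtest[:, 0]))
print(np.abs(pred - truth).max())   # small: the smooth H is recovered well
```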

Invariant kernels on Riemannian symmetric spaces: a harmonic-analytic approach

no code implementations30 Oct 2023 Nathael Da Costa, Cyrus Mostajeran, Juan-Pablo Ortega, Salem Said

This work aims to prove that the classical Gaussian kernel, when defined on a non-Euclidean symmetric space, is never positive-definite for any choice of parameter.
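The failure of positive-definiteness is easy to observe numerically on the circle, one of the simplest non-Euclidean symmetric spaces: the Gram matrix of the geodesic-distance Gaussian kernel at equally spaced points can have a negative eigenvalue. The point count and bandwidth below are illustrative choices.

```python
import numpy as np

# N equally spaced points on the unit circle, geodesic (arc-length) distance
N, sigma = 16, 5.0
theta = 2 * np.pi * np.arange(N) / N
diff = np.abs(theta[:, None] - theta[None, :])
d = np.minimum(diff, 2 * np.pi - diff)        # geodesic distance on S^1

K = np.exp(-d**2 / (2 * sigma**2))            # "Gaussian" kernel with geodesic distance
eigs = np.linalg.eigvalsh(K)
print(eigs.min())  # negative for these parameters: the Gram matrix is not PSD
```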

Geometric Learning with Positively Decomposable Kernels

no code implementations20 Oct 2023 Nathael Da Costa, Cyrus Mostajeran, Juan-Pablo Ortega, Salem Said

Classical kernel methods are based on positive-definite kernels, which map data spaces into reproducing kernel Hilbert spaces (RKHS).

Learning multi-modal generative models with permutation-invariant encoders and tighter variational bounds

no code implementations1 Sep 2023 Marcel Hirt, Domenico Campolo, Victoria Leong, Juan-Pablo Ortega

To encode latent variables from different modality subsets, Product-of-Experts (PoE) or Mixture-of-Experts (MoE) aggregation schemes have been routinely used and shown to yield different trade-offs, for instance, regarding their generative quality or consistency across multiple modalities.
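For Gaussian experts, the PoE aggregation has a standard closed form: precisions add and the combined mean is the precision-weighted average of the experts' means. A minimal sketch of this textbook rule in one dimension (not the paper's specific encoder architecture):

```python
import numpy as np

def poe(mus, variances):
    """Product of Gaussian experts N(mu_i, var_i): precisions add,
    and the mean is the precision-weighted average of the experts' means."""
    prec = 1.0 / np.asarray(variances)
    var = 1.0 / prec.sum()
    mu = var * (prec * np.asarray(mus)).sum()
    return mu, var

mu, var = poe([0.0, 2.0], [1.0, 1.0])
print(mu, var)  # 1.0 0.5: two unit-variance experts halve the variance
```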

Memory of recurrent networks: Do we compute it right?

1 code implementation2 May 2023 Giovanni Ballarin, Lyudmila Grigoryeva, Juan-Pablo Ortega

Numerical evaluations of the memory capacity (MC) of recurrent neural networks reported in the literature often contradict well-established theoretical bounds.
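A minimal sketch of the usual Monte Carlo estimation of MC that the paper scrutinizes: a sum over lags of the squared correlation between the lagged input and its best linear reconstruction from the reservoir states. Network size, sample length, and lag range are illustrative, and the naive least-squares step shown here is precisely the kind of numerical procedure whose reliability the paper questions.

```python
import numpy as np

rng = np.random.default_rng(2)
n, T, max_lag = 20, 5000, 40
W = rng.standard_normal((n, n))
W *= 0.9 / np.linalg.norm(W, 2)      # ensure the echo state property
win = rng.standard_normal(n)

u = rng.standard_normal(T)           # i.i.d. input
X = np.zeros((T, n))
x = np.zeros(n)
for t in range(T):
    x = np.tanh(W @ x + win * u[t])
    X[t] = x

wash = 100                            # discard transient
mc = 0.0
for k in range(1, max_lag + 1):
    target = u[wash - k : T - k]      # input delayed by k steps
    S = X[wash:]
    beta, *_ = np.linalg.lstsq(S, target, rcond=None)
    mc += np.corrcoef(S @ beta, target)[0, 1] ** 2   # MC_k is in [0, 1]
print(mc)   # naive empirical memory capacity estimate
```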

Infinite-dimensional reservoir computing

no code implementations2 Apr 2023 Lukas Gonon, Lyudmila Grigoryeva, Juan-Pablo Ortega

The reservoir architectures used for the approximation and estimation of elements in the new class are randomly generated echo state networks with either linear or ReLU activation functions.

Generalization Bounds

The Gaussian kernel on the circle and spaces that admit isometric embeddings of the circle

no code implementations21 Feb 2023 Nathaël Da Costa, Cyrus Mostajeran, Juan-Pablo Ortega

On Euclidean spaces, the Gaussian kernel is one of the most widely used kernels in applications.

Reservoir kernels and Volterra series

1 code implementation30 Dec 2022 Lukas Gonon, Lyudmila Grigoryeva, Juan-Pablo Ortega

A universal kernel is constructed whose sections approximate any causal and time-invariant filter in the fading memory category with inputs and outputs in a finite-dimensional Euclidean space.
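A kernel of this flavor can be illustrated, in a much simpler form than the paper's construction, by taking inner products of the terminal states of a shared randomly generated linear state-space system; such a kernel is positive semi-definite by construction because it comes from an explicit feature map. All dimensions below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 30
A = rng.standard_normal((n, n))
A *= 0.8 / np.linalg.norm(A, 2)   # contractive linear reservoir
b = rng.standard_normal(n)

def reservoir_state(seq):
    # linear state-space system driven by the input sequence
    x = np.zeros(n)
    for u in seq:
        x = A @ x + b * u
    return x

seqs = [rng.standard_normal(25) for _ in range(12)]
feats = np.stack([reservoir_state(s) for s in seqs])
K = feats @ feats.T               # kernel = inner product of reservoir states
print(np.linalg.eigvalsh(K).min())  # >= 0 up to round-off: K is PSD by construction
```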

Reservoir Computing for Macroeconomic Forecasting with Mixed Frequency Data

1 code implementation1 Nov 2022 Giovanni Ballarin, Petros Dellaportas, Lyudmila Grigoryeva, Marcel Hirt, Sophie van Huellen, Juan-Pablo Ortega

Macroeconomic forecasting has recently started embracing techniques that can deal with large-scale datasets and series with unequal release periods.

Transport in reservoir computing

no code implementations16 Sep 2022 G Manjunath, Juan-Pablo Ortega

Stochastic state contractivity can be satisfied by systems that are not state-contractive, a condition typically invoked to guarantee the echo state property in reservoir computing.

Expressive Power of Randomized Signature

no code implementations NeurIPS Workshop DLDE 2021 Christa Cuchiero, Lukas Gonon, Lyudmila Grigoryeva, Juan-Pablo Ortega, Josef Teichmann

We consider the question of whether the time evolution of controlled differential equations on general state spaces can be arbitrarily well approximated by (regularized) regressions on features themselves generated by randomly chosen dynamical systems of moderately high dimension.

LEMMA
Transfer Learning

Learning strange attractors with reservoir systems

no code implementations11 Aug 2021 Lyudmila Grigoryeva, Allen Hart, Juan-Pablo Ortega

This paper shows that the celebrated Embedding Theorem of Takens is a particular case of a much more general statement: randomly generated linear state-space representations of generic observations of an invertible dynamical system carry in their wake an embedding of the phase-space dynamics into the chosen Euclidean state space.

Representation Learning

Fading memory echo state networks are universal

no code implementations22 Oct 2020 Lukas Gonon, Juan-Pablo Ortega

Echo state networks (ESNs) have recently been proved to be universal approximants for input/output systems with respect to various $L^p$-type criteria.

Discrete-time signatures and randomness in reservoir computing

no code implementations17 Sep 2020 Christa Cuchiero, Lukas Gonon, Lyudmila Grigoryeva, Juan-Pablo Ortega, Josef Teichmann

A new explanation of the geometric nature of the reservoir computing phenomenon is presented.

Dimension reduction in recurrent networks by canonicalization

no code implementations23 Jul 2020 Lyudmila Grigoryeva, Juan-Pablo Ortega

Many recurrent neural network machine learning paradigms can be formulated using state-space representations.

Dimensionality Reduction

Memory and forecasting capacities of nonlinear recurrent networks

no code implementations22 Apr 2020 Lukas Gonon, Lyudmila Grigoryeva, Juan-Pablo Ortega

The notion of memory capacity, originally introduced for echo state and linear networks with independent inputs, is generalized to nonlinear recurrent networks with stationary but dependent inputs.

Time Series
Time Series Analysis

Approximation Bounds for Random Neural Networks and Reservoir Systems

no code implementations14 Feb 2020 Lukas Gonon, Lyudmila Grigoryeva, Juan-Pablo Ortega

This work studies approximation based on single-hidden-layer feedforward and recurrent neural networks with randomly generated internal weights.

Risk bounds for reservoir computing

no code implementations30 Oct 2019 Lukas Gonon, Lyudmila Grigoryeva, Juan-Pablo Ortega

We analyze the practices of reservoir computing in the framework of statistical learning theory.

Learning Theory

Differentiable reservoir computing

no code implementations16 Feb 2019 Lyudmila Grigoryeva, Juan-Pablo Ortega

That research is complemented in this paper with the characterization of the differentiability of reservoir filters for very general classes of discrete-time deterministic inputs.

Reservoir Computing Universality With Stochastic Inputs

no code implementations7 Jul 2018 Lukas Gonon, Juan-Pablo Ortega

The universal approximation properties with respect to $L^p$-type criteria of three important families of reservoir computers with stochastic discrete-time semi-infinite inputs are shown.

Echo state networks are universal

no code implementations3 Jun 2018 Lyudmila Grigoryeva, Juan-Pablo Ortega

This paper shows that echo state networks are universal uniform approximants in the context of discrete-time fading memory filters with uniformly bounded inputs defined on negative infinite times.
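A minimal sketch of the setting: an echo state network with randomly generated internal weights and a trained linear readout, fit to a simple fading-memory filter of a uniformly bounded input. This illustrates the approximation claim empirically on one toy filter; all parameters are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(5)
n, T = 100, 3000
W = rng.standard_normal((n, n))
W *= 0.8 / np.linalg.norm(W, 2)        # echo state property via contractivity
win = 0.1 * rng.standard_normal(n)     # small input scaling

u = rng.uniform(-1, 1, T)              # uniformly bounded input
y = 0.6 * u + 0.3 * np.roll(u, 1)      # simple fading-memory target filter

X = np.zeros((T, n))
x = np.zeros(n)
for t in range(T):
    x = np.tanh(W @ x + win * u[t])
    X[t] = x

wash = 50                               # discard transient (also removes the roll wrap-around)
beta, *_ = np.linalg.lstsq(X[wash:], y[wash:], rcond=None)
mse = np.mean((X[wash:] @ beta - y[wash:]) ** 2)
print(mse)   # small relative to var(y) ~ 0.15: the filter is well approximated
```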


Singular ridge regression with homoscedastic residuals: generalization error with estimated parameters

no code implementations29 May 2016 Lyudmila Grigoryeva, Juan-Pablo Ortega

This paper characterizes the conditional distribution properties of the finite sample ridge regression estimator and uses that result to evaluate total regression and generalization errors that incorporate the inaccuracies committed at the time of parameter estimation.

regression
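For context, a minimal sketch of the finite-sample ridge regression estimator the paper analyzes, together with the standard shrinkage fact (not the paper's result) that positive regularization reduces the coefficient norm relative to ordinary least squares. The design, coefficients, and regularization strength are illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)
X = rng.standard_normal((40, 5))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + 0.1 * rng.standard_normal(40)

def ridge(X, y, lam):
    # closed-form ridge estimator: (X'X + lam I)^{-1} X'y
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

b_ols = ridge(X, y, 0.0)     # lam = 0 recovers ordinary least squares
b_ridge = ridge(X, y, 5.0)
print(np.linalg.norm(b_ridge), np.linalg.norm(b_ols))  # ridge norm is smaller
```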

Nonlinear memory capacity of parallel time-delay reservoir computers in the processing of multidimensional signals

no code implementations13 Oct 2015 Lyudmila Grigoryeva, Julie Henriques, Laurent Larger, Juan-Pablo Ortega

This paper addresses the reservoir design problem in the context of delay-based reservoir computers for multidimensional input signals, parallel architectures, and real-time multitasking.
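A heavily simplified sketch of a delay-based reservoir: a single nonlinear node with delayed feedback whose state is sampled at "virtual node" positions, the input having first been spread over those positions by a random mask. The update below drops the inertia/low-pass term of physical time-delay implementations; the gains and sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)
N_virt, eta, nu = 50, 0.7, 0.3          # virtual nodes, feedback gain, input gain
mask = rng.choice([-1.0, 1.0], N_virt)  # mask spreading each input sample over the nodes

def delay_reservoir(u):
    # one tanh node with a delay loop; the delayed state feeds back into itself
    x = np.zeros(N_virt)
    states = np.zeros((len(u), N_virt))
    for t, ut in enumerate(u):
        x = np.tanh(eta * x + nu * mask * ut)  # simplified update, no inertia term
        states[t] = x
    return states

S = delay_reservoir(rng.standard_normal(200))
print(S.shape)  # (200, 50): one 50-dimensional virtual state per time step
```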
