no code implementations • 29 Sep 2023 • Christopher P. Ley, Felipe Tobar
We introduce the asynchronous graph generator (AGG), a novel graph neural network architecture for multi-channel time series which models observations as nodes on a dynamic graph and can thus perform data imputation by transductive node generation.
no code implementations • 14 Aug 2023 • Jou-Hui Ho, Felipe Tobar
Standard online change point detection (CPD) methods tend to have large false discovery rates as their detections are sensitive to outliers.
1 code implementation • 8 May 2023 • Felipe Tobar, Arnaud Robert, Jorge F. Silva
Let us consider the deconvolution problem, that is, to recover a latent source $x(\cdot)$ from the observations $\mathbf{y} = [y_1,\ldots, y_N]$ of a convolution process $y = x\star h + \eta$, where $\eta$ is an additive noise, the observations in $\mathbf{y}$ might have missing parts with respect to $y$, and the filter $h$ could be unknown.
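As a concrete sketch of the observation model only (not the paper's inference scheme), the discrete convolution $y = x\star h + \eta$ can be simulated directly; the source, filter, and noise level below are illustrative placeholders:

```python
import random

def convolve(x, h):
    """Discrete convolution: (x * h)[n] = sum_k x[k] h[n-k]."""
    n = len(x) + len(h) - 1
    y = [0.0] * n
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            y[i + j] += xi * hj
    return y

# latent source, filter, and additive noise -> observations
x = [0.0, 1.0, 0.0, -1.0, 0.0]          # latent source (illustrative)
h = [0.5, 0.5]                          # simple smoothing filter
eta = [random.gauss(0, 0.01) for _ in range(len(x) + len(h) - 1)]
y = [c + e for c, e in zip(convolve(x, h), eta)]
```

Deconvolution is the inverse problem: recovering `x` from (possibly incomplete) `y`, with `h` possibly unknown.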
1 code implementation • 11 Oct 2022 • Felipe Tobar, Elsa Cazelles, Taco de Wolff
We present a computationally efficient strategy to initialise the hyperparameters of a Gaussian process (GP) that avoids computing the likelihood function.
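For illustration, a common likelihood-free initialisation heuristic (not necessarily the strategy proposed in this paper) sets an RBF lengthscale from the median pairwise input distance and the signal variance from the empirical output variance:

```python
import statistics

def init_gp_hypers(x, y):
    """Likelihood-free heuristic initialisation for an RBF kernel:
    lengthscale = median pairwise input distance, signal variance =
    empirical output variance, noise = a small fraction of it."""
    dists = [abs(a - b) for i, a in enumerate(x) for b in x[i + 1:]]
    ell = statistics.median(dists)
    sigma2 = statistics.variance(y)
    return {"lengthscale": ell, "sigma2": sigma2, "noise2": 0.1 * sigma2}
```

These values are cheap to compute and give a gradient-based optimiser a sensible starting point.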
no code implementations • 18 Feb 2022 • Matías Altamirano, Felipe Tobar
Kernel design for Multi-output Gaussian Processes (MOGP) has received increased attention recently.
no code implementations • 30 Dec 2021 • Jorge F. Silva, Felipe Tobar, Mario Vicuña, Felipe Cordova
From this, our main result shows that a specific form of vanishing information loss (a weak notion of asymptotic informational sufficiency) implies a vanishing MPE loss (or asymptotic operational sufficiency) when considering a general family of lossy continuous representations.
no code implementations • 5 Oct 2021 • Bryan Sagredo, Sonia Español-Jiménez, Felipe Tobar
We present a framework for detecting blue whale vocalisations from acoustic submarine recordings.
no code implementations • 5 Oct 2021 • Alejandro Cuevas, Sebastián López, Danilo Mandic, Felipe Tobar
Autoregressive (AR) time series models are widely used in parametric spectral estimation (SE), where the power spectral density (PSD) of the time series is approximated by that of the \emph{best-fit} AR model, which is available in closed form.
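The closed-form PSD referred to here is $S(f) = \sigma^2 / |1 - \sum_k a_k e^{-i2\pi f k}|^2$ for an AR model $x_t = \sum_k a_k x_{t-k} + \epsilon_t$ with $\epsilon_t \sim \mathcal{N}(0, \sigma^2)$; a minimal sketch:

```python
import cmath
import math

def ar_psd(a, sigma2, f):
    """Closed-form PSD of an AR(p) model at normalised frequency f,
    where a = [a_1, ..., a_p] are the AR coefficients and sigma2 is
    the innovation variance."""
    denom = 1 - sum(ak * cmath.exp(-2j * math.pi * f * (k + 1))
                    for k, ak in enumerate(a))
    return sigma2 / abs(denom) ** 2
```

Parametric SE then reduces to fitting the AR coefficients and evaluating this expression on a frequency grid.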
1 code implementation • 5 Oct 2021 • Diego León, Felipe Tobar
In real-world settings, speech signals are almost always affected by reverberation produced by the working environment; these corrupted signals need to be \emph{dereverberated} prior to performing, e.g., speech recognition, speech-to-text conversion, compression, or general audio enhancement.
no code implementations • NeurIPS 2021 • Elsa Cazelles, Felipe Tobar, Joaquín Fontbona
We introduce weak barycenters of a family of probability distributions, based on the recently developed notion of optimal weak transport of mass by Gozlan et al. (2017) and Backhoff-Veraguas et al. (2020).
1 code implementation • 9 Nov 2020 • Felipe Tobar, Lerko Araya-Hernández, Pablo Huijse, Petar M. Djurić
Our aim is to address the lack of a principled treatment of data acquired indistinctly in the temporal and frequency domains in a way that is robust to missing or noisy observations, and that at the same time models uncertainty effectively.
no code implementations • 11 Feb 2020 • Taco de Wolff, Alejandro Cuevas, Felipe Tobar
In Financial Signal Processing, multiple time series such as financial indicators, stock prices and exchange rates are strongly coupled due to their dependence on the latent state of the market, and must therefore be analysed jointly.
1 code implementation • 9 Feb 2020 • Taco de Wolff, Alejandro Cuevas, Felipe Tobar
We present MOGPTK, a Python package for multi-channel data modelling using Gaussian processes (GP).
1 code implementation • 11 Dec 2019 • Elsa Cazelles, Arnaud Robert, Felipe Tobar
The WF distance operates by calculating the Wasserstein distance between the (normalised) power spectral densities (NPSD) of time series.
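In one dimension, the Wasserstein distance between two normalised spectra has a closed form in terms of their cumulative distribution functions. The sketch below computes the $W_1$ variant on a shared frequency grid (the excerpt does not specify which order of Wasserstein distance the WF distance uses, so this is an assumption for illustration):

```python
def wasserstein_1d(p, q, support):
    """W1 between two normalised discrete spectra p, q on a shared,
    sorted frequency grid, via W1 = integral of |F_p - F_q| df."""
    Fp = Fq = 0.0
    w = 0.0
    for i in range(len(support) - 1):
        Fp += p[i]
        Fq += q[i]
        w += abs(Fp - Fq) * (support[i + 1] - support[i])
    return w
```

Unlike pointwise divergences, this distance accounts for *how far* spectral mass must move, which is why it is natural for comparing PSDs.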
no code implementations • NeurIPS 2019 • Felipe Tobar
We propose a novel class of Gaussian processes (GPs) whose spectra have compact support, meaning that their sample trajectories are almost surely band-limited.
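A classic example of a compactly-supported spectrum is a flat (rectangular) PSD on $[-W, W]$, whose normalised covariance is a sinc function; the sketch below illustrates that construction, though the excerpt does not confirm it is the one used in the paper:

```python
import math

def sinc_kernel(t1, t2, W=0.5):
    """Stationary covariance whose spectrum is flat on [-W, W] and
    zero outside, so GP samples are band-limited:
    k(tau) = sin(2*pi*W*tau) / (2*pi*W*tau), with k(0) = 1."""
    tau = t1 - t2
    if tau == 0:
        return 1.0
    return math.sin(2 * math.pi * W * tau) / (2 * math.pi * W * tau)
```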
no code implementations • 23 Jun 2019 • Gonzalo Rios, Felipe Tobar
The Gaussian process (GP) is a nonparametric prior distribution over functions indexed by time, space, or other high-dimensional index sets.
no code implementations • 9 Feb 2019 • Cristobal Valenzuela, Felipe Tobar
We propose a Bayesian nonparametric method for low-pass filtering that can naturally handle unevenly-sampled and noise-corrupted observations.
1 code implementation • NeurIPS 2018 • Felipe Tobar
Spectral estimation (SE) aims to identify how the energy of a signal (e.g., a time series) is distributed across different frequencies.
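As a baseline illustration of SE (the classical periodogram, not the Bayesian method proposed here):

```python
import cmath
import math

def periodogram(x):
    """Periodogram estimate of how signal energy spreads over
    frequency: S(f_k) = |DFT(x)_k|^2 / N, for k = 0, ..., N-1."""
    n = len(x)
    return [abs(sum(xt * cmath.exp(-2j * math.pi * k * t / n)
                    for t, xt in enumerate(x))) ** 2 / n
            for k in range(n)]
```

The periodogram requires evenly-sampled, noise-free data and provides no uncertainty quantification, which motivates probabilistic alternatives.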
no code implementations • 28 May 2018 • Julio Backhoff-Veraguas, Joaquin Fontbona, Gonzalo Rios, Felipe Tobar
We introduce and study a novel model-selection strategy for Bayesian learning, based on optimal transport, along with its associated predictive posterior law: the Wasserstein population barycenter of the posterior law over models.
no code implementations • 19 Mar 2018 • Gonzalo Rios, Felipe Tobar
Gaussian processes (GPs) are Bayesian nonparametric generative models that provide interpretability of hyperparameters, admit closed-form expressions for training and inference, and are able to accurately represent uncertainty.
no code implementations • NeurIPS 2017 • Gabriel Parra, Felipe Tobar
Early approaches to multiple-output Gaussian processes (MOGPs) relied on linear combinations of independent, latent, single-output Gaussian processes (GPs).
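Under such a linear mixing, the cross-covariance between outputs $i$ and $j$ is $k_{ij}(t, t') = \sum_q A_{iq} A_{jq}\, k_q(t, t')$, where $A$ are the mixing weights and $k_q$ the latent kernels. A minimal sketch (the RBF latent kernels and mixing matrix below are illustrative):

```python
import math

def rbf(t1, t2, ell=1.0):
    """Squared-exponential kernel for a latent single-output GP."""
    return math.exp(-0.5 * ((t1 - t2) / ell) ** 2)

def lmc_cov(i, j, t1, t2, A, kernels):
    """Cross-covariance of outputs i and j under a linear mixing of
    independent latent GPs: sum_q A[i][q] A[j][q] k_q(t1, t2)."""
    return sum(A[i][q] * A[j][q] * k(t1, t2)
               for q, k in enumerate(kernels))
```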
no code implementations • 19 Jul 2017 • Felipe Tobar, Gonzalo Rios, Tomás Valdivia, Pablo Guerrero
The proposed model is validated in the recovery of three signals: a smooth synthetic signal, a real-world heart-rate time series and a step function, where GPMM outperformed the standard GP in terms of estimation error, uncertainty representation and recovery of the spectral content of the latent signal.
no code implementations • 13 Jul 2017 • Felipe Tobar
Kernel adaptive filters, a class of adaptive nonlinear time-series models, are known for their ability to learn expressive autoregressive patterns from sequential data.
no code implementations • 11 Jul 2017 • Iván Castro, Cristóbal Silva, Felipe Tobar
We present a probabilistic framework for both (i) determining the initial settings of kernel adaptive filters (KAFs) and (ii) constructing fully-adaptive KAFs whereby in addition to weights and dictionaries, kernel parameters are learnt sequentially.
no code implementations • NeurIPS 2015 • Felipe Tobar, Thang D. Bui, Richard E. Turner
We introduce the Gaussian Process Convolution Model (GPCM), a two-stage nonparametric generative procedure to model stationary signals as the convolution between a continuous-time white-noise process and a continuous-time linear filter drawn from a Gaussian process.
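A finite-support, discrete-time caricature of this generative step: convolve white noise with a smooth filter. Here a fixed Gaussian window stands in for the GP-drawn filter, which is an assumption for illustration only:

```python
import math
import random

def gpcm_sample(n, ell=2.0, width=10):
    """Discrete sketch of the GPCM generative procedure: white noise
    convolved with a smooth finite-support filter yields an
    (approximately) stationary signal of length n."""
    h = [math.exp(-0.5 * (k / ell) ** 2)          # smooth filter taps
         for k in range(-width, width + 1)]
    e = [random.gauss(0, 1) for _ in range(n + len(h) - 1)]
    return [sum(hk * e[t + k] for k, hk in enumerate(h))
            for t in range(n)]
```

The filter shape controls the covariance of the resulting signal, which is what the GPCM learns from data.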