1 code implementation • 4 Apr 2024 • Lorena Gallego-Viñarás, Juan Miguel Mira-Tomás, Anna Michela-Gaeta, Gerard Pinol-Ripoll, Ferrán Barbé, Pablo M. Olmos, Arrate Muñoz-Barrutia
This study examines the potential of sleep-related electroencephalography (EEG) signals acquired through polysomnography (PSG) for the early detection of Alzheimer's disease (AD).
1 code implementation • 26 Feb 2024 • José Manuel de Frutos, Pablo M. Olmos, Manuel A. Vázquez, Joaquín Míguez
In this work, we develop a discriminator-free method for training one-dimensional (1D) generative implicit models and subsequently expand this method to accommodate multivariate cases.
1 code implementation • 21 Jan 2024 • Elias Abad Rocamora, Fanghui Liu, Grigorios G. Chrysos, Pablo M. Olmos, Volkan Cevher
Our regularization term can be theoretically linked to the curvature of the loss function and is computationally cheaper than previous methods because it avoids double backpropagation.
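As an illustration of the general idea only (a sketch, not the paper's exact formulation): curvature along a perturbation can be penalized using forward evaluations alone, with no second-order backpropagation, by measuring how far the loss deviates from linearity along a segment.

```python
import numpy as np

def linearity_gap(loss, x, delta, alpha=0.5):
    """Deviation of `loss` from linearity along the segment [x, x + delta].

    Zero for any affine loss; grows with curvature. Needs only forward
    evaluations, so no double backpropagation is required.
    """
    interp = loss((1 - alpha) * x + alpha * (x + delta))
    linear = (1 - alpha) * loss(x) + alpha * loss(x + delta)
    return abs(interp - linear)

# A quadratic loss has curvature; an affine one does not.
x = np.array([1.0, -2.0])
delta = np.array([0.1, 0.3])
quadratic = lambda v: float(np.sum(v ** 2))
affine = lambda v: float(2.0 * v[0] - v[1] + 3.0)
print(linearity_gap(quadratic, x, delta) > 1e-8)  # True: curvature detected
print(linearity_gap(affine, x, delta) < 1e-12)    # True: affine => gap ~ 0
```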
no code implementations • 18 Oct 2023 • Óscar Jiménez Rama, Fernando Moreno-Pino, David Ramírez, Pablo M. Olmos
The most useful encoding is one that is inherently interpretable.
1 code implementation • 13 Feb 2023 • Batuhan Koyuncu, Pablo Sanchez-Martin, Ignacio Peis, Pablo M. Olmos, Isabel Valera
Recent approaches build on implicit neural representations (INRs) to propose generative models over function spaces.
no code implementations • 17 Jan 2023 • María Martínez-García, Fernando Moreno-Pino, Pablo M. Olmos, Antonio Artés-Rodríguez
Sleep constitutes a key indicator of human health, performance, and quality of life.
1 code implementation • 15 Nov 2022 • Antía López Galdo, Alejandro Guerrero-López, Pablo M. Olmos, María Jesús Gómez García
Railway axle maintenance is critical to avoid catastrophic failures.
no code implementations • 8 Nov 2022 • Fernando Moreno-Pino, María Martínez-García, Pablo M. Olmos, Antonio Artés-Rodríguez
Passive activity monitoring of psychiatric patients is crucial for detecting behavioural shifts in real time, giving clinicians a tool to supervise patients' evolution over time and improve treatment outcomes.
1 code implementation • 19 Jul 2022 • Alejandro Guerrero-López, Carlos Sevilla-Salcedo, Vanessa Gómez-Verdejo, Pablo M. Olmos
For this purpose, recent studies based on deep generative models merge all views into a complex nonlinear latent space that can share information among views.
no code implementations • 13 Jan 2022 • Carlos Sevilla-Salcedo, Vandad Imani, Pablo M. Olmos, Vanessa Gómez-Verdejo, Jussi Tohka
Machine learning techniques typically applied to dementia forecasting lack the capability to jointly learn several tasks, handle time-dependent heterogeneous data, and cope with missing values.
1 code implementation • 12 Jan 2022 • Fernando Moreno-Pino, Emese Sükei, Pablo M. Olmos, Antonio Artés-Rodríguez
We introduce PyHHMM, an object-oriented open-source Python implementation of Heterogeneous-Hidden Markov Models (HHMMs).
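PyHHMM's own API is not reproduced here; as a sketch of the underlying model, the scaled forward algorithm for an HMM with heterogeneous emissions (one Gaussian and one categorical feature per state, assumed conditionally independent given the state) can be written as:

```python
import numpy as np

def forward_loglik(pi, A, mu, var, disc_p, x_cont, x_disc):
    """Log-likelihood of a heterogeneous observation sequence under an HMM.

    Each hidden state emits one Gaussian feature (mu, var per state) and one
    categorical feature (disc_p: state-by-category probabilities); the two
    likelihoods multiply because they are conditionally independent given
    the state. Uses the scaled forward recursion to avoid underflow.
    """
    def emit(t):
        gauss = (np.exp(-0.5 * (x_cont[t] - mu) ** 2 / var)
                 / np.sqrt(2 * np.pi * var))
        return gauss * disc_p[:, x_disc[t]]

    alpha = pi * emit(0)
    scale = alpha.sum()
    loglik = np.log(scale)
    alpha = alpha / scale
    for t in range(1, len(x_cont)):
        alpha = (alpha @ A) * emit(t)
        scale = alpha.sum()
        loglik += np.log(scale)
        alpha = alpha / scale
    return loglik
```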
1 code implementation • 13 Jul 2021 • Fernando Moreno-Pino, Pablo M. Olmos, Antonio Artés-Rodríguez
In this paper, we propose a forecasting architecture that combines deep autoregressive models with a Spectral Attention (SA) module, which merges global and local frequency domain information in the model's embedded space.
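As a toy illustration of attention in the frequency domain (the function name, shapes, and attention rule here are illustrative assumptions, not the paper's architecture):

```python
import numpy as np

def spectral_attention(window, query):
    """Toy frequency-domain attention over one time-series window.

    Scores each rFFT component of the window against a query vector
    (one weight per frequency bin), softmaxes the scores, and inverts
    the reweighted spectrum back to the time domain.
    `query` must have length len(window) // 2 + 1 (the number of rFFT bins).
    """
    spec = np.fft.rfft(window)
    scores = np.abs(spec) * query              # similarity per frequency bin
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                   # softmax over bins
    return np.fft.irfft(weights * spec, n=len(window))
```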
1 code implementation • 12 Mar 2021 • Daniel Barrejón, Pablo M. Olmos, Antonio Artés-Rodríguez
Medical data sets are usually corrupted by noise and missing data.
1 code implementation • 15 Dec 2020 • Ignacio Peis, Pablo M. Olmos, Antonio Artés-Rodríguez
We present a novel deep generative model based on non-i.i.d.
no code implementations • 4 Dec 2020 • José Carlos Aradillas, Juan José Murillo-Fuentes, Pablo M. Olmos
In this paper, we address the problem of offline handwritten text recognition (HTR) in historical documents when few labeled samples are available and some of the training samples contain errors.
no code implementations • 1 Jun 2020 • Carlos Sevilla-Salcedo, Alejandro Guerrero-López, Pablo M. Olmos, Vanessa Gómez-Verdejo
In particular, we combine probabilistic factor analysis with what we refer to as kernelized observations, in which the model focuses on reconstructing not the data itself, but its relationship with other data points measured by a kernel function.
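A minimal sketch of the kernelized-observations idea (a deterministic eigendecomposition stand-in, akin to kernel PCA, not the paper's Bayesian model): factorize the kernel matrix rather than the data itself.

```python
import numpy as np

def kernelized_latents(X, rank, gamma=1.0):
    """Latent representation that reconstructs an RBF kernel, not the data.

    K[i, j] = exp(-gamma * ||x_i - x_j||^2) is factorized so that
    Z @ Z.T approximates K, i.e. the latents model relationships between
    data points (as measured by the kernel) instead of raw features.
    """
    sq = np.sum(X ** 2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))
    vals, vecs = np.linalg.eigh(K)            # ascending eigenvalues
    top = np.argsort(vals)[::-1][:rank]       # keep the largest `rank`
    Z = vecs[:, top] * np.sqrt(np.clip(vals[top], 0.0, None))
    return Z, K
```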
1 code implementation • 24 Jan 2020 • Carlos Sevilla-Salcedo, Vanessa Gómez-Verdejo, Pablo M. Olmos
The Bayesian approach to feature extraction, known as factor analysis (FA), has been widely studied in machine learning to obtain a latent representation of the data.
no code implementations • 6 Nov 2019 • Ignacio Peis, Pablo M. Olmos, Constanza Vera-Varela, María Luisa Barrigón, Philippe Courtet, Enrique Baca-García, Antonio Artés-Rodríguez
This article presents a novel method for predicting suicidal ideation from Electronic Health Records (EHR) and Ecological Momentary Assessment (EMA) data using deep sequential models.
1 code implementation • 4 Nov 2019 • Pablo Sánchez-Martín, Pablo M. Olmos, Fernando Perez-Cruz
We propose a novel training procedure for improving the performance of generative adversarial networks (GANs), especially for bidirectional GANs.
no code implementations • 15 Oct 2019 • Fernando Perez-Cruz, Pablo M. Olmos, Michael Minyi Zhang, Howard Huang
In this paper, we take a new approach for time of arrival geo-localization.
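For context, the classical least-squares baseline for time-of-arrival localization (a standard linearization of the range equations, not the new approach the paper proposes) looks like this:

```python
import numpy as np

def toa_localize(anchors, ranges):
    """Classical least-squares multilateration from time-of-arrival ranges.

    anchors: (N, d) known receiver positions; ranges: (N,) distances
    (time of arrival times propagation speed). Subtracting the first range
    equation from the others turns them into a linear system in the
    unknown position, solved here by least squares.
    """
    A = 2 * (anchors[1:] - anchors[0])
    b = (np.sum(anchors[1:] ** 2, axis=1) - np.sum(anchors[0] ** 2)
         - ranges[1:] ** 2 + ranges[0] ** 2)
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
true_pos = np.array([3.0, 4.0])
ranges = np.linalg.norm(anchors - true_pos, axis=1)
print(toa_localize(anchors, ranges))  # ~ [3. 4.]
```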
1 code implementation • 28 Jan 2019 • Pablo Sánchez-Martín, Pablo M. Olmos, Fernando Pérez-Cruz
We propose a new method to evaluate GANs, namely EvalGAN.
2 code implementations • 10 Jul 2018 • Alfredo Nazabal, Pablo M. Olmos, Zoubin Ghahramani, Isabel Valera
Variational autoencoders (VAEs), as well as other generative models, have been shown to be efficient and accurate for capturing the latent structure of vast amounts of complex high-dimensional data.
3 code implementations • 4 Apr 2018 • José Carlos Aradillas, Juan José Murillo-Fuentes, Pablo M. Olmos
We first investigate, for a reduced and fixed number of training samples (350 lines), how learning from a large database, IAM, can be transferred to the learning of the CLC of a smaller database, Washington.
no code implementations • ICLR 2018 • Pablo M. Olmos, Briland Hitaj, Paolo Gasti, Giuseppe Ateniese, Fernando Perez-Cruz
In this paper, we observe that even though GANs might not be able to generate samples from the underlying distribution (or at least we cannot tell), they capture some structure of the data in that high-dimensional space.
no code implementations • NeurIPS 2011 • Pablo M. Olmos, Luis Salamanca, Juan Fuentes, Fernando Pérez-Cruz
We show an application of a tree structure for approximate inference in graphical models using the expectation propagation algorithm.