1 code implementation • NeurIPS 2023 • Verónica Álvarez, Santiago Mazuelas, Jose A. Lozano
For a sequence of classification tasks that arrive over time, it is common that the tasks evolve, in the sense that consecutive tasks are often more similar to each other than to distant ones.
1 code implementation • 11 Jun 2023 • Kartheek Bondugula, Santiago Mazuelas, Aritz Pérez
High-dimensional data is common in many areas, such as health care and genomics, where the number of features can reach tens of thousands.
no code implementations • 23 May 2023 • Yuxiao Li, Santiago Mazuelas, Yuan Shen
Localization systems based on ultra-wide band (UWB) measurements can have unsatisfactory performance in harsh environments due to the presence of non-line-of-sight (NLOS) errors.
no code implementations • 23 May 2023 • Yuxiao Li, Santiago Mazuelas, Yuan Shen
Radio frequency (RF)-based techniques are widely adopted for indoor localization despite the challenges in extracting sufficient information from measurements.
no code implementations • 23 May 2023 • Yuxiao Li, Santiago Mazuelas, Yuan Shen
Ultra-wideband (UWB)-based techniques, while becoming mainstream approaches for high-accuracy positioning, tend to be challenged by ranging bias in harsh environments.
no code implementations • 23 May 2023 • Yuxiao Li, Santiago Mazuelas, Yuan Shen
In particular, we present a Bayesian model for the generative process of the received waveform, composed of latent variables for both range-related features and environment semantics.
no code implementations • 23 May 2023 • Yuxiao Li, Santiago Mazuelas, Yuan Shen
Deep generative models (DGMs) and their conditional counterparts provide powerful tools for general-purpose generative modeling of data distributions.
1 code implementation • 15 May 2023 • José I. Segovia-Martín, Santiago Mazuelas, Anqi Liu
Supervised learning is often affected by a covariate shift in which the marginal distributions of instances (covariates $x$) of training and testing samples $\mathrm{p}_\text{tr}(x)$ and $\mathrm{p}_\text{te}(x)$ are different but the label conditionals coincide.
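The covariate shift setting described here is commonly handled by importance weighting: reweighting training points by the density ratio $\mathrm{p}_\text{te}(x)/\mathrm{p}_\text{tr}(x)$ so that weighted training averages estimate test-distribution quantities. The sketch below is a generic illustration under assumed, known Gaussian densities, not the specific method of this paper.

```python
# Importance weighting under covariate shift: weight each training point by
# w(x) = p_te(x) / p_tr(x), so the weighted training average of f(x)
# approximates its expectation under the test distribution. The two
# densities are illustrative known Gaussians (training N(0,1), test N(1,1)).
import math
import random

random.seed(0)

def gauss_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# Covariates sampled from the training distribution N(0, 1).
x_tr = [random.gauss(0.0, 1.0) for _ in range(5000)]

def weight(x):
    # Density ratio p_te(x) / p_tr(x) for N(1,1) over N(0,1).
    return gauss_pdf(x, 1.0, 1.0) / gauss_pdf(x, 0.0, 1.0)

# Self-normalized importance-weighted estimate of the test mean of x.
w = [weight(x) for x in x_tr]
est = sum(wi * xi for wi, xi in zip(w, x_tr)) / sum(w)
print(round(est, 2))  # statistically close to 1.0, the test-distribution mean
```

The same weights can be passed as per-sample loss weights to a learner, turning the empirical training risk into an estimate of the test risk.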
1 code implementation • 31 May 2022 • Verónica Álvarez, Santiago Mazuelas, Jose A. Lozano
The statistical characteristics of instance-label pairs often change with time in practical scenarios of supervised classification.
no code implementations • 17 Jan 2022 • Santiago Mazuelas, Mauricio Romero, Peter Grünwald
Supervised classification techniques use training samples to learn a classification rule with small expected 0-1 loss (error probability).
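The 0-1 loss mentioned here simply charges 1 per misclassification, so its empirical average is the error rate and its expectation is the error probability; a minimal illustration (toy labels, hypothetical helper name):

```python
# 0-1 loss: 1 for each misclassification, 0 otherwise. Its empirical mean
# over a sample is the error rate; its expectation is the error probability.
def zero_one_loss(y_true, y_pred):
    return sum(1 for t, p in zip(y_true, y_pred) if t != p) / len(y_true)

y_true = [0, 1, 1, 0, 1]
y_pred = [0, 1, 0, 0, 0]
print(zero_one_loss(y_true, y_pred))  # 0.4 (2 mistakes out of 5)
```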
1 code implementation • 4 Aug 2021 • Kartheek Bondugula, Verónica Álvarez, José I. Segovia-Martín, Aritz Pérez, Santiago Mazuelas
MRCpy provides a unified interface for different variants of MRCs and follows the standards of popular Python libraries.
1 code implementation • 30 Nov 2020 • Verónica Álvarez, Santiago Mazuelas, José A. Lozano
Conventional load forecasting techniques obtain single-value load forecasts by exploiting consumption patterns of past load demand.
2 code implementations • NeurIPS 2020 • Santiago Mazuelas, Andrea Zanoni, Aritz Perez
We also present MRCs' finite-sample generalization bounds in terms of training size and smallest minimax risk, and show their competitive classification performance w.r.t.
1 code implementation • 10 Jul 2020 • Santiago Mazuelas, Yuan Shen, Aritz Pérez
The maximum entropy principle advocates evaluating events' probabilities using the distribution that maximizes entropy among those satisfying certain expectation constraints.
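As a self-contained illustration of this principle (the classic constrained-die example, not this paper's method): under a mean constraint, the maximum entropy distribution over a finite set is an exponential family $p_i \propto e^{\lambda i}$, and the multiplier $\lambda$ can be found by bisection.

```python
# Maximum entropy distribution over die faces {1,...,6} subject to the
# expectation constraint E[X] = m. The maximizer has the exponential-family
# form p_i ∝ exp(lam * i); bisection finds lam so the mean matches m.
import math

def maxent_die(m, lo=-10.0, hi=10.0, iters=100):
    faces = range(1, 7)

    def mean(lam):
        ws = [math.exp(lam * i) for i in faces]
        z = sum(ws)  # normalizing constant (partition function)
        return sum(i * w for i, w in zip(faces, ws)) / z

    # mean(lam) is strictly increasing in lam, so bisection applies.
    for _ in range(iters):
        mid = (lo + hi) / 2
        if mean(mid) < m:
            lo = mid
        else:
            hi = mid
    lam = (lo + hi) / 2
    ws = [math.exp(lam * i) for i in faces]
    z = sum(ws)
    return [w / z for w in ws]

p = maxent_die(4.5)
print([round(pi, 3) for pi in p])  # skewed toward high faces, mean 4.5
```

With the unconstraining mean m = 3.5 the multiplier is 0 and the result is the uniform distribution, as the principle predicts.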
no code implementations • 2 Feb 2019 • Santiago Mazuelas, Andrea Zanoni, Aritz Perez
Conventional techniques for supervised classification constrain the set of classification rules considered and use surrogate losses in place of the classification 0-1 loss.
no code implementations • 24 Jan 2019 • Santiago Mazuelas, Aritz Perez
Different types of training data have led to numerous schemes for supervised classification.